What’s our grade?

The results are in. How well did School Perceptions predict referendum outcomes this spring?


Let me be frank right off the bat: I’m not good at writing blog posts like today’s.


Speaking on behalf of the entire team at School Perceptions, we’re very proud of the work we do, but we’re also not the kind of group to heavily tout it either. Maybe it’s because we all have deep Midwestern roots, and we can all collectively envision our respective grandmothers scolding us for tooting our own horn. But let’s proceed anyhow.


A few weeks back, I wrote a short analysis of the April 2021 Wisconsin public school district referendum results. This week, I look specifically at School Perceptions in the context of these results.

How’d we do? In other words, did what we thought was going to happen based on the survey results actually occur?


First, let’s get a few things out of the way. School Perceptions is not in the advocacy or campaign business. Consultants exist to do those things, and we’re not those people.


We’re an objective, independent firm that will be straight with you. If we believe a school referendum is going to pass based on the survey results that we collect, we’ll tell you that. If we believe a school referendum is going to fail based on the survey results that we collect, we’ll tell you that. In other words, we’ll give you unbiased information that you can use to make educated decisions from an informed perspective.


That last part is at the crux of what we do, so I’m going to repeat it. We’ll give you unbiased information that you can use to make educated decisions from an informed perspective. It’s what we’ve done for the last 20 years and what we’ll continue to do ahead.


When I first joined School Perceptions last year, I was on a call with Bill, our founder and president, and I heard him say that our predictions were very, very accurate. At the time, I had just finished graduate school doing my own predictive analytics and was filled with questions.


How accurate? How’d we do? How do we know?


(Bill, when you read this, you were right…)


Using the survey data we helped collect, we correctly predicted 95.4 percent of referendum outcomes this spring.


(We’re going to get a bit math-y for a second, so if you want to skip ahead, now’s your chance!)

The table above presents various metrics related to the quality of our predictions. (If your math is a little rusty, I put a glossary at the bottom.)


In short, we want the green rows to be high, the red rows to be low, and the blue rows to total 100 percent. The two blue rows sum to the accuracy (the proportion of referendums we believed would pass that did pass + the proportion we believed would fail that did fail).


Data isn’t helpful if it’s not accurate, so we’re proud to hang our hat on this. Why are we able to predict these outcomes so well? Because of our six building blocks.


1. A proven process

2. Credibility

3. A better strategy

4. Secure, user-friendly software

5. High response rates

6. A commitment to client service


If you’d like to learn more about our community surveys—either referendum-related or strategic planning—please contact us! We’d be happy to provide you with additional information.


Glossary


True Positive: the number of cases predicted to pass that were observed to pass.

False Positive: the number of cases predicted to pass that were observed to fail.

True Negative: the number of cases predicted to fail that were observed to fail.

False Negative: the number of cases predicted to fail that were observed to pass.

Sensitivity (True Positive Rate, TPR): true passages divided by observed passages.

Specificity (True Negative Rate, TNR): true failures divided by observed failures.

Accuracy: true passages plus true failures, divided by the total number of cases.

False Positive Rate (FPR): false passages divided by observed failures, or 1 – TNR.

Positive Predictive Value (PPV): true passages divided by predicted passages.

Negative Predictive Value (NPV): true failures divided by predicted failures.
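If it helps to see the glossary in action, here’s a minimal sketch of how these metrics fall out of a confusion matrix. The counts below are hypothetical, made up purely for illustration; they are not School Perceptions’ actual spring results.

```python
# Hypothetical referendum counts (not our actual data):
tp = 18  # predicted to pass, observed to pass
fp = 1   # predicted to pass, observed to fail
tn = 3   # predicted to fail, observed to fail
fn = 0   # predicted to fail, observed to pass

sensitivity = tp / (tp + fn)                 # true passages / observed passages
specificity = tn / (tn + fp)                 # true failures / observed failures
accuracy = (tp + tn) / (tp + fp + tn + fn)   # true passages + true failures, over all cases
false_positive_rate = fp / (fp + tn)         # equivalently, 1 - specificity
ppv = tp / (tp + fp)                         # true passages / predicted passages
npv = tn / (tn + fn)                         # true failures / predicted failures

print(f"Sensitivity: {sensitivity:.1%}")
print(f"Specificity: {specificity:.1%}")
print(f"Accuracy:    {accuracy:.1%}")
```

Note how the two “blue row” quantities (true passages and true failures, each as a share of all cases) add up to the accuracy.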

The School Perceptions Blog and Resource Center features the voices of our team members. This post was written by Rob DeMeuse, Research Director & Project Manager.
