Education is obsessed with data. It determines what success looks like for schools, administrators, teachers, and students. The metrics of interest, however, change constantly, and when one or two metrics have an outsized influence on funding and employment, it can lead all those involved to over-optimize the school and classroom to maximize these metrics at the expense of other meaningful aspects of quality education.
This is another way of expressing Campbell’s law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” The canonical example Campbell himself raised was high-stakes testing. That was 44 years ago, and although the general landscape has changed considerably, it has drifted toward ever more standardization and a culture of optimizing the processes of schools, districts, and states to game the tests.
This optimization looks different at every level, but as a heuristic for thinking about the organization of schools and classrooms, it can be surprisingly effective in making sense of most systems. As a geometry teacher and TFA corps member, I am personally measured against four different metrics: standardized unit math tests from the school network I work for, the ACT math test, the State Geometry exam, and student evaluations.
Compared to my own experience in high school, my class looks completely data obsessed. I show my students their collective data on all of their exams. As a class we talk through how we did and why we did that way. We constantly practice and re-practice the tests, and students get long review days to prepare for most tests in my class.
In my school I talk to my boss biweekly about test scores. My students’ scores on the network-wide exams are presented quarterly in front of all the other teachers. Then I meet semesterly with another coach to talk about my student evaluations, which are also presented in front of my cohort of other TFA teachers. My class is fully optimized around preparing for the three different tests. The student evaluations have become completely secondary (and I have already written about them being biased, racist, sexist, and completely uncorrelated with student learning).
On a school and system-wide level, most TN schools I know focus primarily on the percent of students who score 21 or above on the ACT. This is the cutoff for getting any financial support from the state for college, and it is often treated as the threshold for being considered college ready. This has led more schools to give practice ACT exams, but because the metric has become such a central indicator of the performance of many schools and districts, it is also used to decide which students to prioritize. Once you clear the threshold of 21 you already count toward the metric, so although you will still be pushed to improve, the energy in the school shifts to students in the 18–20 range who may feasibly make the bump into the 21+ tier.
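The incentive described above falls straight out of the arithmetic of a pass-rate metric. Here is a minimal sketch with hypothetical scores (the numbers and the `pct_at_or_above` helper are my own illustration, not any real school's data): a point of improvement only moves the metric when it pushes a student across the cutoff, so effort spent on students far below or already above 21 registers as nothing.

```python
# Toy illustration (hypothetical scores) of why a pass-rate metric
# concentrates attention on students just below the cutoff.

CUTOFF = 21

def pct_at_or_above(scores, cutoff=CUTOFF):
    """Share of students at or above the cutoff -- the school's metric."""
    return sum(s >= cutoff for s in scores) / len(scores)

scores = [14, 16, 18, 19, 20, 20, 22, 25, 28]  # hypothetical ACT scores

# One extra point of improvement, spent on different students:
help_low  = [s + 1 if s == 14 else s for s in scores]  # far below the cutoff
help_edge = [s + 1 if s == 20 else s for s in scores]  # just below the cutoff
help_high = [s + 1 if s == 25 else s for s in scores]  # already above it

print(pct_at_or_above(scores))     # baseline: 3 of 9 students count
print(pct_at_or_above(help_low))   # unchanged -- no one crossed 21
print(pct_at_or_above(help_edge))  # jumps -- both 20s crossed into 21+
print(pct_at_or_above(help_high))  # unchanged -- already counted
```

The same real gain in learning is invisible or valuable to the school depending entirely on where the student started, which is the gaming pressure Campbell's law predicts.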
The focus on ACT alone shows what we value. College preparation is the central goal of the system, and even though the percentage of 21+ is a pretty blunt metric, it’s close enough to measuring college preparation that we’ve built an entire system around it.
Now I don’t mean to say that this is all necessarily a bad thing. It’s just an unusual thing that I am trying to make sense of as a teacher. I’m not sure what the exact alternatives are or whether they are better. The central question seems to be whether to structure your entire system around optimizing a few metrics. If you don’t, you’re left with some Taleb-ian localism — to be honest, I’m not sure what that would look like for a structured education system, but it would almost certainly look bad by the success measures we currently have. But if you do optimize to metrics, you’re faced with the problem of how many metrics and which ones. For now, the ACT and college prep are the focus of optimization, but it could just as easily be other gameable metrics.
For instance, an education system that valued employment and financial stability could be built around the rate of graduates landing $40,000+ jobs. I have an old friend at Ohio State who studied welding education. He became a CTE teacher and got his students certifications in welding in their junior and senior years. He made $38,000. His students made $42,000+ when they graduated. To some that may sound like a failure, in that those kids didn’t go to college and started working immediately. To me it sounds like a success. It all depends on the values, but even that sort of metric is gameable.
The important cynical point is that the gaming, corruption, and over-optimization of social metrics like testing will always be there, no matter what the metric is. And since educational performance is so complicated, we are likely to keep focusing on metrics that are better predicted by external factors outside the control of teachers, schools, or districts.