The rating is the hardest part
Pardon my bad Tom Petty “pun,” as I was going to be more direct with this title: “School ratings: F—ed data (as if you didn’t know).”
Stories have been piling up recently that yet again illuminate the hopelessness of school rankings.
One of the most compelling is a Columbia University professor’s finding that the data that plopped his own institution at the top of the U.S. News rankings pile were dubious. In short, a math professor, Michael Thaddeus, showed that Columbia “had provided fraudulent data to the magazine,” and the magazine unranked it. Then, as Akil Bello, director of the advocacy group FairTest, wrote in October in The Chronicle of Higher Education, after Columbia provided only some updated data, the editors “assigned competitive set values.” Bello says of what the editors did: “In other words, the magazine made up data to keep a popular university in its rankings.”
In another snowballing story, numerous highly ranked law schools are withdrawing from the rankings altogether.
The problem is fundamental: The moment you try to make sense of rating schools, you’re in a hall of mirrors.
There are things I suppose you might measure about schools. Or are there? I was going to start my list with an easy “number of teachers” metric and then paused, realizing even a seemingly straightforward stat like that might need exploration: Full-time or part-time? Tenure-track or not? How is teaching valued at the institution, and how, in fact, is that measured? And then to think that data will turn into a useful value for a particular human being… geez, when you put it like that…
Sports are fun, and it’s no wonder we’re so obsessed with them in our fractured society. There is an objective, agreed-upon (in the vast majority of cases) outcome. Elections are like that too. Someone wins. Someone loses. The outcome is clear and accepted.
But almost anything with even a shade more depth doesn’t lend itself to the “clear and accepted.” Look at the effort online dating systems have put into creating match algorithms (to be clear: not that I would know).
What’s the best place to live? The best ice cream? Greatest rock band? (Alright, so that’s Led Zeppelin. Sorry.) These are fun listicles that provide hours (and hours) of harmless argument. How about your best friend? Your perfect soulmate? Those things are circumstantial. Schools are multi-layered, complex entities like that.
In “The Rankings Farce,” Reed College president Colin Diver powerfully decries this “rankocracy,” saying “the entire structure rests on mostly unaudited, self-reported information of dubious reliability.” Diver lists not just U.S. News but other publications’ efforts to rank colleges and says, “Taken individually, most of the factors are plausibly relevant to an evaluation of colleges. But one can readily see that any process purporting to produce a single comprehensive ranking of best colleges rests on a very shaky foundation.”
Diver outlines six problems with such systems, ranging from the selection of variables to the weighting of variables (as an example, U.S. News, he said, “decreed” that six-year graduation rates were worth “precisely” 17.6%) to the overall issue of having the “chutzpah” to claim that an arbitrary, ever-changing formula “can produce a single, all-purpose measure of institutional quality.”
But here we are, almost 2023, and this is still the way many people talk about not just colleges but schools all the way down the line. “How do you unring the bell of the socially accepted rankings?” Bello said in another Chronicle of Higher Education article, “Do the ‘U.S. News’ Rankings Rely on Dubious Data?” “That’s the biggest challenge right now — is that the ‘These colleges are good’ and ‘These colleges are bad’ has entered the ether of the higher-ed admissions landscape.”
You’re not picking a taco. You’re not buying a potato peeler. You’re not even buying a car. When it comes to selecting a school, you’re making a complicated decision. Don’t let anyone fool you otherwise.