The legal academy is on a precipice. As people seek to figure out exactly the mystery of what academics do, they want to come up with more metrics to determine which academics are good, and which academics are not. It’s like if Santa Claus were a management consultant with a basic understanding of stats.
To some degree, academia has endured measurement in terms of student evaluations. The good professors are the ones with good evaluations, and the bad ones are the ones without them. It’s only recently that people have discovered what many have known for decades: Student evaluations are rigged, and you can pretty much guess the direction of the biases. Despite that, we still use them, apparently because measuring something poorly is way better than not measuring it at all.
Now, professors and university administrators are becoming more focused on measuring the impact of scholars. The term “scholarly impact” describes the complicated system of measuring whose work makes a difference, at least according to whatever metrics are used. In the old days, it was SSRN. Now, with U.S. News teaming up with HeinOnline, a new king of metrics is in town. And you’d be kidding yourself if you think it won’t be used to target some untenured professors and chide some tenured professors who think scholarly impact might be measured in a more meaningful way (or not at all). My coauthor and I have said our piece about these measures of “quality” here.
But universities are starting to measure faculty productivity. The alleged goal is quality, but I’m thinking the real goal is to produce “more stuff.”
The notion that we ought to measure output isn’t at all new. A common theme in the labor history of America is that firms attempt to increase worker productivity to make more profits, all the while competition ensures that wages remain stagnant. The notion that we ought to maximize our scholarly impact isn’t new either. Economists might term that efficiency (or engineers might call it a constrained optimization problem).
The notion of efficiency, however, has always been skewed. In manufacturing, management would attempt to control production using technology. As one article describes, “In the hands of Taylorist managers and designed to be of use to them, new technology often became the prime means of controlling production. After determining the one best way to do a job, managers searched for even greater production efficiencies in the form of new technology, which was developed and sold to their respective firms by others.” The article describes the process of using MOOCs (massive open online clusterf*cks) as Taylorism, but I think that the quest to assure there is some uniform metric of scholarly output serves the same purposes.
Universities seem keen to measure the worth of faculty endeavors using quantity as the goal. Some of these may not even be related to the metrics used by the university to determine tenure. So, faculty members must a) please their administrators by meeting those output metrics, b) please their school and perhaps their own egos by playing the scholarly impact “quality” metrics game, and c) also play a legitimate role in making the world a better place if they so choose. I’m not saying that these are all mutually exclusive, but the purposes of pushing out more stuff and getting a good impact score are not necessarily the same as making the world a better place, which was the ultimate goal of university education.
In many of the university metrics I’ve encountered across the lands, the university goals are about attempting to increase “stuff” and collect a not insubstantial amount of data about the rate of change in the promulgation of that stuff. The quest to increase production is a story often heard in industrialization, and it usually leads to automation, deskilling, and sometimes the eventual ruin of the industry. But hey, maybe this time it will be different? Usually the goals are described in terms of pursuing excellence or some other qualitative goal, as measured solely through the quantity of stuff. In short, we can be assured of our quality because we produce a lot of stuff.
But the quantity of “stuff” university administrators seem to want doesn’t necessarily mesh with their other desires. Please be on this committee. Please join us for this fundraising event. Please engage in a lot of service. We know how this game plays out, because we know who does the disproportionate share of the service. We know who gets sought after for that service. And thus we know who will be playing the “measuring stuff” game with their legs tied down with service weights.
And the other cost of this “measuring stuff” is academic freedom. For example, suppose my research is legal archaeology. That takes time and effort, perhaps to a greater degree than other methods. Thus, I might publish less. The university metrics suggest I’m less productive than my colleagues. Cool cool. (Note: Before I get emails, I don’t actually work in the legal archaeology realm.)
Making the world a better place might mean spending more time working with students, or writing something not counted in the “stuff” measure that targets the general population. In short, I fear that instead of focusing on making the world a better place, measuring “stuff” will lead to a more conformist academy (if that’s possible) and one whose direction has been handed over to university administrators and external data miners.
In other words, the notion that the same systems deployed to assure I can buy a cheap TV (one that also breaks more frequently) will somehow lead to improved quality in higher education is a pipe dream. And I, for one, won’t be playing this rigged game.
LawProfBlawg is an anonymous professor at a top 100 law school. You can see more of his musings here. He is way funnier on social media, he claims. Please follow him on Twitter (@lawprofblawg) or Facebook. Email him at lawprofblawg@gmail.com.