Tuesday, April 1, 2008

On US News law school rankings

Professor Brian Leiter, late of Texas Law, now at the University of Chicago, is circulating an open letter to US News critical of its methodology in ranking law schools. [Link above] Leiter has long been an influential critic of these ratings, and has given more thought to the subject than most. I agree with many, although not quite all, of his points, but would raise some additional, more fundamental concerns. Here is a slightly edited version of an email I just sent to my UW law colleagues on the subject:

I find much to agree with in Leiter's specific comments and criticisms, but would go beyond them (as he probably would as well--we all have our priorities and areas of focus).
Perhaps a broader discussion among legal academics would be constructive...

For example: is it realistic, or constructive, to assume that all law schools try to do (let alone succeed in doing) the same thing? In its undergraduate ratings, US News differentiates national universities from national liberal arts colleges from various regional categories. In its medical ratings, US News distinguishes research-based programs from primary care programs (many schools are rated separately on each). US News' ratings of hospitals recognize that the best place to get a liver transplant is not necessarily the best place for a cosmetic procedure, or for emergency care. Is it time to recognize parallel differences among legal institutions as well?

Is it realistic to believe that faculty at leading national law schools (or perhaps anywhere else) can make meaningful distinctions among, say, the bottom 125-150 (out of 184) ranked law schools? On the basis of what? (One presumes prior US News reports--the "echo chamber" to which Leiter refers?)

Are there many judges or practitioners out there (among the relatively few who return US News surveys) who have more than slight anecdotal experience with programs or graduates of more than a few dozen schools? What is the basis for their rankings of other institutions?

The gradations in scores of schools between about 25 and 50 are very fine, and these ratings tend to bounce around a fair bit from year to year, with significant ripple effects (see public relations releases, news stories, firings of Deans, etc.). Is there any reason to believe these are more than random fluctuations--or the results of the various gaming strategies to which Leiter makes reference? Can anyone really be confident that a difference of even ten slots in this range reflects anything consequential to the educational opportunities of students, or that the consequences of such differences in where students attend, or where they are employed, correspond to anything real?

Is it clear to anyone that the criteria applicable in differentiating meaningfully between slot #83 and slot #157 are the same criteria one would want to apply to ranking the top 10, or 15, or 20 schools?

To get more fundamental still--and this is a point on which I differ from Leiter--how clear is it that reputational differences as measured by citation analysis have anything much to do with the educational experience of students at many or most particular schools (or, for that matter, differences relevant to students following different professional paths in the law and adjacent fields--not everyone wants to be a highly paid wage slave at a legal factory)? Might one perhaps think that curricular emphases, styles of teaching and evaluation, emphasis on practice skills, clinical experiences, etc. have more to do with the quality of professional training than levels of pay for support staff (although those are not irrelevant, to the extent that undervalued and demoralized staff can affect the learning environment for both students and faculty)?

Might a more fundamental debate about such questions prepare the way for something better, and potentially more meaningful to students, and less destructive of other pedagogic values in what we try to accomplish as law teachers and as a law school? ...

Comments are welcome.
