No ranking or 'league' table is ever going to be accurate. Such attempts at ranking institutions are exercises in futility. This broad criticism applies whether the ranking is produced by the Government or the private sector.
Let me explain my reasons:
A. Lack of a Common Moderating Device
Unlike schools, universities have no common, objective moderating device against which all graduates, across all disciplines, all degrees and all institutions, can be measured. This point is particularly underlined when international comparisons are attempted.
B. Methodology
The methodology used will always be biased towards whatever the particular researchers think most important: e.g. student-teacher ratio, the number of PhDs among academic staff, the number of journal publications, student satisfaction, graduate starting salaries, government research grants, etc.
As a result of this focus, the ranking is really only good (if it is good at all) for providing data on those particular aspects of the institutions, and those aspects alone. That is, a ranking that is biased in that it allocates a greater weighting to, say, student-teacher ratio can, at most, provide guidance on that point, not on the quality of the institution as a whole. (Any such qualitative judgement is a subjective extrapolation from that data.)
Often, the 'surveys' producing these rankings are completed by a small sample of students, graduates or (as the Times survey earlier this year was) academics within the universities themselves. As such, the survey exposes itself to being unduly influenced by the fickleness of human subjectivity.
C. Application of 'Results' to Universities as a Whole
I said above that there is no moderator against which to assess student outcomes. The same can be said even of students within the same university. No objective, analytical comparison can be drawn between students studying in completely different degree programmes. For example, no one can say that a particular university's outcomes for a veterinary science student, a medical student, an arts student and a commerce student will be the same, or even similar.
Each of those students will carry a different workload, experience different staff-student ratios, different levels of industry exposure and quantitatively and qualitatively different resources, and will engage in campus life to differing extents. To try to apply one general label is self-evidently ridiculous. Yet this is precisely what general league tables attempt to do.
The league table mentality is absurd. No definitive ranking of universities generally can ever be produced because, quite simply, the methodological barriers are too high. Ranking A might place Uni X at the top based on student satisfaction. Ranking B might place Uni Y at the top based on international recognition. Ranking C might place Uni Z at the top based on staff-student ratios. Who is to say that any of those is the right criterion by which universities should be ranked? No one can say with any generally applicable certainty.
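To make the arbitrariness concrete, here is a minimal sketch in Python, using entirely invented scores and hypothetical university names, of how three different weighting schemes applied to the exact same underlying data each crown a different 'top' university:

```python
# Toy illustration (all numbers invented): the choice of weighting,
# not the underlying data, determines who 'wins' the ranking.

# Hypothetical normalised scores (0-1) for three universities on three
# criteria: [student satisfaction, international recognition, staff-student ratio].
scores = {
    "Uni X": [0.9, 0.5, 0.6],
    "Uni Y": [0.5, 0.9, 0.6],
    "Uni Z": [0.6, 0.5, 0.9],
}

# Three equally arbitrary weighting schemes over the same three criteria.
weightings = {
    "Ranking A (satisfaction-heavy)": [0.6, 0.2, 0.2],
    "Ranking B (recognition-heavy)":  [0.2, 0.6, 0.2],
    "Ranking C (staff-ratio-heavy)":  [0.2, 0.2, 0.6],
}

for name, weights in weightings.items():
    # Weighted sum for each university under this scheme.
    totals = {
        uni: sum(w * s for w, s in zip(weights, criteria))
        for uni, criteria in scores.items()
    }
    top = max(totals, key=totals.get)
    print(f"{name}: top = {top}")
# Ranking A crowns Uni X, Ranking B crowns Uni Y, Ranking C crowns Uni Z.
```

Same data, three different winners; the only thing that changed was the weighting. Any claim that one of the three is 'the best' is a claim about the weighting, not about the universities.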
Furthermore, who can say that Ranking A placing Uni X at the top for criterion F generally is any indication of how criterion F is experienced in degree Q specifically? Again, no one can. The methodological and interpretive hurdles are too numerous and too high for meaningful league tables ever to be devised.
Let's all try to think a bit more rationally and dispose of this ranking nonsense once and for all.