We have gathered new data on employers' opinions of universities around the world. This has allowed us to widen the pool of information we present, but we have gone further and deepened the pool as well. This year's tables are virtually free of gaps in data. And because we have collected a wealth of data on institutions outside the top 200, we are confident that no institution that should be in these tables has been overlooked. These efforts have resulted in what we believe is the world's best guide to the standing of top universities.
The core of our analysis is peer review, which has long been accepted in academic life and across social research as the most reliable means of gauging institutional quality. The sample used to compile the peer-review column of this table comprises 2,375 research-active academics [i.e. generally not those involved in the teaching of undergraduates - MC]. They were chosen by QS Quacquarelli Symonds, consultants to The Times Higher and experts in international rankings of MBA courses. The selection was weighted so that just under a third of the academics came from each of the world's major economic regions - Asia, Europe and North America - with a smaller number from Africa and Latin America. It also had to yield roughly equal numbers from the main spheres of academic life: science, technology, biomedicine, social sciences and the arts. The selected academics were asked to name the top universities in the subject areas and the geographical regions in which they have expertise.
Data collected in 2005 were supplemented by opinions from our 2004 survey, in which the same question was asked; no individual's opinion was counted twice. We believe that this two-year rolling average provides improved statistical reliability.
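A minimal sketch of how two survey waves might be combined without counting anyone twice, assuming each response carries a respondent identifier; the data structures and field names here are illustrative, not QS's actual system:

```python
def merge_waves(responses_2005, responses_2004):
    """Combine two waves of peer-review responses into one pool.

    Each response is a dict with a 'respondent_id' key. Where the
    same academic answered in both years, only the more recent
    (2005) opinion is retained, so no one is counted twice.
    """
    seen = {r["respondent_id"] for r in responses_2005}
    carried_forward = [r for r in responses_2004 if r["respondent_id"] not in seen]
    return responses_2005 + carried_forward
```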
The information derived from the responses was used to generate the faculty-level data on the top institutions for specific subject areas published in The Times Higher this month and was aggregated to produce the peer-review column of the main table in this supplement. We are confident that the sample is large enough and sufficiently well chosen for its aggregate opinion to be statistically valid.
The point has been made that peer reviewers might be more likely to cite large old universities, especially those with the name of a major city in their titles, than smaller, less familiar ones. But the peers are all experts in their fields; and in their responses they rated as excellent more than 500 universities, some of which were unknown even to staff of The Times Higher. [That 2,375 people managed to cite only 'more than 500' of these lesser-known institutions does not rebut the preceding admission of methodological weakness. The 'big city' influence could also extend to university presses - often the author of a book will be identified with the university that published the book, which, given that Melbourne really has the only large-scale university press in the country, is bound to skew results in its favour]
The peer-review data account for 40 per cent of the available score in the World University Rankings. This is 10 percentage points lower than in 2004 because of the addition of data on the opinions of major international employers of graduates. As with the other columns we show, and in an improvement on the 2004 presentation, we have normalised these data so that the top institution scores 100.
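The normalisation itself is straightforward: each raw column score is rescaled against the best-performing institution. A minimal sketch, with invented figures:

```python
def normalise(scores):
    """Rescale raw scores so the top institution scores exactly 100."""
    top = max(scores.values())
    return {name: 100.0 * value / top for name, value in scores.items()}

# Example with made-up figures:
raw = {"University A": 8.4, "University B": 6.3, "University C": 2.1}
print(normalise(raw))
# {'University A': 100.0, 'University B': 75.0, 'University C': 25.0}
```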
Two other columns of data in this table account for 20 per cent each of the final score for each university listed. One is the number of citations for academic papers generated by each staff member. This has been compiled from staff numbers collected by QS and citations data supplied by Evidence Ltd on the basis of data from Thomson Scientific. The citations data, which come from Thomson's Essential Science Indicators, cover the period between 1995 and 2005. A lower cut-off of 5,000 papers has been applied to eliminate small specialist institutions. The measure provides a clear indication of universities' research prowess, but it has some systematic biases. It disadvantages some institutions, especially those in Asia, that publish few papers in the high-impact journals surveyed.
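A sketch of how such a measure can be computed, with the 5,000-paper floor applied before the ratio is taken; all names and figures here are hypothetical:

```python
PAPER_FLOOR = 5000  # institutions below this output are excluded

def citations_per_staff(institutions):
    """Return citations per staff member for qualifying institutions.

    Each entry maps an institution name to (papers, citations, staff).
    Institutions publishing fewer than PAPER_FLOOR papers over the
    1995-2005 window are dropped as small specialist outfits.
    """
    return {
        name: citations / staff
        for name, (papers, citations, staff) in institutions.items()
        if papers >= PAPER_FLOOR
    }
```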
Teaching is, of course, central to the university mission. To gauge it, we consider a classic measure of commitment to teaching, the staff-to-student ratio, which is worth up to 20 percentage points. Like citations per staff member, this measure depends on accurate staff numbers. We believe we have improved the accuracy of the figures we collect. Nevertheless, any inconsistency is to some extent self-correcting because exaggerating staff numbers would increase a university's staff-to-student ratio but reduce its citations per staff member.
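A toy example with invented figures makes this trade-off concrete: inflating the staff count helps one indicator and hurts the other.

```python
students, citations = 20000, 50000

for staff in (1000, 1500):  # an honest figure versus an exaggerated one
    ratio = staff / students       # staff-to-student ratio (higher looks better)
    per_staff = citations / staff  # citations per staff (higher looks better)
    print(f"staff={staff}: staff/student={ratio:.3f}, citations/staff={per_staff:.1f}")

# staff=1000: staff/student=0.050, citations/staff=50.0
# staff=1500: staff/student=0.075, citations/staff=33.3
```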
The principal motivation for the World University Rankings is our realisation that although scholarship has always been international, higher education is becoming one of the most global sectors of the world economy. The final two columns of data we show, each accounting for 5 per cent of the total, attempt to quantify universities' international orientation. The first reflects their percentage of international staff and the second their percentage of international students.
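Taken together, the overall score is a simple weighted sum of the six normalised columns. The 10 per cent weight for the employer-review column is implied by the figures above rather than stated outright, so this sketch assumes it:

```python
WEIGHTS = {
    "peer_review": 0.40,
    "employer_review": 0.10,       # assumed: the share peer review gave up since 2004
    "citations_per_staff": 0.20,
    "staff_student_ratio": 0.20,
    "international_staff": 0.05,
    "international_students": 0.05,
}

def overall_score(columns):
    """Weighted sum of an institution's normalised column scores (each 0-100)."""
    return sum(WEIGHTS[k] * columns[k] for k in WEIGHTS)

# Example with made-up column scores:
example = {"peer_review": 100, "employer_review": 80, "citations_per_staff": 60,
           "staff_student_ratio": 70, "international_staff": 50,
           "international_students": 90}
print(overall_score(example))  # 81.0
```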
Our aim in these tables is to rank large general universities. We have not counted institutions that do not teach undergraduates. This removes from the listing a number of high-prestige institutions, especially in medicine and business. We have, however, included universities that teach a broad but not a full complement of subjects. These range from the London School of Economics to a large number of technology universities.
A frequent query about the 2004 rankings concerned the level of detail they provided. In general, we have tried to tease apart large federal universities, such as California or London, that consist of many essentially free-standing colleges. But we have not been able to disaggregate the many US state universities that boast more than one campus. Doing so would have complicated the task too much.
We have managed to remove some ambiguities that were present last year by distinguishing between the Flemish-speaking and Francophone institutions of Belgium and by providing clearer labelling of the many universities of Paris and other French cities.
As research on composite tables such as these has shown, it is important to read them with care. It would be wrong to attribute too much weight to the small differences in overall scores between universities lower down the rankings.