Tuesday, 6 August 2013

The Complete Efficiency Rankings


At last. Here are the complete Efficiency Rankings, measuring the efficiency with which universities turn inputs into citations. I am using the method of Professor Dirk van Damme, which is to divide the scores for Citations: Research Influence in the THE World University Rankings by the scores for Research: Volume, Income and Reputation. Here is the method as cited in a previous post:

"The input indicator takes scaled and normalised measures of research income and volume into account, and also considers reputation, while the output indicator looks at citations to institutional papers in Thomson Reuters’ Web of Science database, normalised for subject differences.
Professor van Damme said that the results - which show that university systems outside the Anglo-American elite are able to realise and increase outputs with much lower levels of input - did not surprise him.
“For example, Switzerland really invests in the right types of research. It has a few universities in which it concentrates resources, and they do very well,” he said.
Previous studies have found the UK to have the most efficient research system on measures of citation per researcher and per unit of spending.
But Professor van Damme explained that under his approach, productivity - output per staff member - was included as an input.
“With efficiency I mean the total research capacity of an institution, including its productivity, divided by its impact. The UK is not doing badly at all, but other countries are doing better, such as Ireland, which has a very low research score but a good citations score,” he said.
Given the severity of the country’s economic crisis, Ireland’s success was particularly impressive, he said.
“I think it is really conscious of the effort it has to make to maintain its position and is doing so.”
Low efficiency scores for China and South Korea reflected the countries’ problems in translating their huge investment into outputs, he added."
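A minimal sketch of the calculation, with invented indicator scores standing in for the real THE data:

```python
# Illustrative sketch of the van Damme efficiency calculation: divide
# each university's Citations score by its Research score and rank by
# the ratio. The scores below are invented, not actual THE data.
scores = {
    "University A": {"citations": 90.0, "research": 30.0},
    "University B": {"citations": 80.0, "research": 80.0},
    "University C": {"citations": 70.0, "research": 20.0},
}

efficiency = {name: s["citations"] / s["research"] for name, s in scores.items()}

# Highest ratio = most "efficient" at turning research inputs into citations.
ranking = sorted(efficiency, key=efficiency.get, reverse=True)
```

Note that a university with a modest citations score but a very low research score (University C here) comes out ahead of one with high scores on both.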




1. Tokyo Metropolitan
2. Moscow Engineering Physics Institute
3. Florida Institute of Technology
4. Southern Methodist
5. Hertfordshire
6. Portsmouth
7= King Mongkut's University of Technology
7= Vigo
9. Creighton
10. Fribourg
11. King Abdulaziz
12. University of the Andes
13. Trieste
14. Renmin
15. Medical University of Vienna
16. Polytechnic University of Valencia
17= Bayreuth
18= Montana
19. Mainz
20. Ferrara
21. Drexel
22. Valencia
23. Linz
24. Crete
25. Colorado School of Mines
26. Technical University of Dresden
27. Innsbruck
28. Nurnberg
29= Dauphine
29= Wake Forest
29= Maryland Baltimore County
32. St George's London
33. William and Mary College
34. Hong Kong Baptist
35. Basel
36. Texas San Antonio
37. Duisburg
38. Lyon 1
39. Wurzburg
40. Charles Darwin
41. Wayne State
42. Northeastern
43. Bicocca
44. Royal Holloway
45. Koc
46. Georgia University of Health Science
47. Modena
48. Dundee
49. Southern Denmark
50= IIT Roorkee
50= Pompeu Fabra
52. Graz
53= Oregon
53= Diderot
55. Bielefeld
56. Munster
57. Waikato
58= Grenoble
59= East Anglia
60= Bonn
61= Pavia
62. ENS Lyon
63. Eastern Finland
64. Padua
65. Brandeis
66. Aberystwyth
67. Tulane
68. Tubingen
69= Warsaw
70= Sun Yat Sen
71= Keele
72. Tromso
73. Brunel
74. Liege
75. Queen Mary
76= Vermont
77= Trento
78. Turin
79. Jyvaskyla
80. Carleton
81. Kansas
82. California Riverside
83. SUNY Stony Brook
84= George Washington
85= Pisa
86. Tasmania
87. George Mason
88. Boston College
89= Oregon State
90= Texas Dallas
91. Trinity College Dublin
92= University Science and Technology China
92= Murdoch
92= Cincinnati
92= Galway
92= Yeshiva
97= Tufts
97= Minho
99. Miami
100. Lehigh
101. Technical University Denmark
102= Rice
102= Iceland
104. California Santa Cruz
104= Milan
106. Montpellier 2
107. Frankfurt
108= Bergen
109= Strasbourg
110. Victoria
111. Rochester
112. Cork
113. Dartmouth
114. Oklahoma
115. Birkbeck
116. Porto
117. Canterbury
118= Newcastle UK
118= Notre Dame
118= University College Dublin
121. Binghamton
122. Aveiro
123= Kiel
123= Sussex
125. Temple
126. Aachen
127= Fribourg
127= Queens Belfast
127= Colorado Boulder
130. Iowa State
131. Tokyo Medical Dental
132= Autonomous Madrid
132= Swedish Agriculture
132= Tampere
135= Deakin
135= Barcelona
137= Stockholm
137= Stirling
139. Laval
140. Durham
141. Bangor
142= Aberdeen
142= Vanderbilt
144. Istanbul Technical
145. Nanjing
146= Exeter
146= Emory
146= Leicester
149. Southampton
150. Paris Mines
151. Vrije Universiteit Brussel
152. Polytechnic Milan
153. Kwazulu-Natal
154= Linkoping
154= Bilkent
154= Heriot-Watt
154= Bologna
158= Wyoming
158= Utah
158= Massey
161= Glasgow
161= Bern
163. ENS Paris
164. Zurich
165= Case Western Reserve
166= California Irvine
167= Tartu
168= Wellington
169= Salento
170. South Carolina
171. York UK
172. Aalto
173= Curie
173= Macquarie
173= Boston
176= Delaware
177= Copenhagen
178= Hannover
179. Norwegian University of Science and Technology
180. Antwerp
181= Dalhousie
181= Rensselaer Polytechnic Institute
183= Konstanz
184= Paris Sud
185. Technical University Munich
186. Lancaster
187. Waseda
188. Otago
189. Arizona State
190= SUNY Albany
190= Gottingen
190= Autonomous Barcelona
193= Cape Town
194= St Andrews
195= Colorado State
195= Bath
195= Wollongong
198= Tsukuba
198= Simon Fraser
198= Liverpool
198= Umea
202= Geneva
202= Newcastle Australia
204= Universite Libre Bruxelles
204= Virginia
206= Lausanne
206= Louvain
208= Connecticut
208= Georgetown
208= York Canada
211. EPF Lausanne
212= North Carolina State
212= Bristol
212= Aalborg
212= Free University Amsterdam
216= Indiana
216= Kentucky
218. Maryland College Park
219. Karlsruhe Institute of Technology
220= University Technology Sydney
220= Iowa
222. Charles
223. Flinders
224. Cardiff
225= Auckland
225= Oslo
227. Pittsburgh
228= Heidelberg
228= Guelph
228= Washington State
228= Sheffield
232= Chinese University Hong Kong
232= Strathclyde
234= Ottawa
234= Gothenburg
234= Washington St Louis
237. Medical South Carolina
238= McMaster
238= Brown
238= National Sun Yat Sen
238= Reading
242. Ecole Polytechnique
243. Helsinki
244= Quebec
244= National Central Taiwan
246. Bogazici
247= Southern California
247= Arizona
249. Keio
250= Houston
250= Stellenbosch
250= Kings College London
250= Darmstadt
250= Western Australia
255= Pohang
255= IIT Bombay
257= Wageningen
257= Manitoba
259= South Australia
259= Nagoya
261= Leeds
261= UC Santa Barbara
261= Nijmegen
261= Jagiellonian
265= New York University
265= Calgary
265= Ohio State
268. Aarhus
269= Witwatersrand
269= North Carolina Chapel Hill
269= Michigan State
269= Fudan
273= Bochum
273= Munich
275= SUNY Buffalo
275= Adelaide
275= Sapienza
278= Utrecht
278= Edinburgh
278= Queensland University of Technology
281= Lund
281= Ghent
283. Erasmus
284= Massachusetts
284= Illinois Chicago
284= Nottingham
287= Eindhoven
287= Amsterdam
289. UC San Diego
290. Birmingham
291= Western Ontario
291= Twente
293= Washington Seattle
293= Duke
295= Penn State
295= NUI Maynooth
297= Maastricht
297= Groningen
297= Columbia
297= Leiden
297= Georgia
302. UC Davis
303= Southern Florida
303= Chalmers University of Technology
305= Minnesota
305= Essex
305= Manchester
305= Georgia Institute of Technology
309= Rutgers
309= Texas at Austin
311= Northwestern
311= Warwick
311= Vienna
311= MIT
315. Johns Hopkins
316= Wisconsin Madison
316= Carnegie Mellon
318. Alberta
319. Pennsylvania
320= Hong Kong University of Science and Technology
320= Kyushu
322= Chicago
322= Vienna University of Technology
324= Queensland
324= Montreal
326. British Columbia
327= Yale
327= Imperial College London
327= UCLA
327= Hebrew University of Jerusalem
327= Karolinska
332= Melbourne
332= Humboldt
332= National Tsinghua Taiwan
332= Cambridge
332= Harvard
332= Stanford
338= Monash
338= Princeton
338= Caltech
338= Michigan
338= UC Berkeley
338= Cornell
344= Waterloo
344= KTH Sweden
344= Missouri
347. University College London
348= Oxford
348= Middle East Technical University
350. Yonsei
351= Toronto
351= Illinois Urbana-Champaign
351= Peking
351= Leuven
355= Zhejiang
355= Hokkaido
355= Hong Kong Polytechnic University
355= McGill
359= ETH Zurich
359= Tokyo Institute of Technology
361= Berlin
361= Uppsala
363= Korea
363= Sydney
365= Florida
365= New South Wales
367= Australian National
367= Tohoku
367= Purdue
367= Technion
371= Surrey
371= IIT Kharagpur
373= KAIST
373= Texas A and M
375. Virginia Polytechnic Institute
376= Osaka
376= Nanyang Technological University
376= Shanghai Jiao Tong
379. LSE
380. Sungkyunkwan
381. Sharif University of Technology
382. Tokyo
383= National Taiwan University of Science and Technology
383= National Autonomous University of Mexico
385= Kyoto
385= National University of Singapore
387. Loughborough
388. National Cheng Kung
389. Tel Aviv
390= Hong Kong
390= Tsinghua
392. Chinese University of Hong Kong
393. National Taiwan
394. National Chiao Tung
395. Tilburg
396. Delft
397. Seoul National
398. State University of Campinas
399. Sao Paulo
400. Moscow State

What about a Research Influence Ranking?

Keeping up with the current surge of global university rankings is becoming next to impossible. Still, there are a few niches that remain unoccupied. One might be a ranking of universities according to their ability to spread new knowledge around the world. So it might be a good idea to have a Research Influence Ranking based on the citations indicator in the Times Higher Education -- Thomson Reuters World University Rankings.

Thomson Reuters are the world's leading collectors and analysts of citations data, so such an index ought to provide an invaluable data source for governments, corporations and other stakeholders deciding where to place research funding. Data for 400 universities can be found on the THE iPhone/iPad app.

The top place in the world would be jointly held by Rice University in Texas and Moscow State Engineering Physics Institute, closely followed by MIT and the University of California Santa Cruz.

Then there are the first places in various regions and countries. (MEPhI would be first in Europe and Rice in the US and North America.)


Canada
University of Toronto

Latin America
University of the Andes, Colombia

United Kingdom (and Western Europe)
Royal Holloway London

Africa
University of Cape Town 

Middle East
Koc University, Turkey

Asia (and Japan)
Tokyo Metropolitan University

ASEAN
King Mongkut's University of Technology, Thailand

Australia and the Pacific
University of Melbourne

On second thoughts, perhaps not such a good idea.

The Efficiency Rankings

Times Higher Education has a story about a study by Dirk van Damme, head of the Centre for Educational Research and Innovation at the OECD. It will be presented at the Global University Summit, held in Whitehall, London, from 28 to 30 May.

The Summit "brings an invitation-only audience of leaders from the world’s foremost universities, senior policy-makers and international business executives to London in 2013." It is a "prestigious event" held in a "spectacular setting" and is sponsored by the University of Warwick, Times Higher Education, Thomson Reuters and UK Universities International Unit. Speakers include Vince Cable, Boris Johnson, the Russian ambassador and heads of various universities from around the world.

What Professor Van Damme has done is to treat the THE World University Rankings Research Indicator scores as an input and the Research Influence (Citations) scores as an output. The output scores are divided by the input scores and the result is a measure of the efficiency with which the inputs are turned into citations, which, as we all know, is the main function of the modern university.

According to THE:

"The input indicator takes scaled and normalised measures of research income and volume into account, and also considers reputation, while the output indicator looks at citations to institutional papers in Thomson Reuters’ Web of Science database, normalised for subject differences.
Professor van Damme said that the results - which show that university systems outside the Anglo-American elite are able to realise and increase outputs with much lower levels of input - did not surprise him.
“For example, Switzerland really invests in the right types of research. It has a few universities in which it concentrates resources, and they do very well,” he said.
Previous studies have found the UK to have the most efficient research system on measures of citation per researcher and per unit of spending.
But Professor van Damme explained that under his approach, productivity - output per staff member - was included as an input.
“With efficiency I mean the total research capacity of an institution, including its productivity, divided by its impact. The UK is not doing badly at all, but other countries are doing better, such as Ireland, which has a very low research score but a good citations score,” he said.
Given the severity of the country’s economic crisis, Ireland’s success was particularly impressive, he said.
“I think it is really conscious of the effort it has to make to maintain its position and is doing so.”
Low efficiency scores for China and South Korea reflected the countries’ problems in translating their huge investment into outputs, he added."

One hesitates to be negative about a paper presented at a prestigious event in a spectacular setting to an invitation-only audience, but this is frankly rather silly.

I would accept that income can be regarded as an input, but surely not reputation and surely not volume of publications. Also, unless van Damme's methodology has undisclosed refinements, he is treating research scores as having the same value regardless of whether they are composed mainly of scores for reputation, for number of publications or for research income.

Then there is the time period concerned. Research income is income for one year; publications are drawn from a five-year period. These are then compared with citations over a six-year period. So the paper is asking how research income for 2010 produced citations in the years 2006-2011 of papers published in the years 2006-2010. A university is certainly being remarkably efficient if its 2010 income is producing citations in 2006, 2007, 2008 and 2009.

Turning to the citations side of the equation, it should be recalled that the THE citations indicator includes an adjustment by which the citation impact score for a university is divided by the square root of the citation impact score for the country as a whole. In other words, a university located in a country where papers are not cited very much gets a big boost, and the lower the national citation impact score, the bigger the boost for the university. This is why Hong Kong universities suffered reduced scores when Thomson Reuters took them out of China when counting citations and put them in their own separate category.

So it is not surprising that universities from outside the Anglo-Saxon elite do well for citations and thus appear to be very efficient. Thomson Reuters' methodology gives such universities a very substantial weighting just for being located in countries that are less productive in terms of citations.

None of this is new. In 2010 Van Damme did something similar at a seminar in London.

Van Damme is just analysing the top 200 universities in the THE rankings. It would surely be more interesting to analyse the top 400 whose scores are obtainable from an iPad/iPhone app.

So here are the top ten universities in the world according to the efficiency with which they turn income, reputation and publications into citations. The procedure is to divide the citations score from the 2012 THE rankings by the research indicator score.

1. Tokyo Metropolitan University
2. Moscow State Engineering Physics Institute
3. Florida Institute of Technology
4. Southern Methodist University
5. University of Hertfordshire
6. University of Portsmouth
7. King Mongkut's University of Technology
8. Vigo University
9. Creighton University
10. Fribourg University

No doubt the good and the great of the academic world assembled in Whitehall will make a trip to Portsmouth or even to Vigo or Creighton if they can find them on the map.

And now for the hall of shame. Here are the bottom ten of the THE top 400, ranked according to efficiency as measured by citations indicator scores divided by research scores. The heads of these failing institutions will no doubt be packing their bags and looking for jobs as junior administrative assistants at technical colleges in Siberia or the upper Amazon.


391. Tsinghua University
392. Chinese University of Hong Kong
393. National Taiwan University
394. National Chiao Tung University
395. Tilburg University
396. Delft University of Technology
397. Seoul National University
398. State University of Campinas
399. Sao Paulo University
400. Lomonosov Moscow State University

In a little while I hope to publish the full 400 after I have finished being sarcastic about the QS subject rankings.

QS Rankings by Subject

QS have produced their annual subject rankings. At the top there are no real surprises and, while there is certainly room for argument, I do not think that anyone will be shocked by the top ten or twenty in each subject.


The university with the most first places is Harvard, with ten:

Medicine
Biology
Psychology
Pharmacy and Pharmacology
Earth and Marine Sciences
Politics and International Studies
Law
Economics and Econometrics
Accounting and Finance
Education

MIT has seven:
Computer Science
Chemical Engineering
Electrical Engineering
Mechanical Engineering
Physics and Astronomy
Chemistry
Materials Science

Then there is Berkeley with exactly the four you would expect:
Environmental Science
Statistics and Operational Research
Sociology
Communication and Media Studies

Oxford has three:

Philosophy
Modern Languages
Geography

Cambridge another three:
History
Linguistics
Mathematics


Imperial College London is top for Civil Engineering and University of California, Davis for Agriculture and Forestry.


These rankings are based on the academic opinion survey, the employer survey, citations per paper and the h-index, a measure of both output and influence that eliminates outliers, in proportions that vary for each subject. They are very research-focused, which is unfortunate since there seems to be a consensus emerging at conferences and seminars that the THE-TR rankings are for policy makers, the Shanghai ARWU for researchers and the QS rankings for undergraduate students.

Outside the top fifty or top one hundred there are some oddities resulting from the small number of survey responses. I will leave it to specialists to find them.

Serious Wonkiness


Alex Usher at HESA had a post on the recent THE Under-50 Rankings. Here is an excerpt about the Reputation and Citations indicators.



"But there is some serious wonkiness in the statistics behind this year’s rankings which bear some scrutiny. Oddly enough, they don’t come from the reputational survey, which is the most obvious source of data wonkiness. Twenty-two percent of institutional scores in this ranking come from the reputational ranking; and yet in the THE’s reputation rankings (which uses the same data) not a single one of the universities listed here had a reputational score high enough that the THE felt comfortable releasing the data. To put this another way: the THE seemingly does not believe that the differences in institutional scores among the Under-50 crowd are actually meaningful. Hmmm.

No, the real weirdness in this year’s rankings comes in citations, the one category which should be invulnerable to institutional gaming. These scores are based on field-normalized, 5-year citation averages; the resulting institutional scores are then themselves standardized (technically, they are what are known as z-scores). By design, they just shouldn’t move that much in a single year. So what to make of the fact that the University of Warwick’s citation score jumped 31% in a single year, Nanyang Polytechnic’s by 58%, or UT Dallas’ by a frankly insane 93%? For that last one to be true, Dallas would have needed to have had 5 times as many citations in 2011 as it did in 2005. I haven’t checked or anything, but unless the whole faculty is on stims, that probably didn’t happen. So there’s something funny going on here."

Here is my comment on his post.


Your comment at University Ranking Watch and your post at your blog raise a number of interesting issues about the citations indicator in the THE-TR World University Rankings and the various spin-offs.



You point out that the scores for the citations indicator rose at an unrealistic rate between 2011 and 2012 for some of the new universities in the 100 Under 50 Rankings and ask how this could possibly reflect an equivalent rise in the number of citations.



Part of the explanation is that the scores for all indicators and nearly all universities in the WUR, and not just for the citations indicator and a few institutions, rose between 2011 and 2012. The mean overall score of the top 402 universities in 2011 was 44.3 and for the top 400 universities in 2012 it was 49.5.



The mean scores for every single indicator or group of indicators in the top 400 (402 in 2011) have also risen although not all at the same rate. Teaching rose from 37.9 to 41.7, International Outlook from 51.3 to 52.4, Industry Income from 47.1 to 50.7, Research from 36.2 to 40.8 and Citations from 57.2 to 65.2.



Notice that the scores for citations are higher than for the other indicators in 2011 and that the gap further increases in 2012.



This means that the citations indicator had a disproportionate effect on the rankings in 2011, one that became more disproportionate in 2012.



It should be remembered that the scores for the indicators are z-scores: they measure not the absolute number of citations but the distance, in standard deviations, from the mean number of normalised citations of all the universities analysed. That mean is not the mean of the 200 universities listed in the printed and online rankings, nor of the 400 included in the iPad/iPhone app, but of the total number of universities that have asked to be ranked. That number seems to have increased by a few hundred between 2011 and 2012 and will no doubt go on increasing over the next few years, though probably at a steadily decreasing rate.



Most of the newcomers to the world rankings have overall scores and indicator scores that are lower than those of the universities in the top 200 or even the top 400. That means that the mean of the unprocessed scores on which the z scores are based decreased between 2011 and 2012 so that the overall and indicator scores of the elite universities increased regardless of what happened to the underlying raw data.



However, they did not increase at the same rate. The scores for the citations indicator, as noted, were much higher in 2011 and in 2012 than those for the other indicators. It is likely that this is because the difference between the top 200 or 400 universities and those just below the elite is greater for citations than it is for indicators like income, publications and internationalisation. After all, most people would probably accept that internationally recognised research is a major factor in distinguishing world-class universities from those that are merely good.
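The mechanics can be sketched with a toy z-score calculation (all numbers invented): adding low-scoring newcomers to the ranked pool pulls the mean down, so an elite university's standardised score rises even when its raw value is unchanged.

```python
import statistics

# Toy z-score illustration (all values invented). A z-score is a value's
# distance from the mean of all ranked universities, in standard deviations.
raw_2011 = [1.2, 0.8, 2.5, 0.5, 3.0, 0.9]   # normalised citation impacts
elite = 3.0                                  # one elite university's raw value

z_2011 = (elite - statistics.mean(raw_2011)) / statistics.stdev(raw_2011)

# In 2012 several low-scoring newcomers ask to be ranked and join the pool.
raw_2012 = raw_2011 + [0.3, 0.2, 0.4]
z_2012 = (elite - statistics.mean(raw_2012)) / statistics.stdev(raw_2012)

# The elite university's raw impact is unchanged, but its z-score rises
# because the mean of the whole pool has fallen.
```

This is only a sketch of the standardisation effect described above, not a reconstruction of Thomson Reuters' actual calculation.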



Another point about the citations indicator is that after the score for field- and year-normalised citations for each university is calculated, it is adjusted by a “regional modification”: the score, after normalisation for year and field, is divided by the square root of the average for the country in which the university is located. So if University A has a score of 3.0 citations per paper and the average for its country is 3.0, the score will be divided by 1.73, the square root of 3, giving 1.73. If University B has the same score of 3.0 citations per paper but its country's average is just 1.0 citation per paper, the final score will be 3.0 divided by the square root of 1, which is 1, giving 3.0.



University B therefore gets a much higher final score for citations even though its number of citations per paper is exactly the same as University A's. The reason for the apparently higher score is simply that the two universities are being compared to all the other universities in their respective countries. The lower the score for a country's universities in general, the higher the regional modification for a specific university. The citations indicator is not just measuring the number of citations produced by universities but also, in effect, the difference between the bulk of a country's universities and the elite that make it into the top 200 or 400.
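The worked example above can be written out directly (the numbers come from the text; the function name is mine, not Thomson Reuters'):

```python
import math

def regional_modification(university_impact: float, country_mean: float) -> float:
    """Divide a university's normalised citation impact by the square root
    of its country's average impact, as described in the text."""
    return university_impact / math.sqrt(country_mean)

# University A: impact 3.0, national average 3.0 -> 3.0 / 1.73... ~= 1.73
score_a = regional_modification(3.0, 3.0)

# University B: the same impact 3.0, but a national average of only 1.0 -> 3.0
score_b = regional_modification(3.0, 1.0)
```

The same raw performance yields a final score nearly twice as high simply because the surrounding national average is lower.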



It is possible then that a university might be helped into the top 200 or 400 by having a high score for citations that resulted from being better than other universities in a particular country that were performing badly.



It is also possible that if a country’s research performance took a dive, perhaps because of budget cuts, with the overall number of citations per paper declining, this would lead to an improvement in the score for citations of a university that managed to remain above the national average.



It is quite likely that -- assuming the methodology remains unchanged -- if countries like Italy, Portugal or Greece experience a fall in research output as a result of economic crises, their top universities will get a boost for citations because they are benchmarked against a lower national average.



Looking at the specific places mentioned, it should be noted once again that Thomson Reuters do not simply count the number of citations per paper but compare them with the mean citations for papers in particular fields published in particular years and cited in particular years.



Thus a paper in applied mathematics published in a journal in 2007 and cited in 2007, 2008, 2009, 2010, 2011 and 2012 will be compared to all papers in applied maths published in 2007 and cited in those years.



If it is usual for a paper in a specific field to receive few citations in the year of publication or the year after, then even a moderate number of citations can have a disproportionate effect on the citations score.
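A sketch of that normalisation logic (the averages here are invented for illustration; the real benchmarks are Thomson Reuters' internal field-and-year averages):

```python
# Field-and-year normalisation, sketched: a paper's citations are divided
# by the average for papers in the same field published in the same year.
# These averages are invented; the real ones are internal to Thomson Reuters.
field_year_average = {
    ("applied mathematics", 2007): 4.0,   # slow-citing field, low average
    ("astrophysics", 2009): 13.0,         # fast-citing field, high average
}

def normalised_impact(citations: int, field: str, year: int) -> float:
    return citations / field_year_average[(field, year)]

# A modest citation count in a slow-citing field can outscore a much
# larger count in a fast-citing one.
maths_score = normalised_impact(12, "applied mathematics", 2007)
astro_score = normalised_impact(26, "astrophysics", 2009)
```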



It is very likely that Warwick’s increased score for citations in 2012 had a lot to do with participation in a number of large scale astrophysical projects that involved many institutions and produced a larger than average number of citations in the years after publication. In June 2009, for example, the Astrophysical Journal Supplement Series published ‘The seventh data release of the Sloan Digital Sky Survey’ with contributions from 102 institutions, including Warwick. In 2009 it received 45 citations. The average for the journal was 13. The average for the field is known to Thomson Reuters but it is unlikely that anyone else has the technical capability to work it out. In 2010 the paper was cited 262 times: the average for the journal was 22. In 2011 it was cited 392 times: the average for the journal was 19 times.



This and similar publications have contributed to an improved performance for Warwick, one that was enhanced by the relatively modest number of total publications by which the normalised citations were divided.



With regard to Nanyang Technological University, it seems that a significant role was played by a few highly cited publications in Chemical Reviews in 2009 and in Nature in 2009 and 2010.



As for the University of Texas at Dallas, my suspicion was that publications by faculty at the University of Texas Southwestern Medical Center had been included, a claim that had been made about the QS rankings a few years ago. Thomson Reuters have, however, denied this and say they have observed unusual behaviour by UT Dallas which they interpret as an improvement in the way that affiliations are recorded. I am not sure exactly what this means but assume that the improvement in the citations score is an artefact of changes in the way data is recorded rather than any change in the number or quality of citations.



There will almost certainly be more of this in the 2013 and 2014 rankings.

A bad idea but not really new

University teachers everywhere are subject to this sort of pressure but it is unusual for it to be stated so explicitly.




"A university put forward plans to assess academics’ performance according to the number of students receiving at least a 2:1 for their modules, Times Higher Education can reveal.
According to draft guidance notes issued by the University of Surrey - and seen by THE - academics were to be required to demonstrate a “personal contribution towards achieving excellence in assessment and feedback” during their annual appraisals.
Staff were to be judged on the “percentage of students receiving a mark of 60 per cent or above for each module taught”, according to the guidance form, issued in June 2012, which was prefaced by a foreword from Sir Christopher Snowden, Surrey’s vice-chancellor, who will be president of Universities UK from 1 August.
“The intention of this target is not to inflate grades unjustifiably but to ensure that levels of good degrees sit comfortably within subject benchmarks and against comparator institutions,” the document explained.
After “extensive negotiations” with trade unions, Surrey dropped the proposed “average target mark”, with replacement guidance instead recommending that staff show there to be “a normal distribution of marks” among students."

Competition and controversy in global rankings

Higher education is becoming more competitive by the day. Universities are scrambling for scarce research funds and public support. They are trying to recruit increasingly suspicious and cynical students. The spectre of online education is haunting all but the most confident institutions.


Rankings are also increasingly competitive. Universities need validation that will attract students and big-name researchers and justify appeals for public largesse. Students need guidance about where to take their loans and scholarships. Government agencies have to figure out where public funds are going.

It is not just that the overall rankings are competing with one another, but also that a range of subsidiary products has been let loose. Times Higher Education (THE) and QS have released Young University Rankings within days of each other. Both have published Asian rankings. THE has published reputation rankings and QS has published Latin American rankings. QS's subject rankings have been enormously popular because they provide something for almost everybody.

There are few countries without a university somewhere that can claim to be in the top 200 for something, even though these rankings sometimes manage to find quality in places lacking even departments in the relevant fields.

QS’s academic survey

Increasing competition can also be seen in the growing vehemence of the criticism directed against and between rankings, although there is one ranking organisation that so far seems exempt from criticism. The QS academic survey has recently come under fire from well-known academics, although it has been scrutinised by University Ranking Watch and other blogs since 2006.

It has been reported by Inside Higher Ed that QS had been soliciting opinions for its academic survey from a US money-for-surveys company that also sought consumer opinion about frozen foods and toilet paper.

The same news story revealed that University College Cork had been trying to find outside faculty to nominate the college in this year's academic survey.

QS has been strongly criticised by Professor Simon Marginson of the University of Melbourne, who assigns it to a unique category among national and international ranking systems, saying, “I do think social science-wise it’s so weak that you can’t take the results seriously”.

This in turn was followed by a heated exchange between Ben Sowter of QS and Marginson.

Although it is hard to disagree with Marginson’s characterisation of the QS rankings, it is strange he should consider their shortcomings to be unique.

U-Multirank and the Lords

Another sign of intensifying competition is the response to proposals for U-Multirank. This is basically a proposal, sponsored by the European Union, not for a league table in which an overall winner is declared but for a series of measures that would assess a much broader range of features, including student satisfaction and regional involvement, than rankings have offered so far.

There are obviously problems with this, especially with the reliance on data generated by universities themselves, but the disapproval of the British educational establishment has been surprising and perhaps just a little self-serving and hypocritical.

In 2011, the European Union Committee of the House of Lords took evidence from a variety of groups about various aspects of European higher education, including U-Multirank. Among the witnesses was the Russell Group of elite research intensive universities, formed after many polytechnics were upgraded to universities in 1992.

The idea was to make sure that research funding remained in the hands of those who deserved it. The group, named after the four-star Russell Hotel in a “prestigious location in London” where it first met, is not an inexpensive club: recently the Universities of Exeter, Durham and York and Queen Mary College paid £500,000 apiece to join.

The Lords also took evidence from the British Council, the Higher Education Funding Council for England, the UK and Scottish governments, the National Union of Students and Times Higher Education.

The committee’s report was generally negative about U-Multirank, stating that the Russell Group had said "ranking universities is fraught with difficulties and we have many concerns about the accuracy of any ranking”.

“It is very difficult to capture fully in numerical terms the performance of universities and their contribution to knowledge, to the world economy and to society,” the report said. “Making meaningful comparisons of universities both within, and across, national borders is a tough and complex challenge, not least because of issues relating to the robustness and comparability of data.”

Other witnesses claimed there was a lack of clarity about the proposal’s ultimate objectives, that the ranking market was too crowded, that it would confuse applicants and be “incapable of responding to rapidly changing circumstances in institutional profiles”, that it would “not allow different strengths across diverse institutions to be recognised and utilised” and that money was better spent on other things.

The committee also observed that the UK Government’s Department for Business, Innovation and Skills was “not convinced that it [U-Multirank] would add value if it simply resulted in an additional European ranking system alongside the existing international ranking systems”, while the minister struck a less positive tone, telling the committee that U-Multirank could be viewed as “an attempt by the EU Commission to fix a set of rankings in which [European universities] do better than [they] appear to do in the conventional rankings”.

Just why should the British government be so bothered about a ranking tool that might show European (presumably they mean continental here) universities doing better than in existing rankings?

Finally, the committee reported that “(w)e were interested to note that THES (sic) have recently revised their global rankings in 2010 in order to apply a different methodology and include a wider range of performance indicators (up from six to 13)”.

The committee continued: “They told us that their approach seeks to achieve more objectivity by capturing the full range of a global university's activities – research, teaching, knowledge transfer and internationalisation – and allows users to rank institutions (including 178 in Europe) against five separate criteria: teaching (the learning environment rather than quality); international outlook (staff, students and research); industry income (innovation); research (volume, income and reputation); and citations (research influence).”

It is noticeable the Lords showed not the slightest concern, even if they were aware of it, about the THE rankings’ apparent discovery in 2010 that the world’s fourth most influential university for research was Alexandria University.

The complaints about U-Multirank seem insubstantial, if not actually incorrect. The committee’s report says the rankings field is overcrowded. Not really: there are only two international rankings that make even the slightest attempt to assess anything to do with teaching. The THE World University Rankings included only 178 European universities in 2011 so there is definitely a niche for a ranking that aims at including up to 500 European universities and includes a broader range of criteria.

All of the other complaints about U-Multirank, especially reliance on data collected from institutions, would apply to the THE and QS rankings, although perhaps in some cases to a somewhat lesser extent. The suggestion that U-Multirank is wasting money is ridiculous; €2 million would not even pay for four subscriptions to the Russell Group.

Debate

In the ensuing debate in the Lords there was predictable scepticism about the U-Multirank proposal, although Baroness Young of Hornsey was quite uncritical about the THE rankings, declaring that “(w)e noted, however, that existing rankings, which depend on multiple indicators such as the Times Higher Education world university rankings, can make a valuable contribution to assessing the relative merits of universities around the world”.

In February, the League of European Research Universities, or LERU, which includes Oxford, Cambridge and Edinburgh, announced it would have nothing to do with the U-Multirank project.

Its secretary general said "(w)e consider U-Multirank, at best an unjustifiable use of taxpayers' money and at worst a serious threat to a healthy higher education system". He went on to talk about "the lack of reliable, solid and valid data for the chosen indicators in U-Multirank”, about the comparability between countries, about the burden put upon universities to collect data and about “the lack of 'reality-checks' in the process thus far".

In May, the issue resurfaced when the UK Higher Education International Unit, which is funded by British universities and various government agencies, issued a policy statement that repeated the concerns of the Lords and LERU.

Since none of the problems with U-Multirank are in any way unique, it is difficult to avoid the conclusion that higher education in the UK is turning into a cartel and is extremely sensitive to anything that might undermine its market dominance.

And what about THE?

What is remarkable about the controversies over QS and U-Multirank is that Times Higher Education and Thomson Reuters, its data provider, have been given a free pass by the British and international higher education establishments.

Imagine what would happen if QS had informed the world that the top position in its flagship indicator, the academic reputation survey, was jointly held by Rice University and the Moscow State Engineering Physics Institute (MEPhI)! And that QS argued this was because these institutions were highly focused, that they had achieved their positions because they had outstanding reputations in their areas of expertise and that QS saw no reason to apologise for uncovering pockets of excellence.

Yet THE has put Rice and MEPhI at the top of its flagship indicator, field- and year-normalised citations, given very high scores to Tokyo Metropolitan University and Royal Holloway, University of London, among others, and this has passed unremarked by the experts and authorities of university ranking.

For example, a recent comprehensive survey of international rankings by Andrejs Rauhvargers for the European University Association describes the results of the THE reputation survey as “arguably strange” and “surprising”, but it says nothing about the results of the citation indicator, which ought to be much more surprising.

Let us just look at how MEPhI got to be joint top university in the world for research influence, despite its lack of research in anything but physics and related fields. It did so because one of its academics was a contributor to two multi-cited reviews of particle physics. This is a flagrant case of privileging the citation practices of one discipline, something Thomson Reuters and THE supposedly consider unacceptable. The strange thing is that these anomalies could easily have been avoided by a few simple procedures which, in some cases, have already been used by other ranking or rating organisations.

They could have used fractionalised counting, for example, the default option in the Leiden Ranking, so that MEPhI would get 1/119th credit for its 1/119th contribution to the Review of Particle Physics for 2010. They could have excluded narrowly specialised institutions. They could have normalised for five or six subject areas, which is what Leiden University and Scimago do. They could have used several indicators of research influence drawn from the Leiden menu.
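The difference that fractionalised counting makes can be sketched in a few lines. This is a minimal illustration with hypothetical citation figures (only the 119-contributor scale of the particle physics review is taken from the example above), not a reconstruction of Thomson Reuters’ actual normalisation:

```python
# Sketch of full vs fractional counting of citations for one institution.
# Hypothetical data: one multi-contributor review with 119 contributing
# institutions that attracts 3,000 citations, plus one ordinary paper.

papers = [
    {"citations": 3000, "institutions": 119},  # multi-cited review
    {"citations": 40, "institutions": 2},      # ordinary co-authored paper
]

def full_count(papers):
    # Full counting: every contributing institution gets complete
    # credit for all citations to the paper.
    return sum(p["citations"] for p in papers)

def fractional_count(papers):
    # Fractional counting: credit is divided by the number of
    # contributing institutions, so a 1/119th contribution earns
    # 1/119th of the citations.
    return sum(p["citations"] / p["institutions"] for p in papers)

print(full_count(papers))        # 3040
print(fractional_count(papers))  # about 45.2
```

Under full counting the review dominates the institution’s score; under fractional counting its contribution shrinks to a few dozen citations, which is why the choice of counting method matters so much for small, specialised institutions.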

There are other things they could do that would not have had much effect, if any, on last year’s rankings, but that might pre-empt problems this year and later on. One is to stop counting self-citations, a step already taken by QS. This would have prevented Alexandria University getting into the world’s top 200 in 2010 and it might prevent a similar problem next year.

Another sensible precaution would be to count only one affiliation per author. This would prevent universities benefitting from signing up part-time faculty in strategic fields. Something else they should think about is the regional adjustment for the citations indicator, which has the effect of giving universities a boost just for being in a low-achieving country.

To suggest that two universities in different countries with the same score for citations are equally excellent – when, in fact, one of them has merely benefitted from being in a country with a poor research profile – is very misleading. It is in effect conceding, as John Stuart Mill said of a mediocre contemporary, that its eminence is “due to the flatness of the surrounding landscape”.

Finally, if THE and Thomson Reuters are not going to change anything else, at the very least they could call their indicator a measure of research quality instead of research influence. Why have THE and Thomson Reuters not taken such obvious steps to avoid such implausible results?

Probably it is because of a reluctance to deviate from their InCites system, which evaluates individual researchers.

THE and Thomson Reuters may be lucky this year. There will be only two particle physics reviews to count instead of three so it is likely that some of the places with inflated citation scores will sink down a little bit.

But in 2014 and succeeding years, unless there is a change in methodology, the citations indicator could look very interesting and very embarrassing. There will be another edition of the Review of Particle Physics, with its massive citations for its 100-plus contributors, and there will be several massively cited multi-authored papers on dark matter and the Higgs boson to skew the citations indicator.

It seems likely that the arguments about global university rankings will continue and that they will get more and more heated.