World Reputation Rankings 2012 methodology
Thousands worldwide have responded to our Academic Reputation Survey, whose rigorous methodology addresses common concerns and shows what scholars really think
Each year, tens of thousands of academics from all over the world receive an important email.
It is an invitation from Times Higher Education and Thomson Reuters to take part in the annual Academic Reputation Survey, carried out by Ipsos.
"You have been statistically selected to complete this survey and will represent thousands of your peers," it says. "The scholarly community, university administrators, and students worldwide depend on the survey results - they provide the most reliable access to the voice of scholars like you."
The survey, available in nine languages, uses United Nations data to ensure that it is properly distributed to reflect the demographics of world scholarship.
Those who give their time and expertise to complete the survey do so with no more incentive than the opportunity to see a summary of the results, plus a few free electronic copies of Times Higher Education magazine: there is no prize draw, no gimmick, and the survey does not allow volunteers or nominations. It simply gathers academics' opinions on the quality of research and teaching in institutions within their disciplines and with which they are familiar.
Despite these strict rules to ensure rigour, the engagement from the global scholarly community has been extraordinary. In the first exercise, carried out in March-April 2010, 13,388 responses were received.
In the second round, conducted in April-May 2011, none of the original respondents was asked to take part - yet 17,554 of those contacted replied, a 31 per cent increase on the number of responses received in the first year.
In fewer than four months spread over two years, just under 31,000 academics from 149 countries engaged with the exercise.
Respondents have overwhelmingly been experienced, senior academics. Almost three-quarters identified themselves as academic staff, with the majority working full-time. The average respondent has worked at a higher education institution for 16 years.
There is a balanced spread across disciplines: about 20 per cent of respondents hail from the physical sciences, a figure matched by engineering and technology, with 19 per cent from the social sciences, 17 per cent from clinical subjects, 16 per cent from the life sciences and 7 per cent from the arts and humanities.
In terms of geographical spread, some 44 per cent of respondents in 2011 reside in the Americas, 28 per cent in Europe, 25 per cent in Asia Pacific and the Middle East, and 4 per cent in Africa (these numbers have been rounded).
The use of reputation surveys in university rankings has long been controversial. Famously, one of the most powerful criticisms was made by writer Malcolm Gladwell in a February 2011 article in The New Yorker, "The Order of Things: what college rankings really tell us".
In a popular domestic US ranking, college presidents were asked to grade every school in their category on a scale of one to five, with some asked to rate up to 261 institutions.
Gladwell wrote that it is "far from clear how one individual could have insight into that many institutions" and argued that such exercises revealed nothing but "prejudices".
But Michael Bastedo, an educational sociologist at the University of Michigan who has studied reputational indicators in university rankings, was quoted by Gladwell as saying that such surveys can work - for example, when academics in a particular discipline are asked to rate others in their field.
Such respondents "read one another's work, attend the same conferences, and hire one another's graduate students, so they have real knowledge on which to base an opinion", Bastedo said.
This is the approach taken for the Times Higher Education World Reputation Rankings, and it is one that Bastedo has recommended to other rankers.
In the Academic Reputation Survey used by THE, scholars are questioned at the level of their specific subject discipline. They are not asked to create a ranking or requested to list a large range of institutions, but to name just a handful of those that they believe to be the best, based on their own experience (no more than 15 universities from a list of more than 6,000).
To help elicit more meaningful responses, respondents are asked "action-based" questions, such as: "Which university would you send your most talented graduates to for the best postgraduate supervision?"
The survey data were used alongside 11 objective indicators to help create the 2011-12 World University Rankings, published last October. They now stand alone, for transparency's sake.
Calculating the scores
The reputation table ranks institutions according to an overall measure of their esteem that combines data on their reputation for research and teaching.
The two scores are combined at a ratio of 2:1, giving more weight to research because feedback from our expert advisers suggests that there is greater confidence in respondents' ability to make accurate judgements regarding research quality.
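The 2:1 weighting described above can be sketched as a simple weighted average. This is an illustrative reconstruction of the arithmetic, not the actual code used by Times Higher Education or Thomson Reuters; the function and variable names are assumptions.

```python
def combined_reputation(research_score: float, teaching_score: float) -> float:
    """Combine research and teaching reputation scores at a 2:1 ratio,
    giving research twice the weight of teaching (illustrative sketch)."""
    return (2 * research_score + teaching_score) / 3


# Example with made-up scores: an institution scoring 90 for research
# and 60 for teaching receives a combined score of 80.
print(combined_reputation(90, 60))
```

Because the weights sum to a fixed total, an institution that scores equally on both measures keeps that same score overall; the 2:1 ratio only shifts the balance when research and teaching reputations diverge.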
The scores are based on the number of times an institution is cited by respondents as being "the best" in their field of expertise. Each respondent could nominate a maximum of 15 institutions. The number one institution, Harvard University, was selected most often.
The scores of all other institutions in the table are expressed as a percentage of Harvard's score, set at 100. For example, the University of Oxford received 71.2 per cent of the number of nominations that Harvard received, giving it a score of 71.2 against Harvard's 100. This scoring system is different from the one used in the World University Rankings and is intended to provide a clearer and more meaningful perspective on the reputation data in isolation.
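The normalisation step above — expressing every institution's nomination count as a percentage of the top institution's count — can be sketched as follows. The nomination counts in the example are hypothetical; only the ratio (71.2 per cent of the leader's total) is taken from the article.

```python
def normalise_scores(nominations: dict[str, int]) -> dict[str, float]:
    """Express each institution's nomination count as a percentage of the
    most-nominated institution's count, which is set to 100 (sketch of the
    scoring approach described in the article, not THE's actual code)."""
    top = max(nominations.values())
    return {name: round(100 * count / top, 1)
            for name, count in nominations.items()}


# Hypothetical nomination counts chosen to reproduce the 100 / 71.2 split
# mentioned in the article.
scores = normalise_scores({"Harvard": 1000, "Oxford": 712})
print(scores)
```

Note that scores computed this way are relative, not absolute: a score of 71.2 says nothing about how many nominations an institution received in total, only how it compares with the leader.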
The top 100 universities by reputation are listed, but Times Higher Education has agreed with data supplier Thomson Reuters to rank only the top 50 because the differentials between institutions after the top 50 become very narrow. The second group of 50 institutions are listed in groups of 10, in alphabetical order. Scores are given to one decimal place, but were calculated to a higher precision.
You can learn more about the Thomson Reuters Global Institutional Profiles project, the data source for the rankings, here: http://ip-science.thomsonreuters.com/globalprofilesproject/