BRICS & Emerging Economies Rankings 2014
The Times Higher Education BRICS & Emerging Economies Rankings 2014 powered by Thomson Reuters includes only institutions in countries classified as “emerging economies” by FTSE, including the “BRICS” nations of Brazil, Russia, India, China and South Africa.
The top universities ranking uses the same methodology as the Times Higher Education World University Rankings, covering all core missions of a world-class university - teaching, research, knowledge transfer and international outlook - using 13 carefully calibrated performance indicators.
Coming on strong
China leads the pack of developing nations aiming for academic and economic growth. Phil Baty writes
China has emerged as the strongest nation in the inaugural Times Higher Education BRICS & Emerging Economies Rankings.
Its universities take first and second place in the table, which evaluates the leading research universities from the 22 countries defined by the FTSE Group as emerging economies. In all, China takes four of the top 10 places, 15 of the top 50 and 23 of the top 100, making it the best represented and highest scoring country on the list.
Peking University claims the top spot, followed by Tsinghua University (second), the University of Science and Technology of China (sixth) and Fudan University (eighth).
“China has a long history of prioritising education,” says Weifang Min, executive president of the Chinese Society for Education Development Strategies and former chairman of Peking’s council. Pedagogy was at the nation’s heart in ancient times, he says, and was key to its drive in the 1990s to “reinvigorate the country” through science and technology.
“There was a realisation of the importance of universities in cultivating talent and for knowledge innovation, which is the engine of development in the new world economy,” he says.
This same notion has now taken hold across the developing world.
Close behind China in the rankings is its neighbour Taiwan, which has 21 players in the top 100. India has 10 representatives, Turkey seven, while South Africa and Thailand have five each.
Brazil has only four representatives in the list and its fellow BRICS member, Russia, has just two – fewer than its former Eastern Bloc allies Poland (four), the Czech Republic and Hungary (three each).
The list of emerging countries considered for analysis was taken from the FTSE Group’s Country Classification, published in September. It categorises 22 nations as “advanced emerging” or “secondary emerging”.
The other countries that make up the emerging economies table are Egypt (three representatives), Chile (two), Malaysia (two), Mexico (two), the United Arab Emirates (two), plus Colombia and Morocco (one each).
Although they were included in the analysis, no institutions from Pakistan, the Philippines, Peru or Indonesia (all classified as “secondary emerging” economies) made the cut.
So beyond an ancient and deep-seated passion for education, what specifically has helped to secure China’s pre-eminence in the rankings? Success has been driven by a clear and rigorous policy to build an elite cadre of “world-class universities” to compete with the best, explains Min.
In 1995, China’s Ministry of Education launched Project 211, an initiative to improve the research standards of some 100 universities and thus to make them fit for the 21st century (hence the name). It distributed about $2.2 billion in funding between 1996 and 2000. But that was just the start.
In May 1998, Project 985 was launched. Backed by what Min describes as “huge amounts of extra funding” amounting to billions of yuan, nine national universities (which formed the C9 League) were picked out for special support (although the list of the chosen few was later expanded). As a result, international communications, exchanges and collaborations were stepped up; university management training was internationalised; and an aggressive programme of recruitment targeting leading scholars from around the world was instituted.
The Yangtze River Scholars and the Thousand Talents programmes, for example, introduced financial incentives to attract academics to the country, in particular the haigui (“sea turtles”, a homophone of “returnee” in Mandarin) – Chinese citizens pursuing academic careers abroad.
But according to Min, China has work to do before it builds truly world-class universities that can compete against the best of the developed world.
“First, at the sector level, China needs to further deepen structural and system reforms,” he says. Partly the legacy of a centrally planned economy, “the current operating mechanism of Chinese higher education still does not fit very well with the new market economy and constrains the vitality of universities”.
He adds: “There should be more autonomy for higher education institutions. Currently there is still a little bit too much governmental interference with their operation.
“There should be a more active academic atmosphere and more academic freedom for faculty members, encouraging critical and creative thinking and intellectual independence so as to make Chinese universities more innovative.”
Finally, Min says, student recruitment is in dire need of reform.
“The current system has made young people into machines who can do the entrance examinations well, but [they are] not all-round, developed, creative individuals,” he warns.
As China advances, the economies of the remaining original BRIC quartet report mixed fortunes.
Most of India’s representatives are bunched in the centre of the table, although its number one, Panjab University, takes joint 13th position, followed by the Indian Institute of Technology, Kharagpur (30th). Overall, six of its 10 representatives are IITs (specialist institutions with burgeoning reputations), but three more traditional universities take its final three places: Jadavpur University (joint 47th); Aligarh Muslim University (50th); and Jawaharlal Nehru University (57th).
Brazil only just misses out on a top 10 spot, as the University of São Paulo turns up in 11th. It is joined by the State University of Campinas (24th), the Federal University of Rio de Janeiro (joint 60th) and São Paulo State University (Unesp) (87th).
Russia has the fewest top 100 institutions among the BRIC economies, but its number one, Lomonosov Moscow State University, does make the top 10 (St Petersburg State University, its other player, takes 67th position). This modest representation is partly because some of its best institutions are too small and specialised to be considered for the institution-wide rankings.
When it emerged that three Russian institutions had made the top 100 of THE’s subject-specific physical sciences table (part of the World University Rankings results published in October), Alexander Povalko, Russia’s deputy minister of education and science, said in a statement: “This is very good news for us. Our fundamental research in the physical sciences has always been and remains one of the strongest areas.”
But he added that Russia had wider ambitions for its universities.
“This year we have launched a programme to increase the positions of Russian universities in international rankings. We plan to attract foreign students and the leading representatives of the scientific and educational industries, to increase academic mobility and to strengthen international cooperation with foreign universities,” the minister said.
Some 15 institutions have been earmarked for special support under the programme.
Russia’s neighbours Poland, the Czech Republic and Hungary are well represented in the emerging economies table.
Poland takes four top 100 places, led by the University of Warsaw (23rd). The Czech Republic has three representatives, led by Charles University in Prague (31st), a performance matched by Hungary (with the University of Debrecen and Semmelweis University tied in 60th place).
Marek Kwiek, director of the Center for Public Policy Studies and Unesco chair in institutional research and higher education policy at Poland’s Poznań University, says that Central and Eastern European institutions have major advantages over their rivals in developing countries outside the Continent.
“In terms of university systems, Poland and the Czech Republic have had universities since the 13th and 14th centuries, when the likes of Brazil were not even colonies yet. Both Czech and Polish universities are still deeply Humboldtian, which is unheard of in non-European developing countries,” he says.
The move from elite to mass participation in much of Eastern Europe (Poland’s higher education participation rate rose from below 10 per cent in 1989 to 51 per cent in 2005) has been achieved primarily via state-funded institutions, Kwiek points out, while outside Europe, “massification has been mostly based on private higher education growth, with fee-based and lower-end institutions”.
He says that in Poland and the Czech Republic, there has been a move towards more competitive research funding through new national research councils – Poland even has a funding-linked research assessment mechanism called “parametrisation”. These are creating “a new geography of research production”, he adds.
“The new stratification of research institutions emerges, in which the best are getting more and more funding and the worst are getting close to nothing. Competition is high,” Kwiek says.
“Central and Eastern European universities have also a huge advantage by being close (geographically and culturally) to Western European universities. The role of European research and structural funds (used also for universities) has been enormous. The systems are close.”
Eastern European universities’ longer-term prospects in the overall global rankings are good, he argues – as long as they are given strong funding support and continue to move away from “outmoded” communist governance models.
Such flexible and free governance is one of the secrets of Turkey’s success in the rankings, says Umran Inan, president of 20th-placed Koç University.
Turkey is one of the success stories of the inaugural rankings, with three top 10, five top 20 and seven top 100 representatives. Its number one, Boğaziçi University, is in fifth place, followed by Istanbul Technical University (seventh) and the Middle East Technical University (ninth).
Inan spent 30 years as a faculty member at Stanford University and for him, pioneering US-style institutions in Turkey has been crucial to its development: this has created freedom and flexibility, diversity of mission and (perhaps most importantly) competition, he says.
The first of these pioneer institutions was Middle East Technical, founded in 1956.
“It was completely new and had a truly visionary president, who set it up in the form of the best US universities, completely contrary to the rest of the sector at the time,” says Inan, himself an alumnus of Middle East Technical.
The next “revolutionary step” was the establishment in the 1980s of Bilkent University, Turkey’s first non-state, private not-for-profit institution, run by an independent foundation.
Bilkent was followed in 1993 by Koç University, the country’s second private university.
“After that, the floodgates were open and many non-state universities came into existence; now there are 65 to 70,” Inan says.
“I think one of the secrets of the American system was always the fact that the state and non-state universities triggered one another into excellence by creatively utilising a different set of resources under a different set of conditions,” he explains.
“I think this opportunity is the one thing that Europe (especially the likes of Greece and Spain) lacks; there is no competition for the state universities, so there is nothing that keeps them from being complacent and sliding (albeit slowly) into mediocrity.”
This is something that Turkey’s academy cannot be accused of, he says.
“The Turkish scene is crazily competitive. Out of the dust of competition between 160 universities will emerge the truly excellent ones, just like what happened at the turn of the 1900s in the US,” Inan argues.
Thailand has five top 100 players: King Mongkut’s University of Technology, Thonburi (29th), Mahidol University (52nd), Chiang Mai University (82nd), Chulalongkorn University (85th) and Prince of Songkla University (89th).
Sakarindr Bhumiratana, president of King Mongkut, says that universities are central to the government’s drive (in a country where about half the population still lives in rural areas) to spring Thailand from the “middle-income trap”.
While Thailand lacks a longer-term strategy for higher education and needs greater political stability and vision, he argues, there have been significant improvements to the country’s human resources and scientific development over the past two decades.
The biggest driver of “talent development”, he says, is the extensive range of generous government scholarships for overseas study. Under the Ministry of Science and Technology (“Most”) scholarship scheme, more than 3,000 Thai students have taken PhDs in universities all over the world. The Development and Promotion of Science and Technology Talents Project, run by the Institute for the Promotion of Teaching Science and Technology, has also provided almost 1,000 doctoral scholarships.
The creation of the National Science and Technology Development Agency, the Thailand Research Fund and the National Innovation Agency has helped to improve the system via targeted funding programmes, he says.
“In addition, over the past half-decade, several large private businesses have embarked on big investments in research and development,” says Bhumiratana. “This drives research collaborations and important career pathways for researchers.
“Thai universities are expected to play an important role in the country’s development.”
This central role for world-class universities in driving economic advancement, recognised by governments throughout the developing world, is likely to push emerging economies’ institutions up the global ladder in the years to come.
Phil Baty is editor of Times Higher Education Rankings
Firm foundations: New rankings, same method
Underpinning the Times Higher Education BRICS & Emerging Economies Rankings 2014 is a sophisticated exercise in information-gathering and analysis: here we detail the criteria used to assess the global academy's greatest universities
Times Higher Education's rankings are the only global university performance tables to judge research-led universities across all their core missions - teaching, research, knowledge transfer and international outlook.
We employ 13 carefully calibrated performance indicators to provide the most comprehensive and balanced comparisons, which are trusted by students, academics, university leaders, industry and governments.
The methodology for the BRICS & Emerging Economies Rankings 2014 is identical to that used in the Times Higher Education World University Rankings, offering a year-on-year comparison based on true performance rather than methodological change.
Our 13 performance indicators are grouped into five areas:
- Teaching: the learning environment (worth 30 per cent of the overall ranking score)
- Research: volume, income and reputation (worth 30 per cent)
- Citations: research influence (worth 30 per cent)
- Industry income: innovation (worth 2.5 per cent)
- International outlook: staff, students and research (worth 7.5 per cent).
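The way the five category weights combine into an overall score can be sketched as a simple weighted sum. The category scores below are invented for illustration; only the weights come from the list above.

```python
# Category weights from the rankings methodology (they sum to 1.0).
weights = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "industry_income": 0.025,
    "international_outlook": 0.075,
}

def overall_score(scores: dict) -> float:
    """Weighted sum of the five category scores (each on a 0-100 scale)."""
    return sum(weights[cat] * scores[cat] for cat in weights)

# Hypothetical category scores for an illustrative institution.
example = {
    "teaching": 70.0,
    "research": 65.0,
    "citations": 80.0,
    "industry_income": 50.0,
    "international_outlook": 60.0,
}
print(round(overall_score(example), 2))  # 70.25
```

Note how the 2.5 per cent industry income indicator barely moves the total, while the three 30 per cent categories dominate it.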
Universities are excluded from the rankings if they do not teach undergraduates; if they teach only a single narrow subject; or if their research output amounted to fewer than 1,000 articles between 2007 and 2011 (200 a year).
In some exceptional cases, institutions that are below the 200-paper threshold are included if they have a particular focus on disciplines with generally low publication volumes, such as engineering or the arts and humanities.
Further exceptions to the threshold are made for the six specialist subject tables.
To calculate the overall rankings, "Z-scores" were created for all data sets except for the results of the academic reputation survey.
The calculation of Z-scores standardises the different data types on a common scale and allows fair comparisons between different types of data - essential when combining diverse information into a single ranking.
Each data point is given a score based on its distance from the mean average of the entire data set, where the scale is the standard deviation of the data set.
The Z-score is then turned into a "cumulative probability score" to arrive at the final totals.
If University X has a cumulative probability score of 98, for example, then a random institution from the same data distribution will score below University X 98 per cent of the time.
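The standardisation described above can be sketched in a few lines. This assumes an approximately normal distribution for the cumulative probability step, which the article implies but does not state; the raw values are invented for illustration.

```python
from statistics import NormalDist

def z_scores(values):
    """Standardise raw indicator values: each value's distance from the
    mean, measured in units of the data set's standard deviation."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

def cumulative_probability(z):
    """Convert a Z-score to the percentage of institutions drawn from
    the same distribution expected to fall below it."""
    return NormalDist().cdf(z) * 100

# Invented raw indicator values for five hypothetical institutions.
raw = [120.0, 95.0, 210.0, 160.0, 75.0]
zs = z_scores(raw)
probs = [cumulative_probability(z) for z in zs]
```

An institution scoring exactly at the mean has a Z-score of 0 and a cumulative probability score of 50; the highest raw value maps to the highest final score.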
For the results of the reputation survey, the data are highly skewed in favour of a small number of institutions at the top of the rankings, so last year we added an exponential component to increase differentiation between institutions lower down the scale, a method we have retained for this year's tables.
Institutions provide and sign off their institutional data for use in the rankings.
On the rare occasions when a particular data point is missing - which affects only low-weighted indicators such as industrial income - we enter a low estimate between the lowest value reported and the average: the 25th percentile of the values reported by the other institutions.
By doing this, we avoid penalising an institution too harshly with a "zero" value for data that it overlooks or does not provide, but we do not reward it for withholding them.
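A minimal sketch of that imputation rule: take the 25th percentile of the values other institutions reported for the indicator. The exact percentile convention THE uses is not specified, so this assumes simple linear interpolation.

```python
def low_estimate(reported):
    """25th percentile of the values other institutions reported for an
    indicator - used in place of a missing data point, so an institution
    is neither zeroed out nor rewarded for withholding data."""
    ordered = sorted(reported)
    # Linear-interpolation percentile (one of several common conventions).
    pos = 0.25 * (len(ordered) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(ordered) - 1)
    return ordered[lo] + (pos - lo) * (ordered[hi] - ordered[lo])

# Invented reported values: the estimate lands between the minimum and the mean.
print(low_estimate([10.0, 20.0, 30.0, 40.0, 50.0]))  # 20.0
```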
International outlook: People, research (7.5%)
This category looks at diversity on campus and to what degree academics collaborate with international colleagues on research projects - both signs of how global an institution is in its outlook.
The ability of a university to attract undergraduates and postgraduates from all over the planet is key to its success on the world stage: this factor is measured by the ratio of international to domestic students and is worth 2.5 per cent of the overall score.
The top universities also compete for the best faculty from around the globe. So in this category we adopt a 2.5 per cent weighting for the ratio of international to domestic staff.
In the third international indicator, we calculate the proportion of a university's total research journal publications that have at least one international co-author and reward higher volumes.
This indicator, which is also worth 2.5 per cent, is normalised to account for a university's subject mix and uses the same five-year window as the "Citations: research influence" category.
Research: Volume, income, reputation (30%)
This category is made up of three indicators. The most prominent, given a weighting of 18 per cent, looks at a university's reputation for research excellence among its peers, based on the 10,000-plus responses to our annual academic reputation survey.
This category also looks at university research income, scaled against staff numbers and normalised for purchasing-power parity.
This is a controversial indicator because it can be influenced by national policy and economic circumstances.
But income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, our experts suggested that it was a valid measure.
This indicator is fully normalised to take account of each university's distinct subject profile, reflecting the fact that research grants in science subjects are often bigger than those awarded for the highest-quality social science, arts and humanities research. It is given a weighting of 6 per cent.
The research environment category also includes a simple measure of research productivity - research output scaled against staff numbers.
We count the number of papers published in the academic journals indexed by Thomson Reuters per academic, scaled for a university's total size and also normalised for subject. This gives an idea of an institution's ability to get papers published in quality peer-reviewed journals.
This indicator is worth 6 per cent overall.
Citations: Research influence (30%)
Our research influence indicator is the flagship. Weighted at 30 per cent of the overall score, it is the single most influential of the 13 indicators, and looks at the role of universities in spreading new knowledge and ideas.
We examine research influence by capturing the number of times a university's published work is cited by scholars globally. This year, our data supplier Thomson Reuters examined more than 50 million citations to 6 million journal articles, published over five years. The data are drawn from the 12,000 academic journals indexed by Thomson Reuters' Web of Science database and include all indexed journals published between 2007 and 2011.
Citations to these papers made in the six years from 2007 to 2012 are also collected.
The citations help show us how much each university is contributing to the sum of human knowledge: they tell us whose research has stood out, has been picked up and built on by other scholars and, most importantly, has been shared around the global scholarly community to push further the boundaries of our collective understanding, irrespective of discipline.
The data are fully normalised to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage.
We exclude from the rankings any institution that publishes fewer than 200 papers a year to ensure that we have enough data to make statistically valid comparisons.
Industry income: Innovation (2.5%)
A university's ability to help industry with innovations, inventions and consultancy has become a core mission of the contemporary global academy.
This category seeks to capture such "knowledge transfer" by looking at how much research income an institution earns from industry, scaled against the number of academic staff it employs.
"Industry income: innovation" suggests the extent to which businesses are willing to pay for research and a university's ability to attract funding in the competitive commercial marketplace - useful indicators of institutional quality.
The category is worth 2.5 per cent of the overall ranking score.
Teaching: The learning environment (30%)
This category employs five separate performance indicators designed to provide a clear sense of the teaching and learning environment of each institution from both the student and the academic perspective.
The dominant indicator here uses the results of the world's largest invitation-only academic reputation survey.
Thomson Reuters carried out its latest reputation survey - a worldwide poll of experienced scholars - in spring 2013.
It examined the perceived prestige of institutions in both research and teaching. There were just over 10,000 responses, statistically representative of global higher education's geographical and subject mix.
The results of the survey with regard to teaching make up 15 per cent of the overall rankings score.
The teaching and learning category also employs a staff-to-student ratio (academic staff numbers relative to an institution's total student numbers) as a simple (and admittedly crude) proxy for teaching quality.
The proxy suggests that where there is a healthy ratio of students to staff, the former will get the personal attention they require from the institution's faculty.
This measure is worth 4.5 per cent of the overall ranking score.
The teaching category also examines the ratio of doctoral to bachelor's degrees awarded by each institution.
We believe that institutions with a high density of research students are more knowledge-intensive and that the presence of an active postgraduate community is a marker of a research-led teaching environment valued by undergraduates and postgraduates alike.
The doctorate-to-bachelor's ratio is worth 2.25 per cent of the overall ranking score.
The teaching category also uses data on the number of doctorates awarded by an institution, scaled against its size as measured by the number of academic staff it employs.
As well as giving a sense of how committed an institution is to nurturing the next generation of academics, a high proportion of postgraduate research students suggests the provision of teaching at the highest level - teaching that is attractive to graduates and effective at developing them.
Undergraduates also tend to value working in a rich environment that includes postgraduates. This indicator is normalised to take account of a university's unique subject mix, reflecting the different volume of doctoral awards in different disciplines, and makes up 6 per cent of overall scores.
The final indicator in the category is a simple measure of institutional income scaled against academic staff numbers.
This figure, adjusted for purchasing-power parity so that all nations may compete on a level playing field, indicates the general status of an institution and gives a broad sense of the infrastructure and facilities available to students and staff. This measure is worth 2.25 per cent overall.