Revamped US research PhD rankings cause bewilderment

A long-awaited analysis of the state of US research doctorate programmes has resulted in controversy after it arrived three years late and produced a novel form of ranking.

September 29, 2010

The National Research Council’s Data-Based Assessment of Research Doctorate Programmes – the first to be published since 1995 – includes data on more than 5,000 programmes at 212 universities across the US.

It was scheduled to be published in 2007, and its late arrival means that the data used are now almost five years old.

Because of the intervening turnover of faculty and changes in funding, some are questioning how relevant the findings are, especially given the impact of the recession, which the data pre-date.

An extra question mark has been raised over the new methodology, which gives doctoral programmes a ranking within a range of positions rather than a specific numerical value.

For example, the English language and literature programme at Arizona State University could be placed anywhere between fifth and 29th position out of 122.

More broadly, at the University of Michigan, the percentage of PhD programmes in the top 25 per cent nationally could be as high as 95 per cent but could also be as low as 20 per cent, according to the assessment.

The 2010 exercise also sees programmes being put in a range in two categories, known as the “S-rankings” and the “R-rankings”.

The S-rankings are based on a series of statistical analyses of characteristics of programme quality. The council surveyed academics to determine which characteristics were important to them and then used a statistical technique called “random halves” to determine the ranking range. This involved half of the responses being chosen at random and statistically compared to the characteristics of individual programmes to come up with a ranking. This process was then repeated 500 times, and the scores at the top and bottom of the spectrum were used to create a final ranking range.
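In outline, the "random halves" procedure resembles the sketch below (written in Python; the weighted-sum scoring of programme characteristics and the 5th/95th-percentile cut-offs are illustrative assumptions, not details taken from the council's report).

    import numpy as np

    def random_halves_ranking_range(characteristics, survey_weights,
                                    n_iterations=500, seed=0):
        """Estimate a ranking range for each programme via the 'random halves' idea.

        characteristics : (n_programmes, n_traits) array of programme data
        survey_weights  : (n_respondents, n_traits) array; each row holds one
                          academic's importance weights for the traits

        The weighting scheme and percentile cut-offs here are illustrative
        assumptions, not the council's published specification.
        """
        rng = np.random.default_rng(seed)
        n_respondents = survey_weights.shape[0]
        n_programmes = characteristics.shape[0]
        ranks = np.empty((n_iterations, n_programmes), dtype=int)

        for i in range(n_iterations):
            # Draw a random half of the survey responses...
            half = rng.choice(n_respondents, size=n_respondents // 2, replace=False)
            # ...average their importance weights...
            weights = survey_weights[half].mean(axis=0)
            # ...score every programme against those weights...
            scores = characteristics @ weights
            # ...and convert the scores to ranks (1 = best).
            ranks[i] = scores.argsort()[::-1].argsort() + 1

        # The spread of ranks across the repetitions gives the published range,
        # taken here as the 5th and 95th percentiles (an assumption).
        best = np.floor(np.percentile(ranks, 5, axis=0)).astype(int)
        worst = np.ceil(np.percentile(ranks, 95, axis=0)).astype(int)
        return best, worst

Repeating the calculation on many different half-samples is what produces a range of plausible positions for each programme rather than a single number.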

The second group of rankings, the R-rankings, are based indirectly on a reputation survey of academic staff. Using the same random halves method, the survey responses were compared with the characteristics of programmes in a given discipline.

If, for instance, faculty rated programmes in a certain field at Harvard University highly, other universities’ programmes that had characteristics similar to Harvard’s were also rated highly.
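One way to picture this is as a regression of the reputational ratings on programme characteristics, with the fitted relationship then used to score every programme, rated or not; the sketch below illustrates that reading of the method (the linear model and variable names are assumptions for illustration, not the council's published specification).

    import numpy as np

    def reputation_based_scores(characteristics, rated_idx, reputation_ratings):
        """Predict reputation-style scores for all programmes from a rated subset.

        characteristics    : (n_programmes, n_traits) array of programme data
        rated_idx          : indices of programmes given direct reputational ratings
        reputation_ratings : the faculty survey ratings for those programmes

        Ordinary least squares is used purely for illustration.
        """
        # Fit the relationship between characteristics and reputation
        # on the programmes that were actually rated.
        X = np.column_stack([np.ones(len(rated_idx)), characteristics[rated_idx]])
        coef, *_ = np.linalg.lstsq(X, reputation_ratings, rcond=None)

        # Apply the fitted relationship to every programme: those whose
        # characteristics resemble highly rated programmes (e.g. Harvard's)
        # inherit similarly high predicted scores.
        X_all = np.column_stack([np.ones(characteristics.shape[0]), characteristics])
        return X_all @ coef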

The confusing approach, which has left universities unable to compare their positions in these rankings to those of the 1995 assessment, has proved controversial.

The study committee responsible for compiling the rankings has not endorsed the methodology, and at the press briefing some committee members went so far as to suggest that there may be better ways to rank programmes than this exercise, which cost the sector $4 million (£2.5 million).

sarah.cunnane@tsleducation.com

Key findings from the report

The number of students enrolled in doctoral programmes in the US has increased by 4 per cent in engineering and by 9 per cent in the physical sciences since the previous exercise in 1995, but has declined by 5 per cent in the social sciences and by 12 per cent in the humanities.

On average, all fields have seen a growth in the percentage of female students. The smallest growth, 3.4 per cent, was in the humanities, which had a large percentage of female students already; the greatest growth was in engineering, where female enrolment increased to 22 per cent overall.

The percentage of PhDs awarded to students from under-represented minority groups has increased in all fields. For example, the proportion increased from 5.2 per cent to 10.1 per cent in engineering, and from 5 per cent to 14.4 per cent in the social sciences.
