Go for gold, silver or bronze

Let's learn from the Olympics and allow departments to pick the areas that feed into rankings, argues Richard Rose

The Government's obsession with quantitative assessment makes it easy to create league table rankings for universities. As only one institution can be first, this feeds an elitist obsession with position. However, since the Government funds more than 100 universities, it should be wary of promoting rankings that encourage most of them to be dismissed as "failing" to be top.

League tables can use computerised metrics that impersonally assign a score to every institution. Each score is reliable, that is, anyone applying these procedures will come up with the same answers. However, that does not make them valid. The short answer is to abandon league tables.

Whereas football coaches know what it takes to be top of the league, there is no such agreement among academics. Philosophers may value big ideas while engineering departments may value big grants. Moreover, as Thomas Kuhn argued, the real intellectual breakthroughs often come from individuals who stand outside the consensus institutionalised by peer review.

Adding up assessments based on different criteria assumes commensurability between things as different as quantitative and qualitative research, or teaching high-flyers and teaching deprived students. It also ignores the statistical tendency for average scores to bunch together universities of differing quality, so that differences of one or two tenths of a point produce differences of up to ten places in league standings.

Since most universities are likely to have a mixture of good, average and not-so-good departments, throwing different attributes together squashes their profiles into single numbers that lose more information than they contain.

As Ted Gurr, the creator of polimetrics, or quantitative political analysis, has stipulated: "We must name things before we count them." Instead of reducing intelligent academics to the level of fund managers, it makes more sense to question them about what they are good at. Just as students are not expected to answer all examination questions, so departments could be asked to select priorities from a list of up to a dozen criteria. After stating what they are good at, they can then be asked: "What's your evidence?" This simple question can produce revealing answers.

For example, one institution may regard presenting papers at a miscellany of conferences as evidence of scholarship, while another may cite work in internationally known journals. Given the approximate nature of much evidence, ordinal assessments on a five-point scale ranging from excellent to poor are more defensible than falsely precise double- or treble-digit scores.

If departments make sensible self-assessments, most would be rated as good or average at what they give priority to, having opted not to be evaluated on criteria where they are weak.

A spidergram can show how departments shape up. Its polyhedral profile discourages the reduction of evaluations to a single hierarchical ladder. Moreover, its flexible form calls attention to the most pronounced features of a department, strong and weak.

A multiplicity of indicators can show in what ways a department is already strong and where it could do better. With this information, a university vice-chancellor or budget committee faced with competing claims could decide whether average or good is good enough, or whether investment is warranted to make a department excellent. A department that chooses only the softest criteria for evaluation can come under more detailed scrutiny.

The Olympics offer a better model than the World Cup for university rankings of the sort Times Higher Education conducts. The Olympics allow each country to choose the sports in which it competes. There is no single top prize, but many events awarding gold, silver and bronze medals. Even the country with the most medals is not regarded as good at everything. A prime minister can take pride in the gold medals that their country wins, just as vice-chancellors could take pride in their star departments.

An academic Olympics could award medals in such diverse fields as "bums on seats", PhD hoods around necks or even Nobel prizes banked.
