Choose your yardstick: does anyone know what the RAE really means?

Jonathan Adams says that even close scrutiny of the intriguing 2008 data gives no firm answers about who's better and who's best

January 1, 2009

"Dear vice-chancellor of (insert institution name). Just to confirm that Evidence's RAE 2008 data application 'We Can Do What We Want With Numbers Like These' shows that you are one of the 159 institutions in the UK's top five for research."

It sounds like Laurie Taylor, but it's true. The tumult and the shouting of the first exciting days of the research assessment exercise results die away. The captains and the kings, and the vice-chancellors, depart to the Caribbean for New Year. And we are left to reflect on what the numbers actually mean.

One thing is for certain though: if you think you know, then you're wrong - but don't stop trying.

We at Evidence spent a lot of time working through the detailed RAE profile data, and we're still not sure what the interpretation should be. The initial tables produced here in Times Higher Education and in other publications showed a diversity of institutional positions. Some looked intriguing, and a few seemed rather odd. There was a feeling of familiarity yet dissonance.

Research at the universities of Nottingham and Plymouth was well recognised on most permutations of the data, and these institutions, and others, would do well on any sensible index. But there were also questions. On some analyses, institutions such as the University of Edinburgh and King's College London dropped markedly - yet other data we have to hand suggested that they were firmly anchored near the top. And why did the University of Leeds home page express contentment that it ranks 15th when our analysis says it's ninth? Clearly not all is as it seems.

Let me dispel any concern that there might be problems with the data, with the RAE methodology used to arrive at the outcomes, or with their presentation. Our experience suggests that this is a very good way of looking at the quality of research in institutions.

We had all got used to the RAE's simple but simplistic index of grades. Easy to understand, we thought; a good measure of where we all stood and a good badge to stick on the door. But what was a "grade four" department? It was a collection of people, some doing work of international quality, some leading key UK groups, and some outstanding teachers who had done good work in the past.

The departmental mix was poorly captured in that single number: everyone was all good, or all bad, and the grade fed into cliff-edge funding pots with very nasty drops.

Sir Gareth Roberts' methodology, instituted for the 2008 RAE, changed all that. Each submission now receives a quality profile, so we can see how much of the mix was recognised nationally (1*), recognised internationally (2*), internationally excellent (3*) or truly world-leading (4*) in its subject area. This is much better information, but it evidently doesn't translate quite so easily into simple answers for stakeholders.

Simple averages simply don't provide enough information and can be misleading. Research performance tends to be skewed: lots of low-value points and a few very high values. Many staff publish rather few papers while a few publish an avalanche. Similarly, citations of those publications can vary wildly. Our indices of "average" - whether for funding, RAE grades or citations per publication - are nowhere near the middle of the distribution.
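A toy calculation makes the point; the citation counts below are invented, but the shape is typical of publication data.

```python
import statistics

# Invented citation counts for one department's outputs: most papers
# attract a handful of citations, a few attract a great many.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 40, 75, 120]

mean = statistics.mean(citations)      # dragged upwards by the long tail
median = statistics.median(citations)  # the "typical" output

print(f"mean = {mean:.1f}, median = {median}")  # mean = 19.3, median = 3.5
```

The mean sits nowhere near the middle of the distribution; a department's "average" can be the work of two or three people.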

What is the best department in the country, and the worst? Without grades to band them, you can rank the whole lot if you want. So, well done to museum studies at the University of Leicester. Nationally, communication, cultural and media studies; drama, dance and performing arts; and music capture ten of the top 17 slots on a weighted grade-point average. Allied health professions and studies, meanwhile, has to make do with four of the bottom ten.
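For readers who want the arithmetic: the grade-point averages behind such rankings conventionally weight the profile bands 4 down to 0. A minimal sketch, using an invented profile:

```python
def grade_point_average(profile):
    """Weighted grade-point average of an RAE 2008 quality profile.

    `profile` maps each band to the percentage of output at that level;
    the usual convention weights 4* work as 4, down to 0 for
    unclassified work.
    """
    weights = {"4*": 4, "3*": 3, "2*": 2, "1*": 1, "U": 0}
    return sum(weights[band] * pct for band, pct in profile.items()) / 100

# Invented submission: 25% world-leading, 40% internationally excellent,
# 25% recognised internationally, 10% recognised nationally.
print(grade_point_average({"4*": 25, "3*": 40, "2*": 25, "1*": 10, "U": 0}))
# 2.8
```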

Anthropology, history of art, music and drama all have more than 20 per cent of their output at 4*. Education, psychology and agriculture are under 10 per cent. Biology, chemistry, physics and mechanical engineering are in the low teens.

But I am not sure I really believe that. I don't know a lot about art, but I know what I like - and what I don't like are results that show this kind of disparity. If we look at the percentages for 4*, then the big sciences have a spread with the odd unit as high as 35 or 40 per cent, but in the arts we get values as high as 65 per cent world-leading. Be honest - are you really that good, or is this an alien way to think about "research performance"?

So, first, we have a complex interpretation of each profile, and one that may differ from one subject to another. Then, we have an analytical challenge to index the profiles to a single number that adequately reflects what lies beneath.
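That choice of index is not innocent. Shift the weights from the equal steps used above to something that rewards 4* work more heavily (the 7:3:1:0 scheme below is purely illustrative) and the same two invented profiles swap places:

```python
def index(profile, weights):
    # Weighted average of a quality profile under a chosen weighting.
    return sum(weights[band] * pct for band, pct in profile.items()) / 100

# Two invented quality profiles (% of output at each band).
unit_a = {"4*": 30, "3*": 20, "2*": 30, "1*": 15, "U": 5}
unit_b = {"4*": 15, "3*": 55, "2*": 25, "1*": 5, "U": 0}

equal_steps = {"4*": 4, "3*": 3, "2*": 2, "1*": 1, "U": 0}
top_heavy = {"4*": 7, "3*": 3, "2*": 1, "1*": 0, "U": 0}

print(index(unit_a, equal_steps), index(unit_b, equal_steps))  # 2.55 2.80 - B ahead
print(index(unit_a, top_heavy), index(unit_b, top_heavy))      # 3.00 2.95 - A ahead
```

Same data, different yardstick, different "winner". That is precisely why the first exciting tables disagreed with one another.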

Finally, there is a further challenge to weigh up those profiles across an institutional portfolio in a way that produces anything like a sensible comparison between Imperial College London and the Royal College of Art. I'm not buying this until I have had a long think. A day may come when the courage of men fails, and we decide that a league table can sum this lot up, but it is not this day.
