How can we measure the quality of research without quantity data?

The lack of an intensity measure could dent confidence in the outcome of the research assessment exercise, says Brian Cantor

December 11, 2008

When so much rests on the outcome of a process, there should be no great surprise if it provokes debate.

The research assessment exercise (RAE) has proved itself over many years to be important and valuable. It has provided an accepted measure of quality, shown value for money and proved to be a useful tool in promoting UK universities and their research around the world.

However, determining "quality" is not straightforward. The recent decision not to publish the proportion of eligible staff entered for assessment at each university raises questions about how observers will interpret and use RAE data on research quality in universities.

This decision could dent confidence in the outcome, to the detriment of all concerned. It is the collective interests of the higher education sector that matter here, for this is a sector that has performed exceptionally well in the past two decades, responding effectively to the accumulating agendas of accountability, impact, widening participation and knowledge transfer.

All this has happened with the skilful support of the four funding councils, who have successfully provided an independent voice for the sector.

As a research-intensive university, York has a significant stake in the results of the RAE. I am confident that our reputation for strong research will be reinforced, wherever and however scores are reported. My concern here is not for the interests of one university, but that loss of confidence in the RAE could damage our collective interests.

The RAE is a measure of research performance, a mechanism for the distribution of quality-related research (QR) funding and a key determinant of a university's reputation.

While the last of these may not be expressed as a formal purpose of the RAE, it is the reality. The influence of the RAE extends beyond league tables to decisions by research councils, overseas governments and potential students.

It is critically important for the sector, therefore, that all stages of the process have unquestionable integrity. The best guarantees of that are transparency, clarity and collective agreement among stakeholders. All three now appear to be at risk after the decision not to publish the proportion of eligible staff entered for assessment at each university.

Because questions have been raised over how the eligibility of staff should be defined, the relevant data will not now be published. Without an intensity measure, though, there is no way to assess the reality of research quality.

The raw scores achieved by departments are meaningless if there is no indication of what proportion of staff was actually entered. Volume and intensity measures should be absolutely clear, and if they are not made available as part of the RAE itself, proxies will be found that may be less reliable and accurate.

The funding councils recognised the importance of the intensity measure by making it a central feature of the last RAE in 2001. Its absence this time will lead to discontinuities and potential misjudgments about how research strengths have changed and developed in the intervening years.

This has serious implications for academics as individuals and for equality of opportunity. A university that strives to enter the highest proportion of staff demonstrates confidence in their performance and a commitment to developing their careers.

The interests of other stakeholders must also be safeguarded if the higher education sector is to be considered a responsible partner.

Universities have a duty to provide dependable information by which their performance can be judged objectively. The higher education sector cannot expect others to respect the integrity of the RAE if it becomes the subject of significant disagreement within the sector.

Does this debate over the eligibility criteria leave the sector with no alternative to the current position? In reality, there is little difference in the competing definitions of eligibility. The wiser decision would be to publish all the data, with appropriate caveats.

The RAE results this month should be a definitive judgment on the research strength of British universities. It would be sad if that were now threatened by lack of clarity and detail on what contributes to high-quality research.
