RAE's non-specialist gambit could have led to blunders, says study

Non-expert judges may not have been up to the task, chess research claims. Zoe Corbyn reports

May 7, 2009

A study into the behaviour of expert chess players has led a team of researchers to advance a controversial hypothesis: that those who judged the quality of work submitted to the 2008 research assessment exercise (RAE) were not qualified for the task.

The academics contacted Times Higher Education last week to report the results of their study into skill levels and behaviour among chess experts.

Their study, published in the journal Cognitive Science, looked at how 24 top chess players performed when they were forced to begin games with unfamiliar moves, rather than the openings they specialise in.

After starting in this way, the players' performance dropped markedly, as measured on the Elo rating scale, the international yardstick of chess strength. Players normally rated as grandmasters fell more than 2,600 places in the world rankings, performing at a level closer to that of international masters.
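To put such a drop in context, the Elo system converts the rating difference between two players into an expected score. The short sketch below illustrates the standard Elo formula; the specific ratings used are illustrative assumptions (roughly typical grandmaster and international master strength), not figures taken from the study.

def elo_expected_score(rating_a, rating_b):
    # Expected score for player A against player B under the
    # standard Elo model: win probability plus half the draw
    # probability.
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Illustrative ratings only: ~2,600 (strong grandmaster) versus
# ~2,400 (international master). A 200-point gap implies the
# stronger player is expected to score about 0.76, i.e. roughly
# three points out of every four games.
print(round(elo_expected_score(2600, 2400), 2))  # 0.76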

But the cognitive psychologists said that the findings from the paper, "Specialisation effect and its influence on memory and problem-solving in expert chess players", do not simply help to explain the mechanisms of learning and expertise. They also have ramifications for the RAE and its replacement, the research excellence framework.

The RAE saw academic peer reviewers judge the quality of research papers outside their specialist areas. Just like the chess players, who floundered outside their comfort zones, the RAE reviewers are also likely to have struggled, the researchers claim.

"(Our research) sheds some worrying light on (the) assumption ... that experts in the panel of each discipline were able to judge the quality of papers when these were outside their domain of specialisation," the researchers said.

"If the results of our experiments generalise to the type of evaluation done in the RAE, then there is room for doubt that the assessors were properly qualified for their task."

The team comprises Fernand Gobet, professor of cognitive psychology at Brunel University; Peter McLeod of the department of experimental psychology at the University of Oxford; and Merim Bilalić of the department of neuroradiology at the University of Tübingen in Germany.

The researchers acknowledged that there are obvious differences between evaluating the quality of research papers and finding the best chess move, but added that it is "doubtful" whether the effect of non-specialisation would disappear.

In fact, it could be more pronounced because there are greater differences in specialisation in the academy than among chess players, they said.

Having research papers evaluated by two experts, as was done in the RAE, might "reduce" the effect, but there would be no guarantees if both academics came from outside the domain under scrutiny, they added.

Professor Gobet emphasised that the extrapolation of the research findings to the RAE was not meant as a joke.

"The implications are very serious for the entire approach," he said.

• The Higher Education Funding Council for England has published the content of more than 2,300 submissions to the 2008 RAE.

Anyone can find out exactly which research papers by which academics in every department were submitted for assessment.

Also available is information about each department's research income, and how they described their environment and "esteem" to RAE reviewers.

www.rae.ac.uk

zoe.corbyn@tsleducation.com
