
Get off your high horse

Academics should back a case-study approach to impact in the REF or risk getting far worse, says Claire Donovan

For some, the old maxim that "a camel is a horse designed by committee" neatly fits the proposed blueprint for assessing research impact in the research excellence framework. However, there are compelling reasons why it must take this peculiar shape.

Five years ago, I chaired an Australian government committee seeking the optimal method to assess the broad social benefits of academic research. This work informed the development of an Australian equivalent of the UK's REF.

We found that standard quantitative impact measures - number of patents, spin-off companies, commercialisation income and so on - lacked robustness. They said little about the benefits of the work, privileged private economic value over wider public value and had little relevance for basic research, especially in the humanities, arts and social sciences.

One day, my committee secretary phoned me with an idea for a novel metric to capture impact on policy: citations in Hansard. What about counting how often research was mentioned in parliamentary debate? Even better, he went on, whether the research was discussed favourably or not would be a measure of positive or negative policy impact. After a pause, I asked what it would mean if a positive or negative citation was made by government or opposition. The line went quiet. Thankfully, this idea was not mentioned again.

The committee concluded that a lack of robust impact measures made a metrics-only exercise untenable. Australia's previously metrics-bent chief scientist readily accepted a case-study approach.

Since the early 1990s, the research evaluation community has developed a variety of ways to gauge the broad social and economic benefits of research. State-of-the-art methods fuse case studies with robust supporting quantitative and qualitative data.

A similar case-study approach has been proposed in the consultation document on draft panel criteria and working methods for the REF. There are several reasons why this deserves support.

First, scholars in the humanities, arts and social sciences have often been the harshest critics of impact. This is understandable - impact has often been presented as an instrumental economic rationalisation of the value of research. Yet this applies largely to a metrics-only approach; case studies reveal the wider benefits of research. If we disengage with the impact agenda entirely, or reject the principle of impact case studies, a likely alternative will be "one-size-fits-all metrics" that conceal socially and culturally valuable research outcomes.

Second, some natural and social scientists advocate metrics-only impact assessment on the grounds that narratives are "fairy tales" and peer review is "subjective". But this is "impact-lite". Removing the power of narrative explanation and expert judgement will render the evaluation process superficial. "Objective" metrics often gloss over much more than they reveal.

Third, a great deal of research produces social, cultural, environmental and economic benefits that go unrecognised. Impact assessment in the form proposed illuminates this public value. It also strengthens the case for government support for the humanities, arts and social sciences on their own terms.

This is also a cautionary tale, for the Australian committee's recommendations did not come to fruition. With a change of government, a new minister looked at the case-study approach to assessing impact and saw a camel. It was replaced by four streamlined, simple metrics: plant breeders' rights, patents, registered designs and commercialisation income.

The lesson here is that the worst possible response to the consultation is to reject the case-study approach. Of course, there are caveats. The REF methodology is correct in principle but needs fine-tuning. Basic research must remain valued: one solution is for impact to be optional but eligible for rich rewards. And assessments must not stifle impact "in the making" or the efforts of younger research groups.

It is in all our best interests to respond to the REF consultation with constructive suggestions. Otherwise, get the hump and get simple metrics.

Readers' comments (2)

  • Let Claire Donovan read and reply to all of the objections to the impact agenda raised by Stefan Collini in the TLS, 13th November 2009. Her article here ignores most of the ways in which the REF has defined 'impact'. Nor is it clear how her Australian experience is relevant to our present plight.


  • Claire has some great points here, and in her very popular blog on a similar theme on the LSE Impacts blog. But case studies have been around a long time, without changing anything very much. And in the areas where research evaluation is more advanced, like health care, there has been increasing attention to finding well-balanced metrics of different kinds that operate within overarching frameworks - so cases are used less now than before. I don't know anyone who calls for external impacts to be assessed only by metrics - although that is feasible now with academic impacts and well-structured citation analysis. But equally, what is the point of assembling 5,000 case studies of impacts that no one will ever read, and for which sensible and transparent criteria of assessment have not been set out? Finally, Claire is still talking about the REF as a system of 'peer review'. This was always a misnomer (in the manner of "People's Democracy"). Peer review and purely bureaucratic assessments were always light years apart. But even HEFCE now no longer makes ANY claim that the REF is a form of 'peer review' - the phrase does not occur once in the most recent HEFCE documentation. And nor could it, because all the REF panels now include non-academics who are not 'peers' but businesspeople, civil servants or quango folk. So all talk of 'peer review' is now as obsolete as the dodo. HEFCE now promises instead only 'expert review'.

