REF Pilot: humanities impact is evident and can be measured

Judy Simons argues that the cultural and societal benefits of the arts are transformative and measurable, and must be publicised

November 11, 2010


“Impact” is a slippery concept, especially for the arts and humanities. It will vulgarise our research, protest high-minded academics, and turn first-rate universities into second-rate companies. How can you measure the impact of an activity whose worth is not only self-evident but too rich and too nebulous for functionalist metrics? While it may be justifiable to scrutinise the return on investment in science, technology, engineering and mathematics research, the humanities, so different in kind and output, cannot possibly be subjected to the same process-driven methodology.

Well, up to a point, Lord Copper.

The research excellence framework impact pilot exercise, just completed, set out to answer the question being asked, ever more insistently, by the Treasury: “If public funding is poured into university research, what does the public get for its money?”

Five disciplines, including English, were invited to test a methodology to identify and rank the impact of research outside the academy.

From the start, it was clear that special pleading for English would only sound the death knell for the humanities. They are the poor relation of the research budget anyway. For goodness’ sake, let’s protect what we have, even if it’s only crumbs from the rich man’s table. For if we believe in the value of what we are doing - and I have yet to meet a humanities scholar who does not - then surely we should be able to defend it with the articulacy and verve that is our trademark.

The pilot panel combined academics from the 2008 research assessment exercise’s English panel with representatives from “user” organisations, including the BBC, the British Library, public-examination bodies, publishers, journalism, theatre and PR firms, plus the Arts and Humanities Research Council.

The first challenge came in grappling with the terminology we had been allocated. We argued that “impact” should cover broad cultural and societal benefits as well as the commercial and economic effects of research. Indeed, once our discussions started using the term “benefit” instead of impact, the exercise became much more meaningful.

We also felt it vital to draw up and work with a set of criteria that went beyond the limited indicators in the Higher Education Funding Council for England guidance document, which had clearly baffled colleagues from participating institutions. These criteria are published in an annex to the main pilot report. A pilot is, after all, a chance to refine an imperfect methodology.

While many panel members initially shared the prevalent scepticism about the viability of the exercise, we were collectively won over by the strength of the findings. Universities submitted a series of case studies, at a ratio of one case study per 10 members of staff.

Far from finding it difficult to assess the benefits of humanities research beyond academia, we saw compelling evidence of partnerships with public cultural institutions, theatre companies, museums and galleries. We found cases where online archival materials had both created and strengthened the storehouse of cultural memory. Literacy research was shown to influence national policy, and the development of corpora for English-language teaching had clearly had huge impact domestically and internationally.

Traditional literary research by an individual could score as highly as project research by a team. Indeed, we would have welcomed more examples relating to books, such as a major biography (the product of years of painstaking research). The UK publishing industry is worth £3 billion a year and generates vast international sales, yet many departments tended to avoid the intellectual heart of the subject in favour of more peripheral examples.

Of course, some institutions were misled into trying to conform to an imposed bureaucratic agenda and so did not choose their best cases. Others relied on the celebrity factor of certain academic names or wilfully misinterpreted the task, perhaps in an attempt to undermine its credibility.

The most common failing was to equate engagement with impact. As Dorothy Parker said when asked for a sentence using “horticulture”: “You can lead a horticulture but you can’t make her think.” Having a dissemination strategy or counting heads at a public lecture are not, per se, evidence of impact.

The case studies presented the effects of research over a 15-year period, long before there was any suggestion of an “impact agenda”. A powerful story emerges about the strength and benefits of research in the humanities, research that transforms the intellectual and cultural landscape, generates commercial capital and sustains citizenship and civil society.

Who will tell it if universities don’t?
