Research intelligence - The prodigal returns to open arms

Impact makes Australian comeback after successful UK tour. Paul Jump reports

July 12, 2012

Australians are well known for their boomerangs - and it seems that even elements of research policy that they throw away sometimes come back.

A policy of assessing the impact of Australian universities’ research was first proposed in the mid-2000s by the Liberal-National coalition government led by John Howard.

It was proposed that impact would account for 20 per cent of the scores in a new national research assessment programme, known as the research quality framework (RQF), along the lines of the UK’s research assessment exercise.

The government committee that developed the impact methodology was chaired by Claire Donovan, who is now a reader in assessing research impact at Brunel University.

In her view, Howard’s right-wing government wanted to boost innovation by linking academic concerns more closely to those of industry. But, she added, it also wanted to demonstrate that the humanities and the social sciences - to which it was “hostile” - were “useless and [consisted of] ivory tower conversations that don’t mean anything to the normal Australian”.

Nevertheless, the inability of any available metrics to account for the cultural impact of humanities research was one of the arguments her committee used to lobby successfully for an assessment approach based on narrative case studies.

The elite Group of Eight universities, however, remained fiercely opposed to assessment based on impact. Dr Donovan’s perception was that they were concerned about funding being diverted from high-quality research.

But according to Lyn Yates, pro vice-chancellor for research at the University of Melbourne, a Group of Eight member, the opposition stemmed more from concerns over the extra administrative burden of impact reporting.

“In fact, we were pretty confident we would come out top or near the top on impact as well,” she said.

Either way, the Group of Eight’s views were picked up and taken to heart by Kim Carr, the opposition Labor Party’s shadow minister, who tossed the impact agenda into the distance when his party came to power in 2007 and he became minister for innovation, industry, science and research.

But it was not long before impact assessment appeared in the blue skies of UK academia. A report commissioned by the Higher Education Funding Council for England in 2009 identified the RQF’s approach as the most sophisticated yet devised, and Hefce drew heavily on it during its formulation of the impact element of the research excellence framework - the first submissions to which are due next year.

Meanwhile, back Down Under, the RQF, stripped of its impact element, was renamed Excellence in Research for Australia (ERA) and was run for the first time in 2010.

But the Australian Technology Network of Universities (ATN) in particular was unimpressed, and last year announced plans to run its own pilot exercise in impact assessment.

Much to observers’ surprise, the Group of Eight then asked to join in with what became known as Excellence in Innovation for Australia (EIA). According to Dr Donovan, the key to the change of heart was the fact that the EIA would be entirely separate from the ERA - with the implication that if it were linked to university funding, it would draw from a separate funding pot. Professor Yates agreed that it was “helpful” to keep the ERA and the EIA separate “so that they are not conflated in a muddied way”.

Then, at the end of last year, Mr Carr was reshuffled and his former ministry accepted the recommendation of a report it had commissioned that a feasibility study be carried out into impact assessment. The EIA, which opened for submissions last month and will close at the end of August, is expected to form the basis of that study, and its organisers are liaising with the government accordingly.

This is no coincidence

Reading through the submissions guidance for the 12 universities taking part in the EIA, it is easy to agree with Dr Donovan that Australia has essentially just got its own boomerang back. The guidelines bear at times a striking resemblance to the RQF-influenced REF protocols. This is most apparent in the case-study approach, which requires universities to link well-evidenced impact to specific pieces of research. The criteria of “reach” and “significance” against which impact will be assessed are also lifted straight from the REF.

The similarities are not coincidental. Hefce is represented on the EIA’s advisory committee and, according to Dr Donovan, the fact that the EIA is explicitly informed by the UK template will lend it an “extra veneer of credibility”.

However, there are significant differences between the UK and Australian approaches. For instance, the EIA eschews the REF’s focus on disciplines. Instead, universities are required to demonstrate impact in four categories identified by the Australian Bureau of Statistics: economy, society, environment and culture. Only five submissions are permitted in each, so not every discipline or department will have to participate.

Unlike the REF, the EIA’s assessment panels - one for each impact type - will predominantly consist of research “end users” drawn from government, industry and business. Academics will still be on hand to advise on the underlying research, but universities are expected to keep technical vocabulary to a minimum.

“We should not need academic jargon to describe the delivered benefits of research,” said Vicki Thomson, executive director of the ATN. Users were best placed to “determine the nature and value of impact”, she added.

Ian McMahon, director of research at the Group of Eight, added that the preponderance of users would “add credibility to the assessment”.

Dr Donovan agreed: “In Australia, [the popular image of] isolated, aloof academics isn’t one that is liked a great deal. There would be scepticism about academics being in charge of processes that passed judgement on them and said: ‘Give us money.’”

The EIA also lacks any quality threshold for the research underlying impact - although it is hoped that the exercise will shed light on whether there is a link between impact and research quality.

Nor will the EIA be linked to any funding formula - although, as with the ERA, this is the ultimate expectation. For Professor Yates, the “big hope” is that the “definite benefits” of research highlighted by the EIA will convince the Australian government to increase the size of the funding pot.

Ms Thomson believes that the key will be to persuade the electorate of the benefits of research - to which end, November’s final EIA report will include “a plain-language exposition of how research has delivered benefits” to Australia.

“Simply put, research needs to matter to the community for it to be a high priority for government,” she said.

paul.jump@tsleducation.com
