Report calls for EU support for innovation evaluation
Brussels, 30 Mar 2006
Evaluating innovation programmes and activities more effectively should be 'an integral part of achieving the Lisbon objectives', a report on monitoring and evaluating innovation programmes has concluded.
The report is the result of a study by a high level working group headed by Louis Lengrand and Associates. It was conducted at the request of the European Commission's Enterprise DG, and involved case studies of seven countries and regions, interviews with experts, and a survey of the views of other experts.
The study identified three different cultures of innovation evaluation. The most advanced cultures (Finland, Scotland, Sweden) use evaluation as a central element in the development of their innovation policies. Germany was identified as belonging to a second group, in which considerable experience of evaluation has been accumulated. But while evaluations are used as a learning tool, they are mainly limited to the analysis of programme design and delivery, with less influence on policy design.
A third group is now recognising evaluation as important, but is further behind in developing and implementing it. Hungary, Spain, and the Walloon Region in Belgium are included in this group.
Evaluating the effectiveness of innovation programmes is no simple task. Many of the results are not visible until after the programme has finished, and in some cases not until many years after that. The report also refers to 'overdetermination' as another barrier to evaluation. Overdetermination 'refers in part to the "noise" that makes it hard to unpick the outcomes of interventions from all sorts of other changes induced by ongoing political and economic cycles and events. In addition, it refers to the ways in which outcomes are influenced by the intertwining of policy initiatives,' reads the report.
The report also notes the difficulties involved in seeking to 'bolt on' evaluation after a programme has been completed. At this point it is often too late to introduce appropriate indicators. 'Designing programmes with evaluation in mind can substantially enhance the usefulness of the evaluation,' the report advises.
The case study on Finland paints a glowing picture of the country's innovation evaluation activities. TEKES, Finland's funding agency for research and innovation, plays a major role in supporting innovation and evaluating it in the country. The agency is now in its third generation of evaluation programmes. While other countries are struggling to carry out evaluation, Finland is actually attempting a radical innovation in its innovation policy.
'What matters is not to give support to companies, but to improve the innovation capacity of Finnish enterprises to enable them to compete globally. What TEKES is looking for now in evaluation is neither to focus mainly on activity level, nor only on the outcome in terms of new technology or intellectual capital of the projects. Rather, it seeks to assess the impact on economic wealth, the well-being of citizens and sustainable development,' states the report.
The study found that Finland's overall economic success is, in part, based on a sustained high level of public funding for research and innovation. Knowing where to direct the country's resources is down to its evaluation programmes. 'Thus it can be concluded that using evaluation results as a tool to design policies is - at the very least - not damaging,' states the report.
Most countries are some way behind Finland in terms of both innovation and evaluation. In Spain, it is difficult to combine an evaluation agenda, an administrative agenda and a political agenda, according to the report.
The country's research, development and innovation plan foresees the systematic evaluation of innovation activities, and resources are reserved accordingly. 'However, the traditional command and control, annual budgeting and corresponding willingness to spend accordingly, [are] still blocking initiatives to evaluate,' the high level group found.
'Either there is no evaluation at all (although it is planned) or independent bodies (such as the Cotec Foundation) perform evaluations but these are not used because they are independent from the government,' the report continues.
The report notes that Spain would benefit considerably from EU support, in terms of both financial assistance and instruction. The funding could be used to help stimulate a national market for evaluation and to import consultancy advice, the high level group recommends. It also suggests that European Commission staff could spend part of their time in countries such as Spain in order to increase the stock of relevant evaluation knowledge and skills.
Other policy recommendations include encouraging learning processes by promoting the development, maintenance and use of a database of innovation programme evaluations, encouraging dialogue, and ensuring the publication and dissemination of evaluation results.
At EU level, the group recommends: establishing fora to discuss and implement evaluation-based innovation policy design and implementation; monitoring and documenting the changing nature of evaluation; finding evidence of good practice; and supporting the development of new evaluators at Community level through training and networking activities.
In addition to the final report, the study also resulted in a practical guide to evaluating research programmes and a pilot initiative. The guide, known as SMART INNOVATION, is based on a question and answer framework, and makes the case for evaluation in a way that, the group believes, can be used to build a culture of evaluation. The pilot initiative will deliver practical experience and general guidance, as well as terms of reference for commissioning evaluation.