gears turning gears

Assessment of research quality for funding decisions

This time we are looking at how research quality is assessed for funding decisions (the nitty gritty of some value judgements that appear to be made in quantitatively based evaluations).

How are research evaluations for funding decisions made?

Broadly speaking, three main kinds of evaluation are used to assess research performance in higher education institutions: those based on quantitative metrics (e.g. journal ‘quality’, citation counts, external research income), ‘expert peer’ assessment of research activities, and a mixture of the two.

What might be the pitfalls of an apparently rigorous assessment framework?

As an example, the UK uses the Research Excellence Framework (REF) to inform research funding allocations to its universities. Working from the REF 2014 data, Mehmet Pinar and Emre Unlu investigated the characteristics of highly scoring research environments, one of the elements the REF assesses. Both Professor Pinar and Dr Unlu work at Edge Hill University’s Business School. What concerned them was the lack of an agreed weighting for the specific quantitatively based factors in the guidelines given to peer reviewers of the research environment.

As evaluators weighed the importance of these factors differently, the peer evaluation was inconsistent. Findings include a tendency for organisational units generating high external research income to be rated as better research environments in almost all assessed subject areas. The number of postgraduate completions per full-time equivalent staff member was an influential factor in some subject areas, but not in over twice as many. (The writers caution that completion numbers in themselves are no guarantee of these postgraduates’ research quality.) There was also a skew towards larger institutions being graded more highly, and institutions with a member on the evaluation panel for a particular discipline tended to be scored favourably in that discipline. A number of other insights are available in the full article.
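To make the weighting problem concrete, here is a minimal, purely hypothetical sketch: the units, metrics, weights, and numbers below are all invented for illustration, and this is not the paper’s method or the REF’s procedure. It simply shows how two evaluators, applying different weights to the same two metrics, can rank the same two units in opposite orders.

```python
# Purely hypothetical illustration: two evaluators score the same two
# units on the same (normalised) metrics, but weight them differently.
# All names and numbers are invented.

metrics = {
    "Unit A": {"external_income": 0.9, "phd_completions": 0.3},
    "Unit B": {"external_income": 0.4, "phd_completions": 0.8},
}

# Each evaluator's personal (unagreed) weights for the two factors.
evaluators = {
    "Evaluator 1": {"external_income": 0.8, "phd_completions": 0.2},
    "Evaluator 2": {"external_income": 0.3, "phd_completions": 0.7},
}

for name, weights in evaluators.items():
    # Weighted-sum score for each unit under this evaluator's weights.
    scores = {
        unit: sum(weights[m] * values[m] for m in weights)
        for unit, values in metrics.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(name, {u: round(scores[u], 2) for u in ranked}, "->", ranked[0])

# Evaluator 1 ranks Unit A first (0.78 vs 0.48); Evaluator 2 ranks
# Unit B first (0.68 vs 0.48), from identical metric values.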

What are some big picture messages for research evaluation approaches?

Intriguingly, the authors also mention that the ‘novelty’ and ‘significance’ of research activity might not be captured by bibliometric information such as citation counts. Innovative research often appears first in so-called ‘low impact factor’ journals, though with luck peer review evaluation will still recognise it once word gets out. Eleonora Dagiene gives a very straightforward analysis of some of the problems with how the quality of academic books is currently assessed. The Leiden Manifesto makes some resoundingly well-argued points on interpreting quantitative information about research quality in a more holistic context.

Also from the Netherlands comes the recently announced Strategy Evaluation Protocol. A highlight is the idea that a publisher’s ranking would be partially based on whether it is ‘regarded as very important for communication between researchers.’ As digital possibilities for communication are ever evolving, this offers food for thought about what those communication and collaboration channels might become.