
Impact factors: the elephant in the room

The theme today is journal impact factors and what they are perceived to mean. In other words, let’s take a good close look at some rich data around journal impact factors. Some of this might be useful to kick off a discussion about how funding decisions are made at your institution or at other funding bodies. Or not.

Researchers’ perceptions of impact factors

In a 2018 study by Meredith Niles and others, 338 academics from 55 institutions in Canada and the US reported their perceptions of how journal impact factors influence the choice of where to publish. While 27% rated impact factors as very important to themselves, a considerably larger 39% considered them very important to their peers. The question arises: why do impact factors have this mystique? Isn’t it more important that one’s research is widely disseminated to other researchers? In an ideal world, wouldn’t researchers want that to be the main criterion for selecting a means of publication?

Why do researchers publish?

As one reviewer, Björn Brembs, a neuroscientist at the University of Regensburg in Germany, commented in eLife, “This work shows just how much science is in dire need of a healthy dose of its own medicine, and yet refuses to take the pill”. The divide between men’s and women’s publication ‘outputs’ was also apparent: women tended to be more cautious about the expense of publication and published far less, on average. The researchers encourage more public discussion about the value of using metrics and publication counts when evaluating academics for advancement, tenure and funding. Some formalisation of this debate has already happened via HuMetricsHSS (Humane Metrics in the Humanities and Social Sciences) and DORA (Declaration on Research Assessment). I have also written about European efforts in another blog post.

How do journals enlarge their impact factors artificially?

For another look at the flawed methodology behind journal impact factor ratings, Dalmeet Singh Chawla gives a quick overview of some intriguing findings by Vincent Larivière and Cassidy Sugimoto about the practices journal publishers use to boost their impact factors.

Niles and her team suggest that these results, and discussions like this one, may help academic researchers by showing them “that their peers share their values, rather than in changing the values themselves”. In other words, there is an ‘elephant in the room’ that people try to ignore. But it is there, nevertheless.