Assessing Scientists By Publications And Impact Factor Is One Of The Most Harmful Scientific Practices

Moving away from assessing scientists by publications and impact factor can benefit the entire scientific community and will solve multiple problems

There are many problems with current scientific practices, ranging from high levels of stress and poor use of funding to discrimination, irreproducibility and opaque results. I want to focus on a single problem: the practice of assessing scientists on the basis of publications and, in particular, on the impact factor of the journals in which those publications appear. I choose to focus on this aspect because I think it is one of the most harmful scientific practices and the driving force behind several other problems.

What problems do I believe stem, partially or fully, from the tendency to evaluate scientists by publications and impact factor? It creates poor incentives that can lead to misconduct, it misrepresents the true impact of a paper, it promotes a narrow-minded view of science, it perpetuates inequality, it can lead to student/supervisor conflict, and it is simply a poor metric for evaluation. To keep this brief, and since the first problem has been discussed extensively elsewhere, I will give a short treatment of only the second and third problems before turning to solutions.

The True Impact Of A Paper

One of the troubling aspects of the impact factor is that it dominates what we mean when we talk about a "high-impact" paper and can distort what actually matters. The true impact of a paper should be measured by the effect it has on the world and on the lives of the beings living in it, not by citation counts, although the two will sometimes go hand in hand.

One of the reasons researchers publish in non-mainstream journals, Chavarro, Tang, and Ràfols (2017) found, is to fill knowledge gaps. "High-impact" journals favour very specific topics and methods, and it can be difficult to publish research that they consider "boring" at a particular time. This "boring" research may cover topics that are globally relevant but not trendy, or topics that matter to a specific geographical region or community. While such research may lack a high impact factor, it can have a substantial effect on the daily lives of the members of that community, an effect that much "high-impact" research will never have.

A researcher focussing on a subject relevant to a small geographical region might not produce many "high-impact" papers, but that does not make them a worse researcher than one who does. A paper detailing genome-wide association studies (GWAS) on various natural populations and a paper on managing community water resources will have very different sorts of impact, but if the latter can immediately improve the lives of a community, then that is certainly a type of impact we should acknowledge and reward.

A Broader View Of Science

A second problematic aspect of evaluation by papers and impact factor is that it fosters a narrow-minded view of science. Research output, as measured by journal articles, is certainly important, but it is only one aspect of what it means to be a scientist and to do science. Other facets include training and mentoring students, reviewing papers, communicating with the public, and creating and maintaining tools for other scientists (e.g. software, techniques and strains of model organisms). In some cases these tasks are taken up by specialists, but for the most part all of these functions, and more, fall to practising scientists, and probably need to.

There is a danger that, if these aspects of science are not rewarded or recognised, they will be neglected in favour of activities that lead purely to paper production. The entire scientific community loses out when new researchers are poorly trained or when necessary software is neither properly documented nor maintained. I am sure most scientists are frustrated when public opinion differs strongly from the scientific consensus, particularly on GMOs, evolution and climate change, but this is partly our own fault. If we do not engage with the public, or reward those who do, then the only people communicating with them are the anti-science groups. We should not make scientists choose between their research careers and other functions such as public engagement and the maintenance of common goods.

Directions For The Future

There is an ongoing discussion about publication practices in science, and I cannot yet offer a final, tangible solution. Considerable hurdles remain before reform can proceed, but a number of steps can move us in the right direction. The first is to stop relying on impact-factor-based assessments and to make evaluation criteria explicit; the San Francisco Declaration on Research Assessment (DORA) can facilitate this. Other aspects of being a scientist should also be rewarded, for example through public commitments to social responsibility or through awards for engaged scholarship and social responsiveness. Initiatives like Publons and Altmetrics, although metrics in a similar vein to the impact factor, offer an opportunity to recognise and reward a wider range of scientific outputs and impacts.

Is this even something we should care about? Absolutely. As I have pointed out, many problems stem from evaluation by publications and impact factor, and moving away from it will bring many benefits. Without the unnecessary pressure of worrying about impact factors, there is less reason to try to game the system. Taking a broader focus means that scientists can accomplish their goals in different ways: some might put more emphasis on teaching and training future researchers, others might take those students and drive new discoveries and innovation, and other labs could mix the two or spend more time developing and maintaining the software that enables this research, without having to worry that those efforts will go unrewarded. There are many aspects to science, and only by tending to all of them can we see its greatest potential realised.

Further Reading

Chavarro, D., Tang, P., & Ràfols, I. (2017). Why researchers publish in non-mainstream journals: Training, knowledge bridging, and gap filling. Research Policy, 46(9), 1666–1680. http://doi.org/10.1016/j.respol.2017.08.002

Acknowledgements

This essay represents an early part of a larger project on publishing practices and peer review on which I am working with Anoop Kavirayani, and I am grateful for his input.

