
Towards quantitative, democratic and transparent estimates of author contributions

A team-driven approach to authorship decisions is essential

Authorship position on peer-reviewed publications is the currency of academic science and represents the primary basis for decisions regarding promotion, tenure and funding. It is the underlying reward system that powers individual scientific careers (van Dijk et al. 2014). Given the high stakes of the game, it is no surprise that authorship decisions so often lead to conflict and unfairness.

Despite decades of discussion and the development of strict authorship guidelines, abuse in this area — including coercion tactics, honorary, guest or ghost authorships — persists and may represent the most prevalent and tolerated form of scientific misconduct (Strange 2008).

I have been shocked and disillusioned by the number of stories from friends and colleagues describing authorship injustice and disputes. Apart from souring academic relationships, inappropriate authorship damages the institution of science in two serious ways: (1) it distorts the associated professional benefits, i.e. it dishonestly confers the benefits of authorship on those who did not earn them, or unfairly withholds them from those who did, and (2) it distorts the ethical responsibilities associated with authorship, which represents an endorsement of the quality and integrity of the authored research (Strange 2008). It is critical that we acknowledge open authorship as a fundamental principle of open science and that we rethink how author contributions are acknowledged and assessed. As I explain below, letting all authors have their say in the matter is a giant leap in the right direction.

Up until now, the response from open science advocates to authorship issues has been an unenforceable push towards greater transparency and calls for more detailed information describing each author’s role. Some journals now include an “Author contributions” section, but these disclosures tend to be vague, strictly qualitative and difficult to verify.

To address the chronic lack of authorship data, a quantitative system (QUAD) (Verhagen et al. 2003) has been proposed, consisting of the percentage contribution of each author within four categories (conception and design; data collection; data analysis and conclusions; manuscript preparation), with optional weighting by the number of authors to obtain an author contribution index (ACI) (Boyer et al. 2017). Despite the attractiveness of such a system to funders and evaluators, adoption by authors and journals has been disappointingly slow since its initial proposal 15 years ago. Going a step further, others argue for the complete removal of author names from the article header in order to shift focus to the contributions themselves.
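To make the weighting concrete, consider a small sketch of the ACI. The formula below is my reading of Boyer et al. (2017), so treat it as an assumption rather than a verbatim reproduction of their index: an author's fractional contribution x is weighted by the number of co-authors as x(N - 1)/(1 - x), which comes out to exactly 1 when all N authors contribute equally.

```python
# Sketch of a percentage-based Author Contribution Index (ACI).
# Formula as I read Boyer et al. 2017 (an assumption, not a verbatim
# reproduction): ACI_i = x_i * (N - 1) / (1 - x_i), where x_i is author
# i's fractional contribution and N is the number of authors.
# ACI_i = 1 when everyone contributes equally; > 1 means above average.

def aci(contributions):
    """Map fractional contributions (summing to 1) to ACI scores."""
    n = len(contributions)
    assert n >= 2, "needs at least two authors"
    assert abs(sum(contributions) - 1.0) < 1e-9, "contributions must sum to 1"
    return [x * (n - 1) / (1 - x) for x in contributions]

print(aci([0.25, 0.25, 0.25, 0.25]))  # equal four-way split -> [1.0, 1.0, 1.0, 1.0]
print(aci([0.5, 0.3, 0.2]))           # the half-share author scores 2.0
```

Note that the ACIs of a paper do not sum to a fixed total; what matters is each author's position relative to 1, the equal-contribution baseline.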

However, QUAD, ACI, and the more verbose author contribution systems all fail to solve the underlying problem: who is responsible for deciding and declaring author contributions?

Even if detailed author contribution statements by the study lead (or corresponding author) were made mandatory, this would be unlikely to have much of an equalising effect on the underlying power structure that gives rise to authorship injustice in the first place. On the other hand, there are obvious and well-studied biases in authors simply declaring their own contributions (Ilakovac et al. 2007).

My proposal is to tackle the problem of authorship using a crowd-based approach in which all authors, as a group, decide on the relative contributions of each author. In short, the approach involves a simple blind internal peer-review of all authors’ self-declared contributions. As part of the publication process, all authors would be asked to do the following: (1) briefly summarise their own contributions and (2) anonymously judge — with a score between 0 and 10 — the value of every author’s stated contributions (including their own) to the overall work within the four QUAD categories described above. Averaging these team-derived scores within each category for all participants should dampen the biases of self-reporting while simultaneously democratising authorship declarations.

These “normalised” contributions will be used to estimate the total contribution of each author to the study, and form the basis of an algorithm to recommend a fair linear author list (if not replace it altogether). Importantly, quantitative estimates of author contributions obviate the need for strict guidelines indicating who should or should not be an author (as the scores will speak for themselves).
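A minimal sketch of how the scoring, averaging and ranking could fit together follows. The data layout, the equal weighting of the four categories and the author names are my own assumptions for illustration, not part of any published specification:

```python
# Hypothetical sketch of the proposed blind internal peer review:
# every author scores every author (themselves included) from 0 to 10
# in each of the four QUAD categories; scores are averaged across
# reviewers, summed over categories (equal weights assumed here),
# and the totals are used to rank the author list.

QUAD = ["conception and design", "data collection",
        "data analysis and conclusions", "manuscript preparation"]

def rank_authors(scores):
    """scores[reviewer][author] is a list of four 0-10 category scores.
    Returns (author, total) pairs sorted by crowd-averaged contribution."""
    authors = {a for per_reviewer in scores.values() for a in per_reviewer}
    totals = {}
    for author in authors:
        # Average each category over all reviewers, then sum the categories.
        per_category = [
            sum(scores[r][author][c] for r in scores) / len(scores)
            for c in range(len(QUAD))
        ]
        totals[author] = sum(per_category)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Three authors reviewing one another (all scores invented for illustration):
scores = {
    "Ada":  {"Ada": [9, 2, 8, 7], "Ben": [3, 9, 4, 2], "Cleo": [5, 5, 6, 8]},
    "Ben":  {"Ada": [8, 3, 9, 6], "Ben": [4, 8, 5, 3], "Cleo": [4, 6, 5, 7]},
    "Cleo": {"Ada": [9, 2, 8, 8], "Ben": [2, 9, 3, 2], "Cleo": [6, 5, 7, 9]},
}
print(rank_authors(scores))  # Ada ranks first, then Cleo, then Ben
```

In this toy panel the crowd consistently places "Ada" ahead despite "Cleo" rating herself generously, which is exactly the self-reporting bias the averaging is meant to absorb.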

There is now ample evidence that aggregating over many individually biased estimates yields high-quality judgements. This collective intelligence, or wisdom of crowds, has produced meaningful results even with fewer than 10 participants (Wagner and Suh 2014). With the rise of collaborative science, author teams of this size are common, and performance is expected to improve further for longer author lists.
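A toy simulation (my own illustration, not data from Wagner and Suh) shows the effect: individually noisy judges, once averaged, land much closer to the true value than a typical single judge does, even with a panel of only nine.

```python
import random

random.seed(1)

TRUE_VALUE = 7.0   # the "real" contribution score the crowd tries to recover
N_JUDGES = 9       # fewer than 10 participants, as in the cited result
TRIALS = 1000      # repeat to average out luck in any single panel

crowd_err_sum = 0.0
indiv_err_sum = 0.0
for _ in range(TRIALS):
    # Each judgement is the truth plus individual bias/noise (sd 1.5 points).
    judgements = [TRUE_VALUE + random.gauss(0, 1.5) for _ in range(N_JUDGES)]
    crowd_err_sum += abs(sum(judgements) / N_JUDGES - TRUE_VALUE)
    indiv_err_sum += sum(abs(j - TRUE_VALUE) for j in judgements) / N_JUDGES

print(f"avg individual error: {indiv_err_sum / TRIALS:.2f}")
print(f"avg crowd error:      {crowd_err_sum / TRIALS:.2f}")
```

Averaging nine independent judgements cuts the expected error by roughly a factor of three (the square root of the panel size), which is why even small author teams are enough for the scheme to work.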

I recognise that reducing the respective contributions of authors on a complex, perhaps multi-year, project to a few “cold” numbers might seem like an impossible task. However, funding agencies and faculty boards faced with the cryptic nature of the traditional linear author list already have little alternative but to do something very similar, i.e. count the number of first- or last-author publications as a measure of scientific achievement.

My proposal is an anonymous internal peer-review of authors’ own statements that takes advantage of the wisdom of crowds to obtain quantitative, democratic and transparent estimates of author contributions. All stakeholders in science stand to benefit from greater openness in authorship. Beyond resolving authorship disputes and easing the task of scholarly evaluation, governing bodies and institutional policy makers will be better armed with the necessary data to foster collaboration between scientists, improve career diversity and funding for under-appreciated roles. Perhaps most importantly, the patronage of taxpayers and private benefactors is more likely to go towards better science and worthy scientists. There is simply too much at stake for decisions about authorship to be left to a single person.

(I am currently developing a web application along these lines at www.authorwise.org)

REFERENCES:

Boyer S, Ikeda T, Lefort M-C, Malumbres-Olarte J, Schmidt JM. 2017. Percentage-based Author Contribution Index: a universal measure of author contribution to scientific articles. Research Integrity and Peer Review 2: 18.

Ilakovac V, Fister K, Marusic M, Marusic A. 2007. Reliability of disclosure forms of authors' contributions. CMAJ 176: 41–46.

Strange K. 2008. Authorship: why not just toss a coin? American Journal of Physiology-Cell Physiology 295: C567–C575.

van Dijk D, Manor O, Carey LB. 2014. Publication metrics and success on the academic job market. Current Biology 24: R516–R517.

Verhagen JV, Wallace KJ, Collins SC, Scott TR. 2003. QUAD system offers fair shares to all authors. Nature 426: 602.

Wagner C, Suh A. 2014. The Wisdom of Crowds: Impact of Collective Size and Expertise Transfer on Collective Performance. pp. 594–603. IEEE.

