The Independent Review of the Role of Metrics in Research Assessment and Management was set up in April 2014 to investigate the current and potential future roles that quantitative indicators can play in the assessment and management of research. Its report, ‘The Metric Tide’, was published in July 2015 and is available below.
This publication was supported by: Research Information Network, Society for Endocrinology, Vitae, Institute for Physics and Engineering in Medicine, The Association for Clinical Biochemistry and Laboratory Medicine, Elsevier, Sage, PRE (Peer Review Evaluation), Medical Research Council, The Physiological Society, Wiley, Society for General Microbiology, BioMed Central, PLOS, Taylor and Francis and Society for Applied Microbiology.
Reprinted in 2014 with support from BioMed Central, Elsevier, PLOS, Taylor and Francis, Wiley and PRE (Peer Review Evaluation).
A guide to peer review written for early career researchers.
This is a nuts and bolts guide to peer review for early career researchers written by members of the VoYS network. Using a collection of concerns raised by their peers, the VoYS writing team set off to interview scientists, journal editors, grant bodies’ representatives, patient group workers and journalists in the UK and around the world to find out how peer review works, the challenges for peer review and how to get involved.
This is a blog item, published by Wellcome, about a new initiative that allows researchers to cite preprints in their grant applications. The Central Service for Preprints allows researchers to deposit their preprints – complete and public drafts of scientific documents, not yet certified by peer review – to:
This is a blog item, published by WIRED, about Niko Kriegeskorte, a cognitive neuroscientist at the Medical Research Council in the UK who, since December 2015, has performed all of his peer review openly. That means he publishes his reviews on his personal blog as he finishes them – sharing them on Twitter and Facebook, too – before a paper is even accepted.
This is part two of a series of posts describing OpenAIRE’s work to find a community-endorsed definition of “open peer review” (OPR), its features and implementations. As described in Part One, OpenAIRE collected 122 definitions of “open review” or “open peer review” from the scientific literature. Iterative analysis of these definitions resulted in the identification of seven distinct OPR traits at work in various combinations amongst these definitions:
At present, there is neither a standardized definition of “open peer review” (OPR) nor an agreed schema of its features and implementations, which is highly problematic for discussion of its potential benefits and drawbacks. This new series of blog posts reports on work to resolve these difficulties by analysing the literature for available definitions of “open peer review” and “open review”. In all, 122 definitions have been collected and codified against a range of independent OPR traits, in order to build a coherent typology of the many different adaptations of traditional peer review that have come to be signified by the term OPR, and hence provide a unified definition.
The use of journal hierarchy for assessing the reputation of research works and their authors has contributed to a competitive environment that is having a detrimental effect on scientific reliability. Open access repositories administered by Universities or research organizations are a valuable infrastructure that could support the transition to a more collaborative and efficient scholarly evaluation and communication system. Open Scholar has coordinated a consortium of six partners to develop the first Open Peer Review Module (OPRM) for institutional repositories. The module integrates an overlay peer review service, coupled with a transparent reputation system, on top of institutional repositories. It is provided freely as open source software.
This initiative shares a vision of an independent, democratic academic evaluation model free from the conflicts of interest imposed by the agendas of journals and their commercial publishers. It aims to promote complementary strategies that together provide the ingredients needed to attain this goal, and to encourage scholars and interested parties to experiment with new modes that can assist the transition to free, independent, open and transparent peer review. In addition, it holds that any platform developed to implement free and open peer review should be independent of intermediaries. To mitigate potential conflicts of interest, such platforms should ideally be managed by an open community, be open source and operate on a non-profit basis.
This is a collection of blog posts, published on nature.com, dedicated to peer review.
Richard D. Morey, Christopher D. Chambers, Peter J. Etchells, Christine R. Harris, Rink Hoekstra, Daniel Lakens, Stephan Lewandowsky, Candice Coker Morey, Daniel P. Newman, Felix D. Schönbrodt, Wolf Vanpaemel, Eric-Jan Wagenmakers, Rolf A. Zwaan
Openness is one of the central values of science. Open scientific practices such as sharing data, materials and analysis scripts alongside published articles have many benefits, including easier replication and extension studies, increased availability of data for theory-building and meta-analysis, and increased possibility of review and collaboration even after a paper has been published. Although modern information technology makes sharing easier than ever before, uptake of open practices has been slow. We suggest this might be in part due to a social dilemma arising from misaligned incentives, and propose a specific, concrete mechanism – reviewers withholding comprehensive review – to achieve the goal of creating the expectation of open practices as a matter of scientific principle.