The top four definitions to help you tackle the most common misconceptions in scientific publishing.
The term ‘altmetrics’ stands for ‘alternative metrics’ for scholarly output. It was coined in 2010 by Jason Priem in a tweet, in which he argued that he preferred it over other terms because it implies ‘a diversity of measures’. Shortly afterwards, Priem and his colleagues published a manifesto that shaped a lasting understanding of altmetrics in the community: “That dog-eared (but uncited) article that used to live in a shelf now lives in Mendeley, CiteULike, or Zotero – where we can see and count it. That hallway conversation about a recent finding has moved to blogs and social networks. This diverse group of activities forms a composite trace of impact far richer than any available before.”
Since then, altmetrics have become a rapidly growing topic in bibliometrics, informetrics, and scientometrics. Alternative metrics now encompass several initiatives to find metrics or indicators for scholarly output beyond citations. This trend is reinforced by digital platforms that provide data about downloads, views, or shares of output that contains or links to scientific information. As novel services arise to organize and communicate scientific research, the amount of information available about research online has increased massively. Currently, there is some discussion as to whether the ‘alt’ part of altmetrics is still useful; some scholars therefore suggest using the term ‘complementary metrics’ instead. Nevertheless, altmetrics can be considered the organizing concept for scholarly impact assessment in the realm of open science.
From the perspective of open science, altmetrics are strongly associated with the impact assessment of scientific research, but the study of altmetrics can contribute to several other aspects as well. Within OpenUP, we aim to provide an overview of how altmetrics have been used and described, in order to inform scholars and other stakeholders about how and when to use certain channels, and how to find relevant categories against which their work might be evaluated. In particular, we explore how altmetrics are connected to channels of dissemination beyond publications, such as wikis, online sites for sharing code, or YouTube channels. These channels can reach specific audiences, but it is still unclear how relevant they are for the different scholarly communities. Through bibliometric analysis, expert interviews, and validation exercises with stakeholder groups, we have developed a taxonomy of relevant categories by which various channels of dissemination can be assessed.
Report of the European Commission Expert Group on Altmetrics. In this report, the Expert Group on Altmetrics outlines how to advance next-generation metrics in the context of Open Science and offers advice corresponding to the following policy lines of the Open Science Agenda: fostering Open Science, removing barriers to Open Science, developing research infrastructures, and embedding Open Science in society.
This series of posts is authored by team members at Altmetric, a data science company that provides attention data to authors, publishers, institutions, and funders.
The Independent Review of the Role of Metrics in Research Assessment and Management was set up in April 2014 to investigate the current and potential future roles that quantitative indicators can play in the assessment and management of research. Its report, ‘The Metric Tide’, was published in July 2015 and is available below.
The hero of Leo Szilard's 1948 story “The Mark Gable Foundation”, asked by a wealthy entrepreneur who believes that science has progressed too quickly what he should do to retard this progress, answers: “You could set up a foundation with an annual endowment of thirty million dollars.”
The San Francisco Declaration on Research Assessment (DORA) points out that using the Journal Impact Factor as a proxy measure for the value or quality of specific research and individual scientists leads to biased research assessment. How can we resist misusing metrics?