Expert Failure: Re-evaluating Research Assessment
Abstract
EDITORIAL
© 2013 Eisen et al.
Funding organisations, scientists, and the general public need robust and reliable ways to evaluate the output of scientific research. In this issue of PLOS Biology, Adam Eyre-Walker and Nina Stoletzki analyse the subjective assessment and citations of more than 6,000 published papers [1]. They show that expert assessors are biased by the impact factor (IF) of the journal in which the paper has been published and cannot consistently and independently judge the “merit” of a paper or predict its future impact, as measured by citations. They also show that citations themselves are not a reliable way to assess merit as they are inherently highly stochastic. In a final twist, the authors argue that the IF is probably the least-bad metric amongst the small set that they analyse, concluding that it is the best surrogate of the merit of individual papers currently available.
Related items
Showing items related by title, author, creator and subject.
-
Rigoli, Daniela (2012) Over the past three decades, increasing attention has been paid to the importance of motor competence in relation to other areas of a child’s development, including cognitive functioning, academic achievement, and emotional ...
-
Durand, Robert; Newby, R.; Tant, K.; Trepongkaruna, S. (2013) Purpose – The purpose of this paper is to systematically profile investors’ personality traits to examine if, and how, those traits are associated with phenomena observed in financial markets. In particular, the paper looks ...
-
Oloruntoba, Richard; Banomyong, R. (2018) © 2018, Richard Oloruntoba and Ruth Banomyong. Purpose: This “thought paper” is written by the special issue editors as a part of the five papers accepted and published in response to the special issue call for papers ...