The case for using the repeatability coefficient when calculating test-retest reliability
Citation: Vaz, Sharmila; Falkmer, Torbjorn; Passmore, Anne Elizabeth; Parsons, Richard; Andreou, Pantelis. 2013. "The case for using the repeatability coefficient when calculating test-retest reliability." PLoS ONE 8(9): e73990.
The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses in order to interpret results and make clinical decisions. This paper makes a case for clinicians to consider measurement error (ME) indices such as the Coefficient of Repeatability (CR) or the Smallest Real Difference (SRD) over relative reliability coefficients such as Pearson's r and the Intraclass Correlation Coefficient (ICC) when selecting tools to measure change and when inferring that an observed change is true. The authors present statistical methods that are part of the current approach to evaluating the test–retest reliability of assessment tools and outcome measures. Selected examples from a previous test–retest study are used to elucidate the added advantage, in clinical decision making, of knowing the ME of an assessment tool. The CR is computed in the same units as the assessment tool and sets the boundary of the minimal detectable true change that the tool can measure.
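As an illustration of the idea above, the CR can be computed from paired test–retest scores using the Bland–Altman formulation, CR = 1.96 × SD of the within-subject differences; an observed change smaller than the CR cannot be distinguished from measurement error. The sketch below assumes this formulation, and the scores shown are hypothetical.

```python
import math

def repeatability_coefficient(test, retest):
    """Bland-Altman Coefficient of Repeatability: 1.96 x the sample SD
    of the paired test-retest differences, in the tool's own units."""
    if len(test) != len(retest) or len(test) < 2:
        raise ValueError("need paired scores from at least two subjects")
    diffs = [a - b for a, b in zip(test, retest)]
    mean_d = sum(diffs) / len(diffs)
    # sample standard deviation of the differences (n - 1 denominator)
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd_d

# hypothetical test-retest scores for six subjects
test = [10.0, 12.0, 9.0, 14.0, 11.0, 13.0]
retest = [11.0, 11.0, 10.0, 13.0, 12.0, 12.0]
cr = repeatability_coefficient(test, retest)
```

With these hypothetical scores, a follow-up change for an individual would need to exceed `cr` score units before it could be interpreted as true change rather than measurement noise.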
Publisher: Public Library of Science
This article is published under the Open Access publishing model and is distributed under the terms of the Creative Commons Attribution License.