
dc.contributor.author: Reubenson, Alan
dc.contributor.author: Schneph, Tanis
dc.contributor.author: Waller, Rob
dc.contributor.author: Edmondston, Stephen
dc.date.accessioned: 2017-01-30T12:28:06Z
dc.date.available: 2017-01-30T12:28:06Z
dc.date.created: 2012-11-27T20:00:22Z
dc.date.issued: 2012
dc.identifier.citation: Reubenson, Alan and Schneph, Tanis and Waller, Robert and Edmondston, Stephen. 2012. Inter-examiner agreement in clinical evaluation. The Clinical Teacher. 9 (2): pp. 119-122.
dc.identifier.uri: http://hdl.handle.net/20.500.11937/21897
dc.identifier.doi: 10.1111/j.1743-498X.2011.00509.x
dc.description.abstract:

Background: The reliability of assessment is an important issue in the evaluation of competence in medical and allied health practice, particularly when assessments are conducted by multiple examiners. The purpose of this study was to examine the agreement between multiple examiners in the assessment of a postgraduate physiotherapy student using a specifically designed performance evaluation system.

Methods: Seven examiners simultaneously watched a recording of a postgraduate student's examination and treatment of one patient. The Postgraduate Physiotherapy Performance Assessment (PPPA) form was used to guide the assessment of performance in key areas of patient examination and management. Each examiner independently recorded a grade for each of five performance categories, and these scores were used to guide the global performance grade and mark.

Results: Five examiners agreed on the global performance grade and four of the performance categories. The level of pass grade awarded was more variable, with scores in the performance categories spanning two grades, and in one case, three grades. The two examiners who were not in agreement with the majority consistently awarded higher grades across most performance categories.

Discussion: This preliminary study has demonstrated majority agreement in global performance between multiple examiners when physiotherapy clinical practice is assessed against specific performance standards. Not all examiners awarded global grades consistent with the majority, and there was greater variability between examiners when grading performance in specific aspects of practice. These findings highlight the importance of examiner training and review sessions to improve inter-examiner agreement in assessments of clinical performance that require multiple examiners.

dc.publisher: Blackwell Publishing
dc.title: Inter-examiner agreement in clinical evaluation
dc.type: Journal Article
dcterms.source.volume: 9
dcterms.source.startPage: 119
dcterms.source.endPage: 122
dcterms.source.issn: 1743-498X
dcterms.source.title: The Clinical Teacher
curtin.department:
curtin.accessStatus: Fulltext not available