The significance of participant experience when evaluating software inspection techniques
dc.contributor.author | McMeekin, David | |
dc.contributor.author | von Konsky, Brian | |
dc.contributor.author | Robey, Michael | |
dc.contributor.author | Cooper, David | |
dc.contributor.editor | Paul Strooper | |
dc.contributor.editor | David Carrington | |
dc.date.accessioned | 2017-01-30T14:32:42Z | |
dc.date.available | 2017-01-30T14:32:42Z | |
dc.date.created | 2010-02-07T20:02:24Z | |
dc.date.issued | 2009 | |
dc.identifier.citation | McMeekin, David and von Konsky, Brian and Robey, Michael and Cooper, David. 2009. The significance of participant experience when evaluating software inspection techniques, in Paul Strooper and David Carrington (eds), 20th Australian Software Engineering Conference (ASWEC 2009), Apr 14 2009, pp. 200-209. Gold Coast, Australia: IEEE Computer Society. | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/39323 | |
dc.description.abstract |
Software inspections have been used to improve software quality for 30 years. The Checklist Based Reading strategy has traditionally been the most prevalent reading strategy. Increased Object-Oriented usage has raised questions regarding this technique's efficacy, given issues such as delocalisation. This study compared two inspection techniques, Use-Case Reading and Usage-Based Reading, with Checklist Based Reading. Students and industry professionals were recruited to participate in the study. The effectiveness of each reading strategy was analysed, as was the effect of participant experience on inspection efficacy. The results showed no significant difference between inspection techniques, whether used by student or professional developers, but a significant difference was identified between student and professional developers in applying the different techniques. Qualitative results highlighted the differences in ability between industry professionals and students with respect to what each group considered important when inspecting and writing code. These results highlight the differences between students and industry professionals when applying inspections. Therefore, when selecting participants for empirical software engineering studies, participant experience level must be accounted for in the reporting of results. | |
dc.publisher | IEEE Computer Society | |
dc.title | The significance of participant experience when evaluating software inspection techniques | |
dc.type | Conference Paper | |
dcterms.source.startPage | 200 | |
dcterms.source.endPage | 209 | |
dcterms.source.title | Proceedings of the 20th Australian software engineering conference (ASWEC 2009) | |
dcterms.source.series | Proceedings of the 20th Australian software engineering conference (ASWEC 2009) | |
dcterms.source.isbn | 9780769535999 | |
dcterms.source.conference | 20th Australian Software Engineering Conference (ASWEC 2009) | |
dcterms.source.conference-start-date | Apr 14 2009 | |
dcterms.source.conferencelocation | Gold Coast, Australia | |
dcterms.source.place | Australia | |
curtin.note |
Copyright © 2009 IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. | |
curtin.department | Centre for Extended Enterprises and Business Intelligence | |
curtin.accessStatus | Open access | |
curtin.faculty | Curtin Business School | |
curtin.faculty | The Digital Ecosystems and Business Intelligence Institute (DEBII) | |