
dc.contributor.author: McMeekin, David
dc.contributor.author: von Konsky, Brian
dc.contributor.author: Robey, Michael
dc.contributor.author: Cooper, David
dc.contributor.editor: Paul Strooper
dc.contributor.editor: David Carrington
dc.date.accessioned: 2017-01-30T14:32:42Z
dc.date.available: 2017-01-30T14:32:42Z
dc.date.created: 2010-02-07T20:02:24Z
dc.date.issued: 2009
dc.identifier.citation: McMeekin, David and von Konsky, Brian and Robey, Michael and Cooper, David. 2009. The significance of participant experience when evaluating software inspection techniques, in Paul Strooper and David Carrington (ed), 20th Australian Software Engineering Conference (ASWEC 2009), Apr 14 2009, pp. 200-209. Gold Coast, Australia: IEEE Computer Society.
dc.identifier.uri: http://hdl.handle.net/20.500.11937/39323
dc.description.abstract:

Software inspections have been used to improve software quality for 30 years. The Checklist Based Reading strategy has traditionally been the most prevalent reading strategy. Increased Object Oriented usage has raised questions about this technique's efficacy, given issues such as delocalisation. This study compared two inspection techniques, Use-Case Reading and Usage-Based Reading, with Checklist Based Reading. Students and industry professionals were recruited to participate in the study. The effectiveness of each reading strategy was analysed, as was the effect of experience on inspection efficacy. The results showed no significant difference between inspection techniques, whether used by student or professional developers, but a significant difference was identified between student and professional developers in applying the different techniques. Qualitative results highlighted the differences in ability between industry professionals and students with respect to what each group considered important when inspecting and writing code. These results highlight the differences between students and industry professionals when applying inspections. Therefore, when selecting participants for empirical software engineering studies, participant experience level must be accounted for within the reporting of results.

dc.publisher: IEEE Computer Society
dc.title: The significance of participant experience when evaluating software inspection techniques
dc.type: Conference Paper
dcterms.source.startPage: 200
dcterms.source.endPage: 209
dcterms.source.title: Proceedings of the 20th Australian software engineering conference (ASWEC 2009)
dcterms.source.series: Proceedings of the 20th Australian software engineering conference (ASWEC 2009)
dcterms.source.isbn: 9780769535999
dcterms.source.conference: 20th Australian Software Engineering Conference (ASWEC 2009)
dcterms.source.conference-start-date: Apr 14 2009
dcterms.source.conferencelocation: Gold Coast, Australia
dcterms.source.place: Australia
curtin.note:

Copyright © 2009 IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

curtin.department: Centre for Extended Enterprises and Business Intelligence
curtin.accessStatus: Open access
curtin.faculty: Curtin Business School
curtin.faculty: The Digital Ecosystems and Business Intelligence Institute (DEBII)

