
dc.contributor.author: Holtrop, Djurre
dc.contributor.author: Van Breda, Ward
dc.contributor.author: Oostrom, Janneke
dc.contributor.author: De Vries, Reinout
dc.date.accessioned: 2019-07-03T03:53:46Z
dc.date.available: 2019-07-03T03:53:46Z
dc.date.issued: 2019
dc.identifier.citation: Holtrop, D. and van Breda, W. and Oostrom, J. and de Vries, R. 2019. Predicting faking in interviews with automated text analysis and personality, in Proceedings of the EAWOP Congress, May 29-Jun 1 2019. Turin: European Association of Work and Organizational Psychology (EAWOP).
dc.identifier.uri: http://hdl.handle.net/20.500.11937/75891
dc.description.abstract:

INTRODUCTION/PURPOSE: Some assessment companies are already applying automated text analysis to job interviews. We aimed to investigate whether text-mining software can predict faking in job interviews. To our knowledge, we are the first to examine the predictive validity of text-mining software for detecting faking.

DESIGN/METHOD: 140 students from the University of Western Australia were instructed to behave as applicants. First, participants completed a personality questionnaire. Second, they were given 12 personality-based interview questions to read and prepare. Third, participants were interviewed for approximately 15-20 minutes. Finally, participants were asked to indicate honestly to what extent they had faked verbally (α=.93) and non-verbally (α=.77) during the interview. The interview transcripts (M[words]=1,755) were then automatically analysed with text-mining software in terms of personality-related words (using a program called Sentimentics) and 10 other hypothesised linguistic markers (using LIWC2015).

RESULTS: Overall, the results showed very modest relations between verbal faking and the text-mining programs' output. More specifically, verbal faking was related to the linguistic categories 'affect' (r=.21) and 'positive emotions' (r=.21). Altogether, the personality-related words and linguistic markers predicted a small amount of variance in verbal faking (R²=.17). Non-verbal faking was not related to any of the text-mining programs' output. Finally, self-reported personality was not related to any of the faking behaviours.

LIMITATIONS/PRACTICAL IMPLICATIONS: The present study shows that linguistic analysis with text-mining software is unlikely to detect fakers accurately. Interestingly, verbal faking was related only to positive affect markers.

ORIGINALITY/VALUE: This calls the use of text-analysis software in job interviews into question.
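The analysis described in the abstract (scale reliabilities for the faking measures, zero-order correlations between linguistic markers and verbal faking, and a multiple regression for the jointly explained variance) could be reproduced along the following lines. This is a minimal Python sketch, not the authors' code; the input file and column names ("verbal_faking", "affect", "posemo", ...) are illustrative assumptions rather than the study's actual variables.

# Hypothetical sketch: relating LIWC-style linguistic markers to
# self-reported verbal faking, one row per interview transcript.
import pandas as pd
import statsmodels.api as sm


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a scale; rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


df = pd.read_csv("interview_features.csv")  # hypothetical per-transcript data

# Reliability of the self-reported verbal faking scale (illustrative item columns)
faking_items = df[["vf_item1", "vf_item2", "vf_item3"]]
print(f"alpha(verbal faking) = {cronbach_alpha(faking_items):.2f}")

# Zero-order correlations between each linguistic marker and verbal faking
markers = ["affect", "posemo", "negemo", "cogproc", "tentat"]  # illustrative LIWC categories
for marker in markers:
    r = df["verbal_faking"].corr(df[marker])
    print(f"r(verbal_faking, {marker}) = {r:.2f}")

# Multiple regression: variance in verbal faking explained by the markers jointly
X = sm.add_constant(df[markers])
fit = sm.OLS(df["verbal_faking"], X).fit()
print(f"R^2 = {fit.rsquared:.2f}")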

dc.title: Predicting faking in interviews with automated text analysis and personality
dc.type: Conference Paper
dcterms.source.conference: EAWOP 2019
dcterms.source.conference-start-date: 29 May 2019
dcterms.source.conferencelocation: Turin
dcterms.source.place: Turin
dc.date.updated: 2019-07-03T03:53:46Z
curtin.department: Future of Work Institute
curtin.accessStatus: Fulltext not available
curtin.faculty: Faculty of Business and Law
curtin.contributor.orcid: Holtrop, Djurre [0000-0003-3824-3385]
dcterms.source.conference-end-date: 1 Jun 2019

