Predicting faking in interviews with automated text analysis and personality
dc.contributor.author | Holtrop, Djurre | |
dc.contributor.author | Van Breda, Ward | |
dc.contributor.author | Oostrom, Janneke | |
dc.contributor.author | De Vries, Reinout | |
dc.date.accessioned | 2019-07-03T03:53:46Z | |
dc.date.available | 2019-07-03T03:53:46Z | |
dc.date.issued | 2019 | |
dc.identifier.citation | Holtrop, D. and van Breda, W. and Oostrom, J. and de Vries, R. 2019. Predicting faking in interviews with automated text analysis and personality, in Proceedings of the EAWOP Congress, May 29-Jun 1 2019. Turin: European Association of Work and Organizational Psychology (eawop). | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/75891 | |
dc.description.abstract |
INTRODUCTION/PURPOSE: Some assessment companies are already applying automated text analysis to job interviews. We aimed to investigate whether text-mining software can predict faking in job interviews. To our knowledge, we are the first to examine the predictive validity of text-mining software for detecting faking. DESIGN/METHOD: 140 students from the University of Western Australia were instructed to behave as job applicants. First, participants completed a personality questionnaire. Second, they were given 12 personality-based interview questions to read and prepare. Third, participants were interviewed for approximately 15-20 minutes. Finally, participants were asked to—honestly—indicate to what extent they had verbally (α=.93) and non-verbally (α=.77) faked during the interview. Subsequently, the interview transcripts (M[words]=1,755) were automatically analysed with text-mining software in terms of personality-related words (using a program called Sentimentics) and 10 other hypothesised linguistic markers (using LIWC2015). RESULTS: Overall, the results showed very modest relations between verbal faking and the text-mining programs’ output. More specifically, verbal faking was related to the linguistic categories ‘affect’ (r=.21) and ‘positive emotions’ (r=.21). Altogether, the personality-related words and linguistic markers predicted a small amount of variance in verbal faking (R2=.17). Non-verbal faking was not related to any of the text-mining programs’ output. Finally, self-reported personality was not related to any of the faking behaviours. LIMITATIONS/PRACTICAL IMPLICATIONS: The present study shows that linguistic analysis with text-mining software is unlikely to detect fakers accurately. Interestingly, verbal faking was related only to positive affect markers. ORIGINALITY/VALUE: These findings call into question the use of text-analysis software in job interviews. | |
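The analysis the abstract describes—relating linguistic marker scores to self-reported faking via zero-order correlations and a multiple regression R²—can be sketched as follows. This is a hedged illustration only: all data below are simulated, and the variable names (affect, faking) are assumptions, not the authors' actual dataset or code.

```python
import numpy as np

# Simulated stand-in for the study's design: 140 participants,
# 10 linguistic marker scores per interview transcript.
rng = np.random.default_rng(0)
n = 140  # sample size reported in the abstract
X = rng.normal(size=(n, 10))

# Simulated verbal-faking scores, weakly related to the first marker
# (loosely mirroring the small effects reported, e.g. r = .21 for 'affect').
faking = 0.2 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=n)

# Zero-order correlation between one marker and faking
r_affect = np.corrcoef(X[:, 0], faking)[0, 1]

# Multiple regression of faking on all markers; R^2 is the
# proportion of variance explained (cf. R^2 = .17 in the abstract)
X1 = np.column_stack([np.ones(n), X])        # add intercept
beta, *_ = np.linalg.lstsq(X1, faking, rcond=None)
resid = faking - X1 @ beta
r2 = 1 - resid.var() / faking.var()
print(f"r(affect, faking) = {r_affect:.2f}, R^2 = {r2:.2f}")
```

With an intercept in the model, R² is bounded in [0, 1]; the small simulated effect sizes here echo the modest relations the study reports.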
dc.title | Predicting faking in interviews with automated text analysis and personality | |
dc.type | Conference Paper | |
dcterms.source.conference | EAWOP 2019 | |
dcterms.source.conference-start-date | 29 May 2019 | |
dcterms.source.conferencelocation | Turin | |
dcterms.source.place | Turin | |
dc.date.updated | 2019-07-03T03:53:46Z | |
curtin.department | Future of Work Institute | |
curtin.accessStatus | Fulltext not available | |
curtin.faculty | Faculty of Business and Law | |
curtin.contributor.orcid | Holtrop, Djurre [0000-0003-3824-3385] | |
dcterms.source.conference-end-date | 1 Jun 2019 |