Do "lie scales" measure lying? Evaluating applicant honesty on social desirability scales
dc.contributor.author | Hughes, Angus | |
dc.contributor.author | Holtrop, Djurre | |
dc.contributor.author | Dunlop, Patrick | |
dc.contributor.author | Steedman, Grace | |
dc.contributor.author | Chan, Joan | |
dc.date.accessioned | 2019-10-11T06:20:39Z | |
dc.date.available | 2019-10-11T06:20:39Z | |
dc.date.issued | 2019 | |
dc.identifier.citation | Hughes, A. and Holtrop, D. and Dunlop, P. and Steedman, G. and Chan, J. 2019. Do "lie scales" measure lying? Evaluating applicant honesty on social desirability scales. In: APS 13th Industrial and Organisational Psychology Conference 2019, 11th Jul 2019, Adelaide. | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/76543 | |
dc.description.abstract |
Aim: Between 30% and 50% of job applicants are thought to ‘fake’ on personality measures, representing a challenge to the validity of these assessments in the selection process (Griffith, Chmielowski, & Yoshita, 2007). Social desirability (SD) scales, sometimes known as ‘lie scales’, are frequently embedded in self-report personality measures to identify job applicants responding dishonestly (Goffin & Christiansen, 2003). These scales are scored by tallying the number of positive responses to excessively virtuous and unlikely items (e.g. “There has never been an occasion when I took advantage of someone else”). However, there remains debate in the literature as to how effective SD scales really are at identifying dishonest responding. For example, De Vries et al. (2018) have argued that items in SD scales may not be read literally, but are instead endorsed in good faith to signal authentic virtue. We therefore tested three central hypotheses using a novel think-aloud protocol methodology: A) Are item responses keyed as “dishonest” by SD scales evaluated as less honest? B) Are overall SD scale scores associated with lower honesty in responses to personality scale items? C) Are these effects amplified in high-stakes job application settings? Design: We collected data in two phases. In the first phase, we recruited participants to complete a questionnaire comprising twelve personality items (from the HEXACO-PI-R; Lee & Ashton, 2004) and a 17-item SD scale (SDS-17; Stöber, 2001), 29 items in total. Participants were randomly assigned to either a low-stakes honest responding condition or a high-stakes simulated job application condition (psychology clinic receptionist role). Participants in the high-stakes condition were told the most successful applicant would receive a voucher worth AUD $150. While completing the questionnaire, we applied a method termed a ‘think-aloud’ protocol. 
All participants were instructed to verbalise their reasoning and response processes, and these spoken thoughts were recorded. In the second phase, the recordings were transcribed verbatim and presented to 3 to 5 independent judges per questionnaire participant. The judges were asked to rate each think-aloud participant’s honesty on a five-point scale based on 1) the item, 2) the spoken thoughts, and 3) the chosen response to the item. Method: After removing think-aloud participants who failed the manipulation check in the high-stakes condition, and judges exhibiting careless responding, 46 think-aloud participants (Age M = 19.6, SD = 3.4; 76% female; 24 low-stakes, 22 high-stakes) completed the questionnaire while verbalising their answers, and 174 judges (3-5 per participant, M = 3.82) evaluated their responses. The three central hypotheses were tested using multilevel models in ‘lme4’ for R (Bates, Maechler, Bolker & Walker, 2015). The models specified honesty per item as the dependent variable and condition (high vs low stakes) as a fixed-effect independent variable. Hierarchical random effects were specified to account for variance between judges clustered within participants, along with crossed random effects accounting for variance between items. Significance tests for independent variables were conducted using Satterthwaite approximations for degrees of freedom. Results: A) Think-aloud participants in either condition who gave responses to items keyed as “dishonest” received lower honesty evaluations from judges than those giving “honest” responses, equating to roughly .77 points less on the five-point honesty scale (t = -7.714, p < .001). B) A think-aloud participant’s overall SDS-17 score was not associated with judges’ evaluations of their honesty on responses to the personality items in either condition (β = -.027, SE = .019, t = -1.42, p = .157). 
C) There was no evidence of an effect of stakes on evaluated honesty (p = .132), nor of an interaction between high stakes and ‘keyed’ SD responses (p = .248), or between high stakes and overall SDS-17 score (p = .100). Conclusion: SD scales are commonly included in personality questionnaires for job applicants to screen for deceptive responding. The independent judges in our study, tasked with evaluating individuals’ actual thought processes and rationalisations behind their responses to SD scales, suggest that these scales may not be effective for this purpose. While participants providing keyed “dishonest” responses (following the logic of the SD scale) to SDS items were evaluated as less honest by the judges than participants providing keyed “honest” responses, the overall SD scale score did not predict judges’ evaluations of participants’ honesty on the personality items in either the low-stakes or the high-stakes applicant condition. Our results may be limited by a potentially weak manipulation, which produced few differences between the high-stakes and low-stakes conditions, although previous think-aloud evidence suggests that largely honest responding to personality measures is not uncommon in job application scenarios (Robie, Brown & Beaty, 2007). Practitioners using SD scales should therefore administer and interpret them with a high degree of caution, given these scales’ limited ability to predict dishonest responding. | |
dc.subject | 1503 - Business and Management | |
dc.subject | 1701 - Psychology | |
dc.title | Do "lie scales" measure lying? Evaluating applicant honesty on social desirability scales | |
dc.type | Conference Paper | |
dcterms.source.conference | APS 13th Industrial and Organisational Psychology Conference 2019 | |
dcterms.source.conference-start-date | 11 Jul 2019 | |
dcterms.source.conferencelocation | Adelaide | |
dc.date.updated | 2019-10-11T06:20:39Z | |
curtin.department | Future of Work Institute | |
curtin.accessStatus | Open access | |
curtin.faculty | Faculty of Business and Law | |
curtin.contributor.orcid | Holtrop, Djurre [0000-0003-3824-3385] | |
dcterms.source.conference-end-date | 13 Jul 2019 |
curtin.contributor.scopusauthorid | Holtrop, Djurre [56125886000] |