Session overview: In considering “where next for the workplace”, it seems certain that personality assessment for hiring and for personal development will continue to play a role. However, it has never been easier for candidates to distort their answers to a questionnaire when they feel something is at stake; ChatGPT is just one example of the “help” now available when deciding on an answer. As a profession we may not have long left to pretend that our questionnaires will pass muster in this regard. We must at least consider this threat to personality questionnaires, 360 tools and self-report competency measures.
Objective Personality Tests (OPTs), based on the original work of Raymond Cattell in the 1950s, offer a fake-resistant alternative (Ziegler et al., 2007). OPTs are online digital tasks that translate live task behaviour into personality measurement; participants are unaware of what is being measured.
In this session we will report on three studies in which we used OPTs as an objective comparator for true personality when investigating social desirability. The OPT we used was Mosaic Personality Tasks, which has been carefully validated against various personality measures: the NEO questionnaire, biodata, and ratings from others.
In Study 1, 458 people completed both a personality questionnaire (the IPIP-NEO) and the OPTs. First, we identified the socially desirable pole on 20 personality scales, using the Big 5 as a guide, i.e., Conscientious, Emotionally Stable, Extravert, Agreeable, Open. For each of the 20 scales we then subtracted the questionnaire score from the OPT score, creating a total difference score. This difference score correlated positively and significantly with a separate social desirability measure, and also with strong positive endorsement of rarely endorsed NEO questions – strengths that most people don’t claim. It correlated negatively and significantly with strong negative endorsement of rarely endorsed NEO questions – shortcomings that people only occasionally claim.
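As an illustration only, the construction of the total difference score might be sketched as follows. The scale names and score values below are invented for the example, not data from the study, and the sign convention simply follows the description above (OPT score minus questionnaire score, summed across scales).

```python
# Hypothetical sketch of the Study 1 total difference score.
# Scores are invented example values (e.g. Stens), keyed to the
# socially desirable pole of each scale.
opt_scores = {"conscientious": 6.0, "emotionally_stable": 5.5, "extravert": 4.0}
questionnaire_scores = {"conscientious": 8.0, "emotionally_stable": 7.5, "extravert": 5.0}

# For each scale, subtract the questionnaire score from the OPT score,
# then sum across scales to form the total difference score.
total_difference = sum(
    opt_scores[scale] - questionnaire_scores[scale] for scale in opt_scores
)
print(total_difference)  # -5.0
```

In the study itself this score was computed across all 20 scales per participant and then correlated with the social desirability measures.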
In Study 2, we asked informants who knew them well (spouses, family members, friends, or colleagues) to provide personality ratings for 306 of the participants from Study 1. This “peer” rating measure of personality included a social desirability scale. Again using the OPTs as an objective comparator for true personality, we constructed a total difference score, this time between the peer rating and the OPT score on the same 20 personality scales examined in Study 1. This difference score correlated positively and significantly with peer-rated social desirability.
In Study 3 (a small pilot, though we hope to collect more data before the conference), we asked people to complete the OPTs and a questionnaire measure of personality twice: once as themselves, and once as if trying to create a favourable impression as a candidate for a senior management role. As expected, Big 5 scores on the questionnaire shifted in a favourable direction – by more than 2 Stens on average – whereas the OPT scores moved upwards only slightly, typically by less than 0.5 Stens on average. OPTs may therefore offer a significant improvement over personality questionnaires in resisting deliberate socially desirable responding, e.g., in a hiring scenario.
Learning objectives: 1. The session offers an insight into the problems that socially desirable responding can create for practitioners when interpreting personality data for their clients, and into how tools such as ChatGPT have potentially magnified these problems.
2. The session will include a live audience-participation exercise on socially desirable responding.
3. The session outlines a clear solution to these problems via Objective Personality Tests (OPTs): practitioners now have an alternative way of assessing personality.
4. By combining OPTs and questionnaires when assessing personality, practitioners and individuals can gain fresh insight for personal development and potentially greater accuracy of personality measurement when hiring.