Dr. Stewart Desson, Lumina
18th June 2019
Reducing Evaluative Bias in Psychometrics – Implications for Recruitment and Development.
The presentation covered the following themes:
- Type-ing, trait theories, and inclusive psychometrics
- Evaluative Bias
- The Lumina Spark model
- How does Lumina Spark reduce evaluative bias? And what are the implications?
- Personality Dynamics
- How does Lumina Spark assess personality in different contexts?
- How does Lumina Spark embrace paradoxes?
Is Type-ing really dead?
The problem stems from unreliable psychometrics. McCarley and Carskadon (1983) found that 50% of participants in a type-ing personality assessment received a different classification on one or more dimensions when re-tested, and that of those with a borderline classification, 29% moved across the divide, introducing error into the results. Walck (1992) found that one-third of the sample participants believed themselves to be mislabelled by type-ing personality assessments.
Lorr (1991), using cluster analysis, failed to reproduce the MBTI model of four independent scales. This showed that the scales are not independent, contrary to what the MBTI model purports. This result has been corroborated by others.
Despite mounting evidence of the superiority of more evidence-based assessment methods, types remain popular because the concepts are straightforward for busy clients to assimilate, despite the risk of bias and unreliability.
So what is Evaluative Bias, and how does it creep into evaluation?
“When an evaluatively unbalanced set of descriptors such as the Big Five adjectival markers is subjected to a simple structure rotation algorithm, the resulting factors almost invariably end up contrasting positive versus negative descriptors” (Goldberg, 1992)
Quite simply, Evaluative Bias emerges from an over-emphasis on certain traits in assessment, or from certain traits being valued more than others: "you have to be extroverted to survive here". The starting point for removing bias is to acknowledge that, as human beings, we are complex and vary in more than our moods: we can be both extroverted and introverted depending on environment and context. We then need a psychometric which can accommodate both poles, be more inclusive, and address the problems of bias.
Although bias was identified by Francis Galton as far back as 1868, when he observed national traits being used excessively in stereotyping, the first evidence-based research was only carried out by Peabody in 1967. Reducing bias has since assumed an almost religious fervour, focusing on individual and team inclusivity and on improving performance by valuing all aspects of personality.
How does Lumina Spark reduce Evaluative Bias?
Lumina Spark measures both ends of each personality spectrum (e.g. extraversion–introversion), in both their adaptive and maladaptive forms. Diagrammatically this can be represented as a circumplex (the example of agreeableness and openness was used), which can be mapped onto the Periodic Table of Personality (Woods and Anderson, 2016).
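The idea of scoring each pole independently, rather than forcing them onto a single bipolar scale, can be sketched in a few lines of Python. This is an illustrative sketch only: the example items, the 1–5 response scale, and the `pole_score` function are invented for the illustration, not Lumina Spark's actual scoring method.

```python
# Hypothetical sketch: score each pole of a spectrum separately,
# instead of as one forced bipolar scale. All items and numbers invented.

extraverted_items = [4, 5, 3]   # e.g. "I take charge in groups" (1-5 agreement)
introverted_items = [4, 2, 5]   # e.g. "I reflect before speaking" (1-5 agreement)

def pole_score(responses):
    """Mean agreement for one pole, rescaled to 0-100."""
    return round(sum(responses) / len(responses) / 5 * 100)

# Both scores can be high at once: the poles are not forced
# to trade off against each other.
extraversion = pole_score(extraverted_items)   # -> 80
introversion = pole_score(introverted_items)   # -> 73
print(extraversion, introversion)
```

Because each pole gets its own score, a respondent can legitimately come out high on both, which is what lets paradoxical profiles be represented rather than averaged away.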
The effect of evaluatively neutral questions on “social desirability” in Lumina Spark is assessed in Desson (2017). Measurements of opposite polarities were compared between Lumina Spark and the IPIP-NEO for openness, conscientiousness, agreeableness, and extraversion. The percentage difference was much lower overall in Lumina Spark.
The final piece of the evaluation was a comparison with the OPQ-32, following Bartram (2005), which showed that, using similar techniques, the correlations were comparable and within acceptable margins of error.
The implication is that this reduced evaluative bias represents a real breakthrough in measurement technique. Not only is predictive validity maintained, but predictive validity with both ends measured separately is identical (in this case .38, obtained by correlational analysis), while construct validity is improved through a higher-fidelity approach to personality assessment.
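As a rough illustration of what "predictive validity obtained by correlational analysis" means, the sketch below computes a Pearson correlation between trait scores and an outcome measure. The data are invented for the example; the .38 figure reported in the talk comes from Lumina's own analysis, not from this toy calculation.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example data: one trait score per person, plus a
# hypothetical performance rating as the criterion.
trait = [62, 55, 71, 48, 80, 66]
performance = [3.1, 2.8, 3.9, 2.5, 4.2, 3.0]

print(round(pearson_r(trait, performance), 2))
```

Predictive validity is simply this coefficient computed between a scale score and a real-world criterion; the claim in the talk is that the coefficient stays the same whether the spectrum is scored as one bipolar scale or as two separate poles.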
Dr. Desson provided a wealth of evidence-based data to support these outcomes and conclusions.
How does Lumina Spark assess personality in different contexts?
Rather like the atmosphere of Jupiter, personality is dynamic and permanently changing. This means that, for meaningful results, measurement across a range of relevant and variable contexts is necessary.
“Historically, organisational and personality psychologists have ignored within-individual variation in personality across situations or have treated it as measurement error.” Judge, Simon, Hurst, & Kelley (2014)
Lumina Spark draws on three pillars to make comparative measurements: Trait Activation Theory (the behavioural expression of traits is situational), situation strength (potential behaviours drawn out by implicit or explicit cues), and personality strength (the inherent forcefulness of behaviours).
It then analyses qualities in three situational contexts: underlying, everyday, and overextended. It reviews four basic patterns between the three personas: key qualities, low claimed qualities, conscious efforts/amplifications, and hidden treasures/suppressions. The results can range from "a natural trait, but it can be overplayed under pressure" to "you tune this quality up but rarely overplay it", with relevant numerical scores.
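The four patterns can be pictured as simple comparisons between the three persona scores. The sketch below is a hypothetical classifier: the 0–100 scale, the threshold values (`high`, `low`, `gap`), and the labels' exact boundaries are invented assumptions for illustration, not Lumina Spark's actual rules.

```python
def classify(underlying, everyday, overextended, high=60, low=40, gap=15):
    """Roughly classify one quality from its three persona scores (0-100).
    All thresholds are invented for illustration."""
    if everyday - underlying >= gap:
        pattern = "conscious effort / amplification"   # tuned up day to day
    elif underlying - everyday >= gap:
        pattern = "hidden treasure / suppression"      # natural but dialled down
    elif underlying >= high and everyday >= high:
        pattern = "key quality"                        # natural and used daily
    elif underlying < low and everyday < low:
        pattern = "low claimed quality"
    else:
        pattern = "mid-range"
    if overextended >= high + 20:                      # very high under pressure
        pattern += " (may be overplayed under pressure)"
    return pattern

# e.g. a natural key quality that spikes in the overextended persona
print(classify(underlying=72, everyday=75, overextended=90))
```

Comparing the same quality across personas in this way is what turns three static scores into a statement like "a natural trait, but it can be overplayed under pressure".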
How does Lumina Spark embrace paradoxes?
This is achieved by illustrating Personality Dynamics between opposing qualities. Perhaps the best example is a comparison of situation strength and personality strength for a given quality, represented on a continuum showing the mean point.
In summary:
- Type-ing (or simplistic measurement) is dead
- Inclusive psychometrics is long overdue
- Lumina Spark is a personality tool which reduces evaluative bias, and it does this in a transparent way by measuring both ends and valuing them equally
- Lumina Spark closes the gap in perceived value between opposite qualities
- Lumina Spark allows for robust predictions while valuing all aspects of personality
- The 3 Personas allow for exploration of personality dynamics across different contexts
- By measuring both ends, we can explore the paradoxes within ourselves
The Big Five is the basis of most research and development in assessment. To date, however, tools have for the most part looked at each element in isolation. Current and future development concerns the micro elements: not just scores on individual elements, but how those scores are influenced by other factors and by external environments, and how this information can be used to predict individual and team performance, among other outcomes.