The Risks of Misusing AI in Assessment: Why Reading Faces Is Not the Same as Reading Minds
Posted by Alisa Sukdhoe
As artificial intelligence (AI) becomes more deeply integrated into HR and talent technologies, we’re seeing rapid developments in AI-driven assessment tools. These range from automated scoring of video interviews to real-time facial-expression analysis that claims to infer personality traits or job fit. While the promise of scalability and efficiency is appealing, we must draw a clear ethical line between what is possible and what is permissible—and, more importantly, what is scientifically valid.
What’s the Risk? When AI Goes Too Far
Some AI systems now claim they can infer a candidate’s personality or emotional state by analysing their facial micro-expressions, eye movements, or vocal tone. These assessments often feed into recruitment or talent decisions—despite serious concerns about their accuracy, fairness, and legality.
The EU AI Act—the world’s most comprehensive AI regulation to date—explicitly prohibits several such uses. Article 5 (Chapter II) of the Act classifies as prohibited practices any AI systems that:
“Make use of biometric categorisation systems that categorise natural persons based on sensitive characteristics such as race, political opinions, or sexual orientation.”
— EU AI Act, Article 5(1)(g)
It also bans:
“AI systems that exploit vulnerabilities of a specific group of persons due to their age, disability or socio-economic situation, in a manner that materially distorts the behaviour of a person.”
— EU AI Act, Article 5(1)(b)
In short: inferring personality or employability from facial expressions, accents, or affective states can cross legal and ethical boundaries. These practices risk reinforcing biases, breaching privacy, and leading to discrimination.
Why Facial Expression ≠ Personality Insight
The idea that we can read stable traits from fleeting expressions owes more to lie-detector mythology than to evidence-based psychology. As psychologist Lisa Feldman Barrett has shown, emotions are not universally expressed and interpreted; they are constructed and context-dependent. Facial expressions do not map neatly onto discrete emotions, let alone personality dimensions.
In fact, the American Psychological Association (APA) and many scholars have cautioned against the use of “emotion AI” or affective computing for decision-making purposes in hiring, promotion, or mental health.
“There is currently no scientific consensus that people can accurately infer personality or future job performance from short video clips, facial expressions, or voice tone alone.”
— APA Guidelines for AI in the Workplace, 2023
Where AI Is Valuable in Assessment
AI can play a powerful role in enhancing rather than replacing validated psychological assessment. Here’s where it shines:
- Scalable Delivery: AI can support the delivery of scientifically validated psychometric instruments (e.g., Big Five assessments) in a secure and automated fashion.
- Bias Detection: AI can be used to audit assessments for group-level bias across gender, ethnicity, or socio-economic status.
- Scoring and Pattern Recognition: Natural Language Processing (NLP) can help analyse open-ended responses (e.g., work samples, written reflections) using pre-trained, validated scoring frameworks.
- Adaptive Testing: Algorithms can tailor questions to respondent ability levels, improving precision and reducing test length.
But crucially, all of these should be grounded in robust psychological theory and subject to human oversight.
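To make the adaptive-testing point above concrete, here is a minimal sketch in Python. It is a deliberately simplified heuristic: each item is represented only by a difficulty value, the next item is the unasked one closest to the current ability estimate, and the estimate steps up or down by a fixed amount. A real computerised adaptive test would use item response theory with information-based item selection; the item bank, step size, and respondent model here are all illustrative assumptions.

```python
def next_item(ability, item_bank, asked):
    """Pick the unasked item whose difficulty is closest to the ability estimate."""
    candidates = [d for d in item_bank if d not in asked]
    return min(candidates, key=lambda d: abs(d - ability))

def update_ability(ability, correct, step=0.5):
    """Nudge the estimate up after a correct answer, down after an incorrect one."""
    return ability + step if correct else ability - step

# Simulate a respondent who answers correctly whenever difficulty <= 1.0.
item_bank = [-2.0, -1.0, 0.0, 1.0, 2.0]
ability, asked = 0.0, set()
for _ in range(3):
    item = next_item(ability, item_bank, asked)
    asked.add(item)
    ability = update_ability(ability, correct=(item <= 1.0))
print(ability)  # -> 0.5, oscillating around the respondent's true level
```

Even in this toy version, the core benefit is visible: the test quickly homes in on items near the respondent’s level, so fewer questions are wasted on items that are far too easy or far too hard.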
Ethics First: Principles for AI in People Assessment
To stay compliant with both regulation and professional ethics, organisations deploying AI in assessment should adhere to the following principles:
- Transparency: Candidates must understand what data is being collected and how it’s being used.
- Scientific Validity: All tools should be grounded in peer-reviewed evidence and demonstrate predictive validity.
- Human Oversight: Final decisions should never be made by AI alone.
- Fairness Auditing: Regular bias audits should be conducted, with adjustments made where inequities are found.
- Respect for Dignity: Candidates should never be reduced to facial tics or vocal patterns.
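The fairness-auditing principle above can be made concrete with the “four-fifths rule” commonly used as a first screen for adverse impact in selection. Below is a minimal sketch in Python, assuming simple pass/fail outcomes and purely illustrative group labels and numbers:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, passed) pairs -> pass rate per group."""
    totals, passes = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        passes[group] += int(passed)
    return {g: passes[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group selection rate divided by the highest. Under the
    four-fifths rule, a ratio below 0.8 flags potential adverse impact."""
    return min(rates.values()) / max(rates.values())

# Illustrative data: group A passes 60 of 100, group B passes 40 of 100.
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 40 + [("B", False)] * 60)
rates = selection_rates(outcomes)
print(rates)                        # {'A': 0.6, 'B': 0.4}
print(adverse_impact_ratio(rates))  # ~0.67, below the 0.8 threshold
```

A ratio below 0.8 is a screening heuristic, not proof of discrimination: flagged results warrant deeper statistical and substantive review, which is exactly where human oversight comes back in.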
Final Thought: Let AI Assist, Not Decide
As professionals shaping the future of work, we need to be clear-eyed about both the promise and the perils of AI. Just because we can track someone’s facial muscle movement in a job interview doesn’t mean we should. And it certainly doesn’t mean it reflects their leadership potential, team fit, or future performance.
Let’s use AI to amplify the best of human insight, not replace it with flawed proxies.
Further Reading:
- EU AI Act (2024): Regulation (EU) 2024/1689
- Barrett, L. F. (2017). How Emotions Are Made: The Secret Life of the Brain. Houghton Mifflin Harcourt.
- APA Guidelines for AI in the Workplace (2023): www.apa.org
If you're interested in learning more about how BOLDLY can help your organisation, we invite you to explore our website or contact us here.