An increasing number of companies use artificial intelligence to evaluate the personality of job candidates during interviews, often without the candidates' knowledge

Artificial intelligence is ubiquitous in corporate recruiting, including applicant tracking system (ATS) software that scans résumés for relevant keywords, bots that identify and connect with passive candidates on social media, and virtual-reality interviewing.

Now, you can add real-time personality screening to that list.

An increasing number of companies conduct these screenings during interviews, often without candidates' knowledge. The software supposedly measures personality traits such as enthusiasm, calmness, anxiety and irritability by mapping a candidate's facial expressions and matching them against a database of personality types.
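Vendors don't publish how this matching actually works, but the description above (map facial expressions, then match against a database of personality types) amounts to a nearest-neighbor lookup. The sketch below is purely illustrative: the trait labels, feature values and the `extract_expression_features` stub are hypothetical stand-ins, not any vendor's real method.

```python
import numpy as np

# Hypothetical database of "personality type" profiles, each a vector of
# facial-expression scores (e.g., smile intensity, brow furrow, gaze
# steadiness). Invented for illustration; real vendors don't disclose theirs.
PROFILE_DB = {
    "enthusiastic": np.array([0.9, 0.1, 0.8]),
    "calm":         np.array([0.5, 0.1, 0.9]),
    "anxious":      np.array([0.3, 0.7, 0.4]),
    "irritable":    np.array([0.2, 0.8, 0.3]),
}

def extract_expression_features(frame_scores):
    """Stand-in for a facial-analysis model. Takes per-frame
    (smile, brow_furrow, gaze_steadiness) scores in [0, 1] and
    averages them over the interview."""
    return np.mean(np.asarray(frame_scores), axis=0)

def match_personality(features):
    """Return the profile label whose vector lies nearest
    (Euclidean distance) to the candidate's averaged features."""
    return min(PROFILE_DB, key=lambda label: np.linalg.norm(features - PROFILE_DB[label]))

# Example: three frames' worth of hypothetical expression scores.
frames = [(0.8, 0.2, 0.7), (0.9, 0.1, 0.8), (0.85, 0.15, 0.75)]
print(match_personality(extract_expression_features(frames)))  # -> "enthusiastic"
```

If anything like this scheme is in use, it makes the accuracy worries discussed below concrete: a tired or unwell candidate shifts the averaged feature vector, which can flip the nearest match from "calm" to "anxious" with no change in underlying personality.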

Anyone who has hired a candidate whose skills and qualifications were impressive but whose collegiality was lacking knows that cultural compatibility can have a significant impact on employee morale and productivity. As a result, the potential of AI to weed out incompatible, or downright unpleasant, people is seductive. And let's be honest: job interviewing is already a deeply flawed process, built on the sometimes random questions interviewers ask (which can sound like pseudo-psychology) and candidates' typically rehearsed responses.

But as with most technological advances, there are ethical and legal implications of AI-driven personality screening to consider. In 2018, U.S. Senators Kamala Harris, Elizabeth Warren and Patty Murray wrote to the Equal Employment Opportunity Commission to register their concerns about the potential for facial-analysis software to foster racial, gender or age bias, citing the American Civil Liberties Union's test in which facial-recognition software incorrectly matched 28 members of Congress with mugshots of people who had been arrested.

Accuracy, then, is one concern with an AI system that claims to identify undesirable personality types. How might the software's conclusions be distorted by a candidate who is having a bad day, who didn't get much sleep the night before, or who is fighting a cold? And what about individuals with a medical condition, such as Bell's palsy, that could skew the software's analysis of their personality?

Many AI-based recruitment tools, some argue, “have emerged as technological innovations, rather than from scientifically-derived [psychometric] methods…as a result, it is not always clear…why they may be expected to predict job candidates’ performance.”

Human-resources professionals usually educate hiring managers about which questions they can't ask during a job interview. Yet AI software can hand employers the same kind of traditionally private information without the candidate's knowledge or consent. That raises ethical concerns, and legal ones as well if the information is used in ways that violate federal laws such as the Americans with Disabilities Act.

As an employer, I would argue that the perceived upside of personality screening in the workplace is overstated. Sure, no one enjoys working with difficult people, but that's where one-on-one coaching from an HR pro can have an impact. I recall a colleague with Asperger's syndrome who came across as antisocial but who made important contributions to our department and forced many of us to exercise our little-used tolerance muscles. There's no doubt that AI personality screening would have eliminated him from consideration.

Artificial intelligence has greatly improved myriad workplace processes, but there's still a strong argument against its use in recruitment: while the tools save time, they remove human intuition from the decision about who is best suited to join our workforce.

Source: wsj.com