The dangers of using AI in hiring

Patrick Sturdivant, a software systems engineer who worked at USAA for 34 years and is blind, is generally a fan of artificial intelligence software. 

“I love AI when it's used properly and thoughtfully to include everyone,” he said. “AI can be a great equalizer for the disabled, because the computer can do so much that I can't do.”

But he also worries about the dangers of AI inadvertently discriminating against people with disabilities during recruitment and hiring.

“I do get scared that too much AI is used to filter through resumes and a lot of good people are excluded,” said Sturdivant, who is now principal strategy consultant at Deque Systems. “What scares me even more is the use of AI to screen video interviews to see who's going to make the next cut. There is too much potential there for developers to forget to include people of all backgrounds, all abilities, all races, all colors, all genders. And if they don't do it right, and if they don't test it correctly and perfect it, they're going to hurt some people.” 

Most large employers, including banks, use AI throughout the hiring process to filter through thousands of applications to get to the smaller number of truly qualified candidates. This is sometimes called “predictive hiring.” Across the industry, 51% of banks said they use AI-powered analytics in decision making of all kinds, according to a survey conducted by Arizent that was published earlier this week.

“More and more organizations are using different algorithms to try and create better matches of prospective employees to jobs in their organization,” said Haig Nalbantian, senior partner at Mercer and co-lead of Mercer's Workforce Sciences Institute. “There's been a big uptick.”

This use of AI could potentially reduce or eliminate nepotism and bias from hiring decisions. But it could also perpetuate existing biases or introduce new forms of unfairness, including through gamification, automated resume screening, automated analysis of video interviews and AI-generated pre-employment tests.

Gamification

One AI-based technology Nalbantian has been seeing banks and other large companies use more in hiring is gamification software from companies like Pymetrics and Knack. These use games to measure potential employees’ “soft skills” and match them with job openings.  

In some cases, the games simulate actual work experience. 

“It might involve some problem solving, and the employer through this mechanism will be able to see, how does this individual tackle a problem? What does he or she look at first? Are they attentive to things happening on the screen that may not be central to what they're focused on? In what order do they address particular issues that arise? How do they react to surprises?” Nalbantian said. “They're trying to simulate the work experience to gauge how people actually function when they're forced to make choices in an environment that's gamified.”

The providers of these programs argue that their technology can ensure a company is blind to race and gender.

“In that way, it can offset some of the overt bias that may show up in traditional hiring methods, where I look at a resume and the name, the addresses and the background are all there,” Nalbantian said. Gamification software vendors say they can remove anything that signals the person's race or gender, and therefore remove opportunities for bias. 

Nalbantian said there is some truth to this. Another potential advantage of such software is that it may help companies consider candidates that have the right skills but have not held a particular type of job before. 

“When labor markets are tight and it's hard to keep good talent, being able to expand the potential labor pool to candidates that have adjacent skill sets and experience can be a very positive thing to have access to more people than you otherwise would,” he said. 

But, Nalbantian said, the software’s reliance on computer gaming skills could be problematic.

“You can imagine that less privileged people and candidates who did not grow up playing computer games and did not have computers in their homes will be at a systematic disadvantage,” Nalbantian said. “You end up in effect self-selecting certain types of people with certain backgrounds that may relate to race and gender and that may create disparities.” 

AI-generated pre-employment tests

Most standard pre-employment tests used to be taken on paper, and all candidates took the same test. When such tests were first put online, everyone still got the same list of questions.

Now artificial intelligence is being used to modify qualification tests on the fly, according to Ken Willner, partner at Paul Hastings. The AI decides what the next question ought to be based on how the candidate did on the previous questions. 

“Let's say the test is looking at your geometry skills,” he said. “An artificial-intelligence-aided test might give you a first question on geometry, and then based on how you did on that one, give you either a harder question or an easier question.” 

As a result, different applicants get different questions, depending on how well they perform.

“That's a way perhaps of learning more precisely about someone's geometry skills, but it lends to difficulty in validating a test,” Willner added. “Everyone's not taking exactly the same test or at least answering exactly the same questions. That's something psychologists have been wrestling with somewhat, these tests that modify themselves based on the applicant.”
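The adaptive mechanism Willner describes can be sketched in a few lines. This is a hypothetical illustration of simple difficulty adjustment, not any vendor's actual algorithm, and the question content is made up:

```python
# Hypothetical sketch of an adaptive pre-employment test: the next
# question's difficulty depends on how the candidate answered the
# previous one. Questions here are invented for illustration.

QUESTIONS = {
    # difficulty level -> example geometry prompts
    1: ["Area of a 3 x 4 rectangle?"],
    2: ["Area of a triangle with base 6 and height 5?"],
    3: ["Area of a circle with radius 2, to two decimal places?"],
}

def next_difficulty(current: int, was_correct: bool) -> int:
    """Step difficulty up after a correct answer and down after a
    miss, clamped to the available difficulty levels."""
    step = 1 if was_correct else -1
    return max(1, min(max(QUESTIONS), current + step))
```

Real computer-adaptive tests typically estimate a candidate's ability with statistical models such as item response theory rather than a simple up/down step, which is part of why validating them is harder than validating a fixed question list.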

A bank would want to make sure its vendor can show that it's measuring the same skill or the same ability, he said. 

Nalbantian is not bothered by the use of AI to assess proficiency levels, especially where jobs require specific knowledge and skill sets. 

“You may have an apprentice level, a professional level and a master level,” he said. If answers to the first few questions identify someone as being at a master level, then it makes sense to ask questions that will gauge where the person is within the spectrum of mastery of the skillset.

Letting AI screen resumes

To sift through thousands of job applications, large companies typically use applicant tracking systems from companies like Lever or SAP SuccessFactors that can analyze large volumes of resumes and separate the candidates who will get a callback from those who won't.

When AI software relies on data about past hiring decisions to screen current applicants, it can perpetuate existing bias in an organization. For instance, if a company has hired people from Ivy League colleges in the past and AI software picks up on that, it could disadvantage people who for socioeconomic reasons were not able to attend an elite college. Likewise, the employment data of a company that has hired mostly white men in the past will likely lack signals relevant to successful Black and female candidates.

“Hypothetically speaking, you might find a word like ‘African’ that's being either correlated or negatively correlated with selection, and you sure don't want to have a word that is directly related to a protected characteristic as one of the criteria that is being used in making selections,” Willner said. “If there's bias in the criteria that you're using, then there could be bias in the results. But that's something that companies can and should address to the greatest extent possible by looking at what is being correlated and eliminating anything that shows any potential for bias.” 

For instance, a company may find the word “baseball” correlates to being good at teamwork, but it may not have enough women in its sample for the word “softball” to come up.

“If baseball is going to indicate someone's on the baseball team and therefore is good at teamwork, well, softball would too,” Willner said. “You want to look for things like that and you want to know what's in there and make sure this is not discriminatory and that it is relevant to the job.”
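The kind of audit Willner suggests, checking how each resume keyword correlates with past selection outcomes, could be sketched as follows. This is a hypothetical illustration with made-up data, not any vendor's actual tooling:

```python
# Hypothetical audit sketch: for each keyword, compare the selection
# rate of past resumes containing it against those that don't, so a
# reviewer can flag possible proxies for protected characteristics
# (e.g. "baseball" correlating with selection while "softball" doesn't).

def keyword_selection_rates(resumes, keywords):
    """`resumes` is a list of (text, was_selected) pairs. Returns, per
    keyword, the selection-rate gap between resumes with and without it."""
    def rate(outcomes):
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    report = {}
    for kw in keywords:
        with_kw = [sel for text, sel in resumes if kw in text.lower()]
        without = [sel for text, sel in resumes if kw not in text.lower()]
        report[kw] = rate(with_kw) - rate(without)
    return report
```

A large positive or negative gap for a word like “softball” or “African” would be a signal to investigate whether the screening criteria are acting as a proxy for gender or race.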

The use of AI for video interviews

Most large companies, and some small ones, use video interview technology to screen job candidates. One popular provider, HireVue, is used by many banks, including Bank of America, Goldman Sachs, JPMorgan and Morgan Stanley.

In 2019, the Electronic Privacy Information Center filed a complaint with the Federal Trade Commission alleging that HireVue uses facial recognition technology and proprietary algorithms to assess job candidates’ cognitive ability, psychological traits, emotional intelligence and social aptitudes. HireVue collects tens of thousands of data points from each video interview of a job candidate, including a candidate’s intonation, inflection, and emotions to predict each job candidate’s employability, EPIC said.

EPIC also said HireVue does not give candidates access to their assessment scores or the training data, factors, logic, or techniques used to generate each algorithmic assessment.

HireVue says it discontinued the facial analysis component of its algorithm in March 2020 after internal research showed that advances in natural language processing had increased the predictive power of language analysis.

“Over time, we realized the minimal value provided by the visual analysis didn’t warrant continuing to incorporate it in the assessments or outweigh the potential concerns,” said Lindsey Zuloaga, chief data scientist at HireVue.


Zuloaga also said that although the company’s technology captures videos for later human review, its artificial intelligence only scores what is said by the candidate using natural language processing and it does not use any visual analysis, such as facial expressions, body language, emotions, or background and surroundings.

“We stopped using video inputs such as facial muscle movements in new models early in 2020, and in 2021 we started to phase out speech inputs,” Zuloaga said. Speech inputs include things like variation in tone or pauses.

HireVue transcribes interviews and analyzes the candidates’ responses to questions for possible matches with job descriptions, she said. The company also provides an AI Explainability Statement to corporate customers and to job candidates.
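As a toy illustration of the general idea of matching transcript text against a job description, and emphatically not HireVue's actual method, one could score the overlap of content words:

```python
# Toy illustration only: score a candidate's interview transcript
# against a job description by the fraction of the description's
# content words that also appear in the transcript.

import re

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "for", "with", "i"}

def terms(text: str) -> set:
    """Lowercased content words, with common stopwords removed."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def overlap_score(transcript: str, job_description: str) -> float:
    """Fraction of the job description's content words found in the
    transcript (0.0 to 1.0)."""
    jd = terms(job_description)
    return len(jd & terms(transcript)) / len(jd) if jd else 0.0
```

Production natural language processing systems use far richer representations than word overlap, but even this toy version shows where bias can creep in: whatever text the model rewards becomes a de facto hiring criterion.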

In recent months, HireVue has been sued for collecting facial recognition data from candidates without notice or consent, in violation of an Illinois law, the Biometric Information Privacy Act.

Some states have laws that regulate the use of video for screening job applicants, Willner noted. “There are legal risks in those states that can be addressed by compliance with the specific statutes,” he said.

Letting AI screen video interviews and reject candidates

Video interviews can be reviewed by humans or by AI to weed out unqualified candidates. Either way is lawful most of the time, according to Willner. 

“The rationale behind having AI make the assessment is that it does tend to remove the potentially biased human being from the process,” Willner said. “Then you can take steps to eliminate or reduce the bias that may come into the source material for AI.”

Research shows that using structured interviews and objective ways of assessing the answers in those interviews is an effective way of reducing bias in the process and improving the reliability and validity of the results, Willner said. 

“But the risks of bias being imported into the process from whatever the AI was built on does call for some steps to try and address and mitigate those risks so you can get the best possible result,” he said. 

The bottom line is, companies have to be careful using any of these technologies.

“AI is like anything,” Sturdivant said. “It can be made for good, but if you don't watch what you're doing, you can really mess things up and cause a lot of problems.” 
