How hiring software can be biased (and what to do about it)
We’ve all seen the role that artificial intelligence (AI) can play in our daily lives. It often seems innocuous. You go to type a question into Google and it autocompletes. Sometimes Google gets it right and sometimes it doesn’t. If Google gets it wrong, you simply have to finish typing out the rest of your question. Maybe it takes a few extra seconds of your day, but it’s no big deal.
When it comes to hiring software, though, the consequences of AI getting it wrong are much more profound. It can be the difference between a candidate’s resume landing on a recruiter’s desk, leading to an interview and possibly a job offer – or ending up in the rejection pile before a human has ever seen it.
At Open Forum 2021, several talent professionals and thought leaders gathered to explore the role of bias and ethics in hiring software. Greenhouse President and Co-founder Jon Stross moderated a panel discussion with John Sumser, Founder and Principal Analyst at HRExaminer, Mona Khalil, Senior Data Scientist at Greenhouse, and Riham Satti, CEO and Co-founder at MeVitae. Find the highlights from their conversation below or watch the on-demand recording here.
Why bias exists in technology
It’s tempting to think that technology is neutral and can help us overcome bias. But the truth is that AI is just as susceptible to bias as the humans who build it. Mona explains: “When we talk about AI, we’re talking about models that are built based on pre-existing data. An algorithm that makes a decision for you doesn’t come out of nowhere – it’s trained based on a set of previously existing data that contains records of human decisions.”
In the case of a model that tells you whether a candidate is qualified, it will look at past hiring decisions. Since those decisions can be deeply biased, an algorithm trained on them will recreate and potentially amplify those biases.
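To make that concrete, here’s a minimal sketch of how this happens. Everything in it is invented for illustration – synthetic candidates, a made-up “proxy” feature and arbitrary numbers, not any real hiring dataset – but it shows how a model trained on biased decisions can reproduce the bias even when the demographic column itself is removed from the inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic candidates: skill is distributed identically across two groups.
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)

# A proxy feature correlated with group membership (think zip code or
# school name) that sneaks demographic information into the inputs.
proxy = group + rng.normal(0.0, 0.3, n)

# Biased historical decisions: group B was held to a higher bar.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train only on skill and the proxy -- the group column is "removed".
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still recommends group B far less often: it has learned the
# historical bias through the proxy feature.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {'AB'[g]}: predicted hire rate = {preds[group == g].mean():.1%}")
```

Dropping the demographic column isn’t enough, because correlated features carry the same signal – which is why the panelists focus on auditing outcomes rather than just inputs.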
So why do we build these biases into our software, even unintentionally? Riham reminds us that biases are mental shortcuts we all take. “Our brain has to process different information and stimuli, so it puts things into buckets to be able to process more quickly, and this is where biases come in.” There are over 140 cognitive biases – like confirmation bias and the halo effect – that shape our decision-making. “We can never remove biases from the human brain, but what we can do is try to mitigate or postpone them as much as possible,” says Riham.
What you need to know about hiring software AI
The sophistication of AI in hiring software today is like that of early cars, explains John. When cars were first designed, there were no seatbelts or airbags; over time, we learned how to make them safer. With AI in HR technology, we’re still learning how to ensure that all candidates are considered fairly and that the people we select are right for our organizations. “The technology is too primitive to allow any decision about a human being to be made by the technology yet,” says John. That’s why he recommends treating the machine’s opinion as one input rather than basing your entire decision on it.
That doesn’t mean you can’t use AI – it just means spending time to understand how it works. “You have to audit your technology,” says Riham. “You want to understand what kind of data you’re using when a machine makes a decision and why it’s making the decisions it is.”
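As one hypothetical illustration of what “understanding why it’s making the decisions it is” can look like, you can ask a simple model which features carry the most weight. The feature names and data below are invented, and real vendor models will be far more complex, but the question is the same: what is the decision actually based on?

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["years_experience", "skills_match", "school_rank"]

# Invented screening data: 500 candidates, 3 features, pass/fail labels.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, 1.5, 0.8]) + rng.normal(0.0, 1.0, 500)) > 0

model = LogisticRegression().fit(X, y)

# Inspect which features drive the model's decisions.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: weight = {weight:+.2f}")
# A heavy weight on something like school_rank is a cue to ask whether
# that feature is a proxy for demographics.
```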
Tips for working with AI software vendors
“There are a number of tools that allow you to audit your models for fairness among different groups,” says Mona. They add that it’s becoming easier for vendors to conduct these fairness audits and share the results. Here are a few of the questions you might look into or ask your vendor, with a simple audit sketch after the list:
Are you regularly auditing your data across different groups (especially vulnerable groups, such as gender, racial and ethnic minorities, people whose first language is different and people applying from different parts of the world)?
What data did you use to build your model and train it to begin with?
Do candidates from different demographic groups have similar outcomes with the model?
Are there groups that become disadvantaged after a matching algorithm is run on them?
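Open-source libraries such as Fairlearn and Aequitas support exactly these kinds of audits. As a rough sketch of the “similar outcomes” question above, here’s a toy comparison of per-group selection rates against the four-fifths rule, a common first-pass check for adverse impact. The data is invented for illustration:

```python
import pandas as pd

# Invented model outputs: each candidate's group and whether the
# algorithm advanced them to the next stage.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "advanced": [ 1,   1,   0,   1,   0,   1,   0,   0,   1 ],
})

# Selection rate per group.
rates = results.groupby("group")["advanced"].mean()
print(rates)  # A: 0.75, B: 0.40

# Disparate impact ratio: lowest group rate over highest group rate.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold -- flag for deeper review.")
```

A real audit would repeat this comparison for every group named in the first question above and at every stage of the funnel, not just a single decision point.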
What’s next?
Looking ahead, legislation may also shape how companies can use AI. The European Union is considering GDPR-like policies that could allow candidates to ask what information about them is being processed and how it’s being used. These policies may also give candidates the ability to contest hiring decisions and submit additional information to be reconsidered.
The New York City Council is also considering a bill that would require companies to disclose all the algorithms and automated systems used to evaluate a candidate in the hiring process.
Whether or not these pieces of legislation pass, we anticipate a future that requires much more clarity and transparency around how hiring decisions are made.
The conversations at Open Forum give you the opportunity to learn from other business leaders and talent pros who have built diverse, high-performing teams. Interested in learning more from the impactful sessions at Building Belonging? Watch them here.