In a previous post, I talked about how Amazon shut down their AI-guided recruiting experiment because they could not successfully drive out bias, both in the training data and in the programmers who built it.
Since then, I’ve had several conversations on this topic, attempting to determine whether eliminating bias is possible at all. Even if you developed software that in turn developed supposedly unbiased software, would the bias actually be gone? How many levels deep would we need to go before we could say we had eliminated it?
What if we did eliminate bias, but didn’t believe we had, because the system’s output still looked biased to us? Even with a fully transparent AI picking candidates, would we ever accept the system as fully unbiased?
Maybe, but I think the only way we could get there is to let the system make the decisions and remove the human from the process. We all carry inherent biases that we can never fully eliminate. Wouldn’t it be ironic if the only way to eliminate bias from the system were to exclude humans from the decision-making process entirely?
If the decision on a candidate were made purely through a data-driven approach, would that make it bias-free? A startup that could credibly promise this, using pure AI with no human interaction, would be a boon to the diversity and inclusion groups within enterprises. But would those groups trust the results of the machine, even if it were proven bias-free?
To break this pattern, maybe the entire talent acquisition process needs to be revisited. For example, is a resume the best indicator of a person’s skills? Is a coding quiz the best way to evaluate an excellent coder who is terrible at tests? Is an interview the best way to determine whether a candidate is the right choice for the role? Is the entire process a throwback to the 20th century that needs complete disruption to become more productive? Are there new models of talent acquisition that would provide better results, or are too many of us stuck in the rut of traditional ways of doing things? Is it possible that apprenticeship models, such as those born in the Middle Ages, are something we should revisit?
The reality is that the talent recruiting industry, in using AI to help select candidates based on resumes, is doing little more than patching a hole in the Titanic.
Maybe we need entirely new models of engagement with potential candidates, using AI to scour the internet for non-resume content that demonstrates a candidate’s fit for a team. Instead of using AI as an enhancement to the current system, let us endeavor to use it to disrupt the system thoroughly, and return it to its original vision: finding the absolute best candidate for the job, no matter who they are.
After all, should hiring the perfect candidate for a role depend so firmly on a resume and an interview? Aren’t there more effective ways to find the right fit?