On Jan 21, 2020
In 2017, Amazon’s use of AI recruiting famously came to an abrupt halt when engineers discovered the platform was unintentionally discriminating against female candidates. Having learned that most of the resumes Amazon had accepted over a 10-year period belonged to male candidates, the AI inferred that female candidates were less qualified and began downgrading resumes with any mention of women, such as women’s colleges.
Amazon soon corrected that particular bias, but ultimately scrapped the tool anyway.
In an industry largely dominated by men (three-fourths of all technical talent is male, a share that grows even higher at top companies), it isn’t too surprising that the AI would come to associate male candidates with higher qualifications.
Amazon’s reason for recalling the AI — “there was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory” — reveals a pressing problem in traditional recruiting methods. For decades, unconscious bias has plagued even the most well-meaning of recruiters, resulting in capable candidates being unknowingly passed over.
AI is not inherently discriminatory. However, machine learning can absorb biases in the hiring system that we ourselves may never notice. A 2017 analysis by the University of Toronto and Ryerson University found that candidates with “Asian-sounding” names were 28% less likely to score an interview with a company, with recruiters harboring internal biases that such applicants were more likely to have “heavy accents and language problems”. AI trained on such decisions may develop similar biases, associating candidates with Anglo-centric names with articulation and better communication skills.
So what are modern AI sourcing platforms doing to prevent this from happening again?
Some companies are having AI start from a so-called “blank slate”, where factors such as race, ZIP code, and gender are excluded from the hiring process. Instead, only ideal candidate qualities are listed and processed by the AI. Financial software companies have already led the way here; banks are legally required to prove that their software does not discriminate based on gender, race, or ethnicity.
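As a minimal sketch of the “blank slate” idea — with invented field names and data, not any vendor’s actual schema — protected attributes can be stripped from a candidate record before it ever reaches a screening model:

```python
# Hypothetical illustration: remove protected attributes before scoring.
# Field names below are invented for this sketch.
PROTECTED_FIELDS = {"name", "gender", "race", "zip_code", "date_of_birth"}

def blank_slate(candidate: dict) -> dict:
    """Return a copy of the candidate record with protected fields removed,
    so only job-relevant qualities reach the screening model."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "A. Example",
    "gender": "F",
    "zip_code": "94103",
    "years_experience": 6,
    "skills": ["python", "sql"],
}

print(blank_slate(candidate))
# Only years_experience and skills survive the filter.
```

Filtering like this only removes explicitly protected fields; features merely correlated with them can still slip through, which is the limitation the research below points to.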
However, some research has suggested that this is not enough. A team from Amazon found that despite designing AI to focus solely on job functions, the technology still “favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as ‘executed’ and ‘captured’”.
In order to create a more diverse workplace environment, a variety of solutions have been proposed and tested. While the technology for bias-free sourcing may not be perfected yet, there are other ways to use machine learning to reduce bias. Some firms have begun enlisting AI algorithms that help remove bias from job postings and other recruitment media, which in turn encourages more marginalized candidates to apply for jobs they might otherwise have felt alienated from. Other platforms focus on the potential of applicants rather than their pedigree, ensuring that candidates who show high aptitude but did not attend an elite university can still compete with more “qualified” candidates.
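One way such job-posting tools can work — sketched here with a tiny invented word list rather than any product’s actual lexicon — is to flag wording that research has associated with gendered appeal, so recruiters can reword it:

```python
import re

# Invented mini-lexicon for illustration; real tools use much larger,
# research-derived word lists.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "aggressive", "competitive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing"}

def audit_posting(text: str) -> dict:
    """Report coded words found in a job posting."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

posting = "We want a competitive rockstar engineer who thrives under pressure."
print(audit_posting(posting))
# Flags "competitive" and "rockstar" as masculine-coded.
```

The flagged terms can then be replaced with neutral alternatives before the posting goes live.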
The push for “blind recruiting” has resulted in more diverse candidate pools and workplaces than ever. While perfectly unbiased recruiting may still be a ways off, it’s clear that we’ve come a long way nonetheless.