Discriminating droids: what employers should know about artificial intelligence
A growing number of employers are turning to artificial intelligence (AI) to help select the best job candidates. Although AI can make those decisions easier by reducing the work required to find a great employee, commentators are increasingly concerned that it can produce discriminatory or disparate outcomes.
How AI works
Although some AI programs may sound like science fiction, companies are already using them. Here are some examples:
- Some online systems search through social media profiles for desirable characteristics to identify job candidates.
- Others use keyword searches of resumes, or more complex evaluations, to compare and rank the materials candidates submit with their applications (a simplified sketch of keyword screening appears after this list).
- Rather than conducting screening interviews in person, some companies use chatbots for the initial screening contact, or record candidates answering interview questions and rely on AI programs to analyze the video.
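To make the keyword-search approach concrete, here is a minimal, hypothetical sketch of how such a screener might rank resumes. The keywords, weights, and function names are illustrative assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch of a keyword-based resume screener.
# The keyword list, weights, and scoring rule are invented for
# illustration; real systems are far more complex.

KEYWORD_WEIGHTS = {
    "python": 3.0,
    "project management": 2.0,
    "sql": 2.0,
    "leadership": 1.0,
}

def score_resume(text: str) -> float:
    """Score a resume by summing the weights of keywords it contains."""
    text = text.lower()
    return sum(weight for kw, weight in KEYWORD_WEIGHTS.items() if kw in text)

def rank_candidates(resumes: dict[str, str]) -> list[tuple[str, float]]:
    """Rank candidates from highest to lowest keyword score."""
    scored = [(name, score_resume(text)) for name, text in resumes.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    resumes = {
        "candidate_a": "Experienced in Python, SQL, and project management.",
        "candidate_b": "Background in leadership and marketing.",
    }
    for name, score in rank_candidates(resumes):
        print(f"{name}: {score}")
```

Even in this toy version, the design choice matters: whoever picks the keywords and weights decides who rises to the top, which is one place human judgment (and human bias) enters an "automated" process.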
Real-world examples of discrimination in automated systems
It might seem counterintuitive that turning your hiring decisions over to a seemingly neutral, bias-free computer system could lead to discriminatory outcomes, but the programs aren’t perfect. They are developed by humans who may have unconscious biases, and they are typically trained on records of past decisions; any bias embedded in that history is something the artificial intelligence system can “learn” and apply.
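To see how this can happen, consider a minimal, invented simulation: it generates biased historical hiring data and fits the simplest possible "model" (a hire rate conditioned on a single feature) on a proxy attribute correlated with group membership. All names, numbers, and the scoring rule are assumptions for illustration only.

```python
# Hypothetical illustration of how a model can "learn" historical bias.
# The data, feature names, and probabilities are invented for this sketch.

import random

random.seed(0)

def make_history(n=1000):
    """Simulate past hiring decisions that favored group A over group B,
    with a 'proxy' feature (think: a hobby or zip code) that tracks group
    membership 90% of the time."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        proxy = (group == "A") ^ (random.random() < 0.1)
        hired = random.random() < (0.6 if group == "A" else 0.2)  # biased past decisions
        rows.append((group, proxy, hired))
    return rows

history = make_history()

# "Train" a trivial model: the hire rate conditioned on the proxy feature alone.
rate = {
    p: sum(h for g, pr, h in history if pr == p)
       / sum(1 for g, pr, h in history if pr == p)
    for p in (True, False)
}

# The model never sees group membership, yet its scores reproduce the
# historical disparity, because the proxy feature stands in for group.
for group in ("A", "B"):
    scores = [rate[pr] for g, pr, h in history if g == group]
    print(f"group {group}: mean model score = {sum(scores) / len(scores):.2f}")
```

Running this sketch shows group A receiving roughly twice the average score of group B even though the model was never told anyone's group, which is exactly the disparate-outcome risk commentators are raising.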