States and cities limit AI use in employment decision-making
The use of artificial intelligence (AI) in employment decision-making is on the rise, with Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows stating that more than 80% of employers use this technology.
Employers can use software that incorporates algorithmic decision-making at various stages of the hiring process, such as resume scanning, video interviewing, and testing that produces “job fit” scores. AI, in the form of predictive algorithms, is often at the core of this software. Because implicit bias can permeate the hiring process when decisions are made by humans, AI was expected to offer a more neutral alternative.
Federal discrimination concerns
In practice, however, AI has fallen short of eliminating bias. As McAfee & Taft labor and employment attorney Elizabeth Bowersox discussed in her article last May, the EEOC and the U.S. Department of Justice (DOJ) have already issued guidance cautioning employers about the intricacies of using AI in hiring and ensuring compliance with the Americans with Disabilities Act (ADA).
AI’s shortcomings aren’t limited to ADA concerns, however. Bias against applicants based on race, national origin, gender, age, and other protected characteristics — and, in turn, the potential for discrimination — can arise from the data used to create the algorithms that employers then rely on for hiring decisions.