Mitigate legal risks of AI in employment decisions through vendor contracts
Artificial intelligence (AI) has become commonplace in recruiting, screening, interviewing, testing, promotion, and employee monitoring. Properly designed and governed, AI can streamline processes and improve consistency. In employment decision-making, however, AI can introduce legal and operational risks for the employer, even when the AI tools are built and operated by third-party vendors. Businesses should understand where and when liabilities may arise and use vendor contracts to mitigate and allocate those risks before deploying AI as part of employment decisions.
Legal risks in using AI for employment decisions
The central legal risk in using AI for employment decisions is that AI tools can encode or amplify historical bias. Disparate treatment claims can arise where systems use or infer protected characteristics such as age, race, religion, sex, disability, or genetic information, either directly or through proxies like geography or graduation dates. Disparate impact claims can follow when “neutral” criteria disproportionately affect protected groups and cannot be justified as job-related and consistent with business necessity, or when less discriminatory alternatives exist. An employer cannot avoid liability by pointing to a vendor’s algorithm as the source of the violation.