Artificial intelligence (AI) has become commonplace in recruiting, screening, interviewing, testing, promotion, and employee monitoring. Properly designed and governed, AI can streamline processes and improve consistency. In employment decision-making, however, AI can introduce legal and operational risks for the employer, even when the AI tools are built and operated by third-party vendors. Businesses should understand where and when liabilities may arise and use vendor contracts to mitigate and allocate those risks before deploying AI as part of employment decisions.
Legal Risks in Using AI for Employment Decisions
A central legal risk of using AI in employment decisions is that AI tools can encode or amplify historical bias. Disparate treatment claims can arise where systems use or infer protected characteristics such as age, race, religion, sex, disability, or genetic information, either directly or through proxies like geography or graduation dates. Disparate impact claims can follow when “neutral” criteria disproportionately affect protected groups and cannot be justified as job-related and consistent with business necessity, or when less discriminatory alternatives exist. You cannot avoid liability by pointing to a vendor’s algorithm as the cause of the legal violation.
Defensibility of employment decisions is complicated when the AI tool used is a “black box,” that is, when the tool’s workings are not visible or understandable to users. You may be unable to articulate legitimate, nondiscriminatory reasons for adverse actions when the only explanation is a score. You may struggle to prove job-relatedness and business necessity without transparency into the AI tool’s features, training data, and model logic. If you cannot examine or reproduce how an AI tool functions and why it generated a given outcome, you risk failing the validation and recordkeeping expectations of the Uniform Guidelines on Employee Selection Procedures (UGESP).
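To make the disparate impact analysis concrete, the sketch below shows how an adverse impact ratio is commonly computed under UGESP’s “four-fifths rule.” The groups and applicant counts are hypothetical and purely illustrative.

```python
# Hypothetical screening outcomes; the groups and counts are illustrative only.
outcomes = {
    "group_a": {"applied": 200, "advanced": 120},
    "group_b": {"applied": 150, "advanced": 60},
}

# Selection rate = candidates advanced / candidates applied, per group.
rates = {g: c["advanced"] / c["applied"] for g, c in outcomes.items()}

# UGESP four-fifths rule: a selection rate below 80% of the highest
# group's rate is generally regarded as evidence of adverse impact.
highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate={rate:.2f}, impact ratio={impact_ratio:.2f} -> {flag}")
```

Bias audit regimes such as New York City’s Local Law 144, discussed below, require reporting a similar impact ratio for each demographic category.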
Additionally, disability discrimination risks increase when AI tools create barriers for individuals with disabilities, for example, video or voice analysis that penalizes speech, hearing, or neurological differences. The Americans with Disabilities Act (ADA) requires reasonable accommodations, accessible platforms (for example, conformance with Web Content Accessibility Guidelines (WCAG) standards), and alternative processes. Failing to provide accommodations, or to offer meaningful human review and individualized assessment, can result in employer liability.
You should also be mindful of complying with U.S. state and local AI laws. Illinois’s AI Video Interview Act requires notice and consent for AI analysis of interview videos. Maryland law requires an applicant’s consent before an employer uses facial recognition technology during an interview. New York City’s Local Law 144 requires annual bias audits of automated employment decision tools, mandates public disclosure of audit summaries, and requires pre-use notices to applicants and employees. Colorado’s SB 24-205, effective February 1, 2026, regulates “high-risk” AI systems, including many employment uses, and imposes reasonable care obligations, impact assessments, risk management, and disclosure of algorithmic discrimination risks on both developers and deployers.
Although this article focuses on U.S. law, employers with global operations should note that the European Union’s (EU) General Data Protection Regulation (GDPR) imposes strict transparency, data minimization, and purpose limitation rules. The EU AI Act classifies many employment tools as “high risk,” triggering obligations around risk management, data quality, human oversight, documentation, and ongoing monitoring of the AI tool once deployed.
Addressing Legal Risks Through AI Vendor Contracts
Contracting with AI vendors can help your business reduce, allocate, and manage risk. Agreements should be tailored to how you intend to use the AI tool, the jurisdictions in which you do business, where your employees and applicants reside, and your business’s risk tolerance. Vendor contracts should address validation of the AI tool, explainability, accessibility, privacy, and governance expectations.
Consider including the following elements in vendor contracts involving AI tools used for employment decisions:
- Require the vendor to provide model documentation (for example, model cards or data sheets), validation studies, bias audits and recurring bias testing, and the results of risk and impact assessments;
- Reserve the right to conduct independent audits or to retain a third party to perform them;
- If bias arises, require immediate mitigation steps, suspension of use where appropriate, and cooperation in remediation plans;
- Limit collection and use of personal information to what is necessary; and
- Prohibit secondary use of, or model training on, the personal information supplied by the employer unless the business provides written authorization.
In addition, vendor contracts should include provisions for:
- Predeployment validation and pilot use of the AI tool;
- Reproducibility of the decisions the AI tool makes;
- Security controls;
- Immutable decision logs capturing inputs, model versions, and outputs for each decision (a minimal sketch of such a log follows this list);
- Indemnities covering regulatory investigations and violations of applicable law by the AI tool; and
- Assistance with transferring data upon termination of the agreement.
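To illustrate the decision-log provision above, here is a minimal sketch of what “immutable” can mean in practice: each entry embeds a hash of the previous entry, so any after-the-fact alteration is detectable. This assumes a Python-based deployment, and the class and field names are hypothetical.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only log; each entry embeds a hash of the previous entry,
    so any later alteration breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, candidate_id, inputs, model_version, output):
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        body = {
            "timestamp": time.time(),
            "candidate_id": candidate_id,
            "inputs": inputs,                # features supplied to the tool
            "model_version": model_version,  # exact model/version used
            "output": output,                # score or recommendation returned
            "prev_hash": prev_hash,
        }
        serialized = json.dumps(body, sort_keys=True)
        body["entry_hash"] = hashlib.sha256(serialized.encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Recompute every hash; returns True only if no entry was altered."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev:
                return False
            serialized = json.dumps(body, sort_keys=True)
            if hashlib.sha256(serialized.encode()).hexdigest() != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

log = DecisionLog()
log.record("cand-001", {"years_experience": 7}, "screener-v2.3", {"score": 0.81})
log.record("cand-002", {"years_experience": 2}, "screener-v2.3", {"score": 0.34})
print(log.verify())  # True; altering any recorded field would make this False
```

A contract might pair a log like this with a retention schedule matching your recordkeeping obligations and a duty to produce entries on request.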
These are just some of the provisions to consider including in contracts with vendors whose AI tools are used in employment decisions. The key priorities are ensuring that AI-assisted employment decisions are fair, unbiased, and explainable if challenged, and that the records needed to demonstrate this are retained.
Elizabeth Shirley is an attorney with Burr & Forman LLP and can be reached at bshirley@burr.com.

