Artificial intelligence (AI) is transforming how companies find, evaluate, and hire talent, but it’s also drawing scrutiny from regulators and the courts. Two big developments in May 2025 show that HR teams must take a closer look at their hiring tools to avoid legal and compliance risks.
Let’s break it down.
What’s Happening in California?
California is preparing to implement new civil rights regulations that are likely to affect the use of automated decision-making systems (ADSs) in employment and other state-supported programs. These rules—expected to take effect as soon as July 1, 2025—aim to prevent discrimination based on protected characteristics like race, gender, age, disability, or religion.
While the regulations don’t ban AI tools outright, they make it unlawful to use any system, automated or not, that results in discriminatory outcomes.
What Counts as Discriminatory?
The new rules target AI tools that analyze candidates’ voices, facial expressions, personality, or availability, especially if those tools lead to biased outcomes.
Example: An AI tool that treats a lack of smiling during a video interview as a sign of unfriendliness could unfairly penalize candidates from cultures in which smiling less is the norm.
If an AI tool produces different outcomes for people in protected groups, it could violate the law, even if there’s no intent to discriminate.
What About the Workday Lawsuit?
At the same time, a major collective action lawsuit against Workday, a popular HR tech provider, is moving forward in federal court. The claim? That its AI-powered hiring software discriminated against applicants over age 40.
The lawsuit is led by Derek Mobley, a Black man over 40 with anxiety and depression. He says he applied to 100+ jobs using Workday’s systems and was rejected every time.
On May 16, 2025, a judge ruled that his age discrimination case can proceed as a nationwide collective action under the Age Discrimination in Employment Act (ADEA), potentially involving hundreds of thousands or even millions of jobseekers.
The case is a wake-up call for employers: Even if you didn’t build the AI tool yourself, you can still be liable for the discriminatory impact of third-party algorithms used in your hiring process.
What Should HR Teams Do Now?
Regardless of whether you’re in California, these developments show that AI compliance is now an HR priority. Here’s your action plan:
1. Review your tools. Audit your hiring systems, especially those involving AI. Do they analyze résumés, screen video interviews, or assign “fit scores”? If so, ask for evidence that they have been tested for bias.
2. Demand transparency from the vendor. If you use third-party platforms like Workday, ask for:
· Documentation of bias testing,
· Clear explanations of how decisions are made, and
· Contract terms, such as indemnification provisions, that allocate legal risk between you and the vendor.
3. Keep a human in the loop. Don’t let AI make the final call. Ensure someone in HR reviews and can override automated decisions.
4. Track outcomes. Analyze your hiring data regularly. Are you seeing unexplained gaps by age, race, or gender? These could be signs of disparate impact, a legal red flag. (One common screening check, the four-fifths rule, is sketched after this list.)
5. Form an AI governance team. Create a cross-functional team (HR, legal, IT) to set policies, vet systems, and monitor ongoing use of AI in employment.
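Because step 4 is the most data-driven item on this list, here’s a minimal sketch of one widely used screening heuristic: the EEOC’s “four-fifths” (80%) rule, under which a group whose selection rate falls below 80% of the highest group’s rate warrants a closer look. The sample data, group labels, and function names below are illustrative assumptions, not a compliance standard; failing the check isn’t proof of discrimination, and passing it isn’t a safe harbor.

```python
# Minimal, illustrative disparate-impact screen using the EEOC's
# "four-fifths" rule: flag any group whose selection rate is below
# 80% of the highest group's rate. All names and data are hypothetical.

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, was_hired) pairs; returns rate per group."""
    applied = defaultdict(int)
    hired = defaultdict(int)
    for group, was_hired in records:
        applied[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / applied[g] for g in applied}

def four_fifths_flags(rates, threshold=0.8):
    """Return {group: ratio_to_top_rate} for groups below the threshold."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical applicant outcomes grouped by age band.
records = [
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("40_plus", False), ("40_plus", False), ("40_plus", True),
]

rates = selection_rates(records)   # under_40: ~0.67, 40_plus: ~0.33
flags = four_fifths_flags(rates)   # 40_plus is at 0.5 of the top rate
for group, ratio in flags.items():
    print(f"{group}: {ratio:.0%} of top group's rate -- potential red flag")
```

In practice, you would run a check like this at each stage of the pipeline (résumé screen, interview, offer) and for each protected characteristic, and bring the results to counsel rather than acting on them alone.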
Why It Matters
California’s regulations and the Workday lawsuit are just the beginning. With the federal government scaling back enforcement, states and private lawsuits are picking up the slack. That means more legal exposure for companies using AI, especially those that aren’t watching closely.
HR isn’t just a user of these tools anymore. It’s now the first line of defense against AI-driven bias. AI can help you hire better and faster, but only if it’s used responsibly and fairly. Take these changes seriously, get ahead of the curve, and make sure your hiring process is both efficient and equitable.
Adam Bouka is an attorney with Holland & Hart LLP in Salt Lake City, Utah, and can be reached at abouka@hollandhart.com.