Why do you think there’s such contention around which AI tools or features are included in lawsuits like this one?

The contention often stems from the complexity and diversity of AI technologies. Companies argue that different tools or features, even within their own ecosystem, operate on distinct algorithms or platforms, potentially leading to different outcomes for candidates. In this case, the company pushed back on including a specific AI product in the lawsuit, claiming it was fundamentally different from the other tools in question. However, courts are increasingly focused on the end result, namely whether candidates are unfairly impacted, rather than on the technical nuances of each system.

How does this case reflect broader concerns about the use of AI in hiring practices?

This case is emblematic of a growing unease about AI in hiring. There’s a fear that these tools, while designed to streamline processes, can perpetuate or even amplify biases if not carefully monitored. Beyond bias, there are concerns about transparency: candidates often don’t know they’re being evaluated by AI, let alone how those evaluations are made. This lack of clarity raises ethical questions about fairness and consent, pushing the conversation toward stricter oversight and regulation of automated decision-making in recruitment.

What are some examples of existing or upcoming regulations addressing AI in hiring, and how do they shape the landscape?

We’re seeing a wave of regulatory efforts to tackle AI in hiring. New York City has been a pioneer, implementing a law in 2023 that requires audits of automated decision-making tools and mandates candidate notification when such tools are used. Looking ahead, states like California and Colorado are set to introduce their own regulations by 2026, which will likely build on these principles but may add more specific accountability requirements. These laws are shaping a landscape in which companies must prioritize transparency and proactively address potential biases in their AI systems to stay compliant.

What challenges do companies face when trying to comply with court orders or regulations related to AI hiring tools?

Compliance can be incredibly complex. Companies often struggle with logistical hurdles, such as identifying and compiling data on which candidates were evaluated by specific AI tools, especially when systems are integrated across multiple platforms or customer bases. There’s also the challenge of balancing legal obligations with protecting proprietary information or client confidentiality. Courts have acknowledged these difficulties but often emphasize that they’re not insurmountable, pushing companies to find solutions rather than excuses.

Looking ahead, what is your forecast for the future of AI in hiring, and how can companies prepare for potential legal and ethical challenges?

I believe AI in hiring is here to stay, but its future will be defined by a tighter regulatory framework and a stronger emphasis on ethical deployment. We’re likely to see more lawsuits and regulations as stakeholders demand greater accountability. For companies, preparation means investing in robust bias audits, ensuring transparency with candidates, and fostering a culture of continuous improvement in their AI systems. Building trust with both regulators and the public will be key; those who proactively address these challenges will not only mitigate risks but also position themselves as leaders in responsible innovation.

In the case against Workday, Inc., N.D. Cal. Case No.
23-cv-00770-RFL, the plaintiff alleges that Workday’s popular artificial intelligence (AI)-based applicant recommendation system violated federal antidiscrimination laws because it had a disparate impact on job applicants based on race, age, and disability. The court determined that the main issue – whether Workday’s AI system disproportionately affects applicants over 40 – can be addressed collectively, despite the challenges in identifying all potential members of the collective action.
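For readers who want to see how the disparate-impact question at the heart of this case, and the bias audits discussed above, is typically quantified, the sketch below shows a minimal selection-rate comparison under the common four-fifths rule of thumb. The group labels, applicant counts, and 0.8 threshold are illustrative assumptions, not figures from this case or from any actual audit.

# Illustrative sketch only: a minimal adverse-impact check of the kind a bias
# audit might include. All numbers below are hypothetical.

# Hypothetical screening outcomes: applicants screened and applicants advanced, by age group.
outcomes = {
    "under_40": {"screened": 1000, "advanced": 240},
    "40_and_over": {"screened": 800, "advanced": 120},
}

# Selection rate for each group = advanced / screened.
rates = {group: v["advanced"] / v["screened"] for group, v in outcomes.items()}

# Impact ratio for each group = its selection rate divided by the highest selection rate.
# Under the four-fifths rule, a ratio below 0.8 is a common flag for potential adverse impact.
reference_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / reference_rate
    flag = "potential adverse impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} ({flag})")

Real audits are considerably more involved (multiple protected categories, intersectional groups, statistical testing), but the underlying arithmetic is the selection-rate comparison sketched here.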