A proposed class action lawsuit against Workday, a human resources software firm, has thrust the use of AI in hiring into the spotlight, with the U.S. Equal Employment Opportunity Commission (EEOC) weighing in on the potential for algorithmic bias.

The original lawsuit, filed by Derek Mobley, alleges that Workday's AI-powered screening software discriminated against him on the basis of his race, age, and mental health status, causing him to be rejected for over 100 jobs. The EEOC has filed an amicus brief in support of Mobley, arguing that Workday may qualify as an "employment agency" subject to federal anti-discrimination laws.

While proponents of AI recruitment tools argue that they can help remove human biases and streamline hiring, the Workday case underscores the potential for algorithmic bias to perpetuate, and even amplify, existing inequalities.

The implications of this case extend far beyond Workday itself. As the EEOC brief notes, employers who use Workday's platform may also be held liable for discriminatory hiring decisions made by the software. This raises significant questions about the responsibility of companies to ensure that the AI tools they use are free from bias and comply with anti-discrimination laws.
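For companies auditing such tools, one widely used starting point is the EEOC's "four-fifths rule": if the selection rate for any group falls below 80% of the rate for the highest-selected group, that disparity is generally treated as evidence of adverse impact. The sketch below, using hypothetical group names and counts, shows how such a check might be computed in practice.

```python
# Minimal sketch of an adverse-impact check based on the EEOC's
# "four-fifths rule": a selection rate for any group below 80% of
# the highest group's rate is generally treated as evidence of
# adverse impact. Group names and counts here are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (advanced / screened)."""
    return {group: advanced / screened
            for group, (advanced, screened) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < 0.8 for group, rate in rates.items()}

# Hypothetical screening results: (candidates advanced, candidates screened).
results = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(results))  # {'group_a': False, 'group_b': True}
```

Passing such a check is not proof that a screening tool is fair, but failing it is exactly the kind of statistical signal that regulators and plaintiffs point to in disparate impact claims.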

While the outcome of the Workday case remains to be seen, it is clear that the use of AI in hiring will come under increasing scrutiny. As more companies adopt these tools, it is essential that they do so with a clear understanding of the potential risks and a commitment to fairness and non-discrimination.

The Workday case should serve as a catalyst for a broader conversation about the ethical implications of AI in the workplace. It is essential that we prioritise fairness, accountability, and transparency to ensure that the benefits of AI are shared by all, regardless of their background or circumstances.


