AI hiring platform Eightfold was sued in California state court this week, the latest example of a legal challenge questioning the use of AI in hiring practices.
The proposed class action, filed on behalf of job seekers who used the platform’s tools—or were evaluated by employers that did—charges that candidates were not properly notified of how their information was being used. According to a report in Reuters, this marks the first case of its kind to allege violations of the Fair Credit Reporting Act by an AI hiring tool.
The latest AI in hiring lawsuit
The suit centers on the predictive nature of Eightfold’s tool, particularly the “dossiers” Eightfold compiles about job candidates from online data. Plaintiffs contend applicants have no knowledge of these reports, nor are they given the opportunity to review or dispute the findings within, which are fed into Eightfold’s proprietary AI to predict a candidate’s likelihood of success in a particular role.
The two named plaintiffs charge that the technology put them at a hiring disadvantage when they sought positions at Eightfold clients, including Microsoft and PayPal. According to Eightfold, it counts about one-third of the Fortune 500 among its clients.
According to the complaint, plaintiffs say Eightfold is “collecting personal data such as social media profiles, location data, internet and device activity, cookies and other tracking, to create a profile about the candidate’s behavior, attitudes, intelligence, aptitudes and other characteristics that applicants never included in their job application.”
In a statement to Reuters, however, an Eightfold spokesperson emphasized that, to build the datasets the AI analyzes, the company does not “scrape social media and the like.”
“We are deeply committed to responsible AI, transparency and compliance with applicable data protection and employment laws,” spokesperson Kurt Foeller told the outlet.
The topic of responsible AI gets considerable coverage on Eightfold’s website. The company says it regularly conducts AI audits, monitors new regulations, handles data “with a focus on privacy,” and operates an AI Ethics Council, among other strategies.
AI for HR: growing legal risk?
The Eightfold suit highlights the increasingly complex legal landscape facing employers as AI regulations continue to take shape.
The use of AI in hiring is making legal waves well beyond Eightfold: HR tech giant Workday is facing its own class action regarding its AI-powered screening tools, which plaintiffs contend unfairly discriminate against older job candidates.
Sarah Smart, co-founder of HorizonHuman, recently wrote for HR Executive that “AI in HR technology is already influencing decisions that directly impact who gets hired, promoted or left behind.” Regardless of where the Workday suit ends up, she says the case is a “call to action for HR leaders. HR executives must become proficient in AI’s applications, risks and governance.”
That mandate extends far beyond the use of AI in hiring, as the tech permeates the breadth of what HR touches. For instance, just last week, Amazon asked a judge to toss out a proposed class action by employees who charge that the tech giant routinely dismisses employee requests for accommodations and that its use of AI to screen those requests is problematic.
In analyzing HR tech trends over the last year, HR Tech Chair Steve Boese recently wrote that the legal murkiness surrounding AI for HR is driving up demand for transparency from vendors.
He notes that HR is questioning vendors “more aggressively” about how their AI models work, while employees are pressing for clearer expectations around the tech’s influence and regulators are seeking real documentation, “rather than promises.”
“Trust is now a defining competitive differentiator in HR technology,” Boese writes.
The post Eightfold suit highlights the legal risks of AI in hiring appeared first on HR Executive.