Future of Talent Weekly Newsletter

Does the Legal Case Against Eightfold Have Merit?

Kevin Wheeler
Jan 26, 2026

Photo by Tingey Injury Law Firm on Unsplash

Why is the legal case against Eightfold novel, and what new questions does it pose?

Historically, human screening has been limited by three structural features.

  • First, decisions are explainable. A human recruiter, however biased or inconsistent, can usually explain why a candidate was shortlisted or rejected.

  • Second, accountability is localized. A specific recruiter or hiring manager can be identified as responsible for the decision.

  • Third, the scale of impact is limited. A single recruiter might screen hundreds or thousands of candidates, but not millions in real time across multiple employers.

AI-based screening systems change all three conditions.

  • First: Explainability. Modern AI systems infer patterns from historical data and apply correlations that may not be understood even by their designers. This creates a “black box” decision environment. A candidate rejected by an AI system cannot get a meaningful explanation beyond general statements about “fit” or “ranking scores.”

  • Second: Responsibility. With human screening, responsibility is clear. With AI, it is spread across vendors, employers, data providers, and system integrators. This produces a responsibility gap. When bias or discrimination occurs, no single person or entity can easily be held accountable. The vendor claims it merely provides a tool. The employer claims it relied on a certified system. The algorithm itself is treated as neutral. This spread of responsibility is new and legally unresolved.

  • Third: Scale and automation. AI systems screen continuously across entire labor markets, so small biases in training data can be amplified. Human bias existed before, but it never operated with this level of consistency, speed, and reach. Algorithmic bias is not just individual prejudice; it is systemic bias encoded into infrastructure.

  • Fourth: Consent. This is the most novel dimension. Historically, candidates implicitly consented to human evaluation by applying for a job. They understood that a person would read their resume.

    What they did not consent to, and often are not informed about, is automated profiling using behavioral data, inferred attributes, external datasets, and predictive scoring models that may go far beyond the information they knowingly provided.

This raises legal issues related to data protection, informed consent, and automated decision rights, especially under GDPR, the EU AI Act, and emerging US state laws.

Does the case have merit?
