The Algorithm Denied Your Loan. Good Luck Suing.
Why America's laws can't handle AI harms — and what a fix actually looks like (The computer says no!)
In August 2023, the EEOC announced a $365,000 settlement with iTutorGroup, a Shanghai-based online tutoring company. The company’s hiring software had been programmed to automatically reject female applicants over fifty-five and male applicants over sixty. More than two hundred people were screened out. The practice was discovered only because one applicant, suspicious after being rejected, submitted two identical applications with different birth dates. Only the younger version got an interview.
What made the case remarkable wasn’t the discrimination. It was the rarity. Of the thousands of companies using automated hiring tools across the country, iTutorGroup was one of a handful to face any federal enforcement at all. The EEOC, the agency responsible for policing employment discrimination, has been operating on a budget that’s been essentially flat in real terms for over a decade. It can’t systematically monitor the explosion of AI-driven hiring. Nobody can.
This isn’t just a hiring problem. It’s a governing problem. And it touches nearly everything.
The Vacuum
Right now, the United States has no comprehensive federal AI statute. None. The agencies that touch various corners of AI — the FTC for consumer protection, the FDA for medical devices, the CFPB for lending — are working with statutes drafted decades before modern machine learning existed, and they’re doing it with shrinking budgets and shifting political winds.
Meanwhile, AI systems are making or influencing decisions about who gets hired, who gets a loan, who gets medical treatment, who gets paroled, and who gets an apartment. The gap between the scale of these decisions and the scale of government oversight is staggering.
Consider what happened in Minnesota. In late 2023, a class of plaintiffs sued UnitedHealth Group, alleging that its AI tool called nH Predict — built by subsidiary NaviHealth — was systematically denying post-acute care coverage to elderly Medicare Advantage patients. The complaint alleged the algorithm overrode physician recommendations at extraordinary rates and carried what plaintiffs characterized as roughly a ninety-percent error rate, measured by how often denials were ultimately reversed on appeal. UnitedHealth kept using it, the plaintiffs alleged, because so few patients actually appealed that the cost savings from denied claims outweighed the cost of the reversals. UnitedHealth disputes these allegations, calling the tool merely “a guide.” No federal agency stepped in. The private lawsuit was the only enforcement mechanism in sight.
Or consider algorithmic tenant screening. Researchers have documented that tools aggregating criminal records, credit reports, eviction histories, and social media to generate “risk scores” for renters systematically disadvantage Black and Latino applicants, applicants with disabilities, and domestic-violence survivors. The CFPB issued guidance in 2022 — not about tenant screening specifically, but about adverse-action notices for credit decisions involving algorithms. Separate reminders about tenant screening followed later. No enforcement action materialized. The primary mechanism for challenging these practices has been private litigation, with mixed results.
This pattern — real harms, absent regulators, private plaintiffs scrambling to fill the gap — is playing out across every domain where AI touches consequential decisions. And the plaintiffs stepping into the breach are armed with legal tools that were never designed for this fight.
Why Existing Law Keeps Breaking
I recently published an article in the Georgia Law Review that maps the full landscape of private AI enforcement: what’s working, what’s failing, and what a real fix would look like. The core finding is blunt: every existing legal doctrine captures some AI harms in some contexts, but each one breaks down when it confronts the features that make AI systems genuinely different from the things law was built to handle.
Those features are worth naming, because they’re not just academic abstractions. They’re the reason your current legal rights are effectively decorative when an algorithm harms you.
Opacity. Modern AI systems — deep neural networks with billions of parameters — arrive at decisions through mathematical operations that don’t map to human-readable reasoning. When a hiring algorithm rejects you, neither you nor the company that deployed it, nor in many cases the engineers who built it, can give a complete account of why. Tort law needs you to prove causation: the defendant did this, which caused that. Employment discrimination law needs you to identify the specific “employment practice” that caused a disparity. Products liability needs you to identify a “defect.” All of these doctrines assume somebody can open the hood and explain the mechanism. With AI, the hood is welded shut.
This isn’t a hypothetical problem. Under Title VII’s disparate-impact framework, a plaintiff challenging a biased hiring algorithm needs to isolate the specific employment practice responsible for the disparity and then demonstrate that an alternative practice would achieve the same business objectives with less discriminatory impact. When the “practice” is a neural network processing hundreds of variables through millions of nonlinear operations, identifying the specific mechanism — let alone proposing a less-discriminatory alternative — may be literally impossible. The opacity doesn’t just raise the bar for plaintiffs. It eliminates the bar and replaces it with a wall.
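To make the aggregate-versus-mechanism point concrete, here is a minimal sketch in Python with invented numbers. It shows the kind of audit analysis a plaintiff's expert can actually run: measuring a disparity in outcomes across groups (using the EEOC's informal four-fifths rule as a benchmark) without saying anything about which input or interaction inside the model produced it.

```python
# Illustrative only: hypothetical audit data, not drawn from any real case.
# Shows how a disparity can be measured in aggregate even when the model
# itself is a black box.

def selection_rate(outcomes):
    """Fraction of applicants in a group who advanced past the screen."""
    return sum(outcomes) / len(outcomes)

# 1 = advanced past the automated screen, 0 = rejected.
group_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # e.g., applicants under 40
group_b = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]   # e.g., applicants 40 and over

rate_a = selection_rate(group_a)           # 0.70
rate_b = selection_rate(group_b)           # 0.30

impact_ratio = rate_b / rate_a             # 0.43
print(f"Selection rates: {rate_a:.2f} vs {rate_b:.2f}")
print(f"Adverse impact ratio: {impact_ratio:.2f} (below 0.80 flags a disparity)")

# What this cannot do: identify which of the model's hundreds of inputs,
# or which interaction among them, produced the gap, i.e., the "specific
# employment practice" that Title VII asks the plaintiff to isolate.
```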
Distributed causation. An AI system that harms you probably wasn’t built by one company. There’s a data provider, a foundation-model developer, a fine-tuner, a deployer, and maybe others. When the harm occurs, each actor points at the others. The training data was biased — blame the data provider. The model was fine-tuned poorly — blame the fine-tuner. The deployer used it in the wrong context — blame the deployer. Traditional law expects a plaintiff to identify the responsible party. AI supply chains are specifically designed to make that impossible.
Think of it this way. When a defective car injures someone, the plaintiff sues the manufacturer. The supply chain is linear and traceable. But when a medical AI misdiagnoses a patient, the harm may result from training data curated by Company A, a foundation model built by Company B, fine-tuning performed by Company C, and a deployment decision made by Hospital D. The patient doesn’t know which link in the chain failed. Neither, possibly, does anyone else. And each defendant has a powerful incentive to point the finger down the line.
Scale. A single biased algorithm deployed by a major employer screens out thousands of qualified candidates before anyone notices a pattern. A recommendation engine amplifying harmful content reaches hundreds of millions of users in hours. Case-by-case adjudication — the basic unit of American civil justice — cannot keep pace with harms that are systemic and simultaneous.
Emergent behavior. Large language models develop capabilities their creators didn’t program and didn’t predict. A model trained to predict the next word in a sequence may learn to write code, generate persuasive misinformation, or produce outputs that harm people in ways nobody anticipated. Legal doctrines built on foreseeability — negligence, products liability, failure-to-warn — don’t work well when the defendant couldn’t have foreseen the harm either.
Rapid evolution. AI capabilities are changing faster than any legal regime can adapt. Tort law evolves through case-by-case adjudication measured in decades. Statutes are fixed at enactment. Any framework that’s specific enough to be useful today may be obsolete by the time it’s actually applied.
Put these together and you get the enforcement equivalent of bringing a sword to a drone fight. Tort law can’t get past the causation barrier. Products liability can’t decide whether AI is a “product” — software has traditionally been classified as a service or information, not a tangible good, and courts have split on whether algorithmic outputs qualify as “products” at all. Contract law is neutralized by the mandatory-arbitration clauses and liability waivers buried in every terms-of-service agreement you’ve ever clicked through — and in any event, contract claims don’t help the third parties harmed by AI systems, because they have no contractual relationship with the AI company. Consumer protection statutes are the most promising tool, but they require standing as a “consumer” in a “consumer transaction,” and many AI harms affect people who have no direct relationship with the AI company at all. Intellectual property protects creators, not the public. Civil rights statutes require you to prove discriminatory intent or identify the specific practice causing the disparity — requirements that opaque algorithms are engineered to frustrate.
Illinois’s Biometric Information Privacy Act — BIPA — is the one bright spot, and it’s instructive. BIPA imposes specific obligations on companies that collect biometric data, provides a private right of action with per-violation statutory damages, and has generated enormous enforcement activity, including the landmark litigation against Clearview AI. But BIPA covers biometric data only. The vast majority of AI harms — discriminatory hiring, wrongful insurance denials, algorithmic tenant screening, manipulative recommendation engines — fall entirely outside its scope.
The formal availability of legal remedies masks a functional enforcement gap. You technically have rights. You practically can’t use them.
Why “Just Regulate It” Isn’t Enough
A natural response is: fine, fix public regulation. Pass a federal AI law. Fund the agencies. Problem solved.
I wish. Public enforcement faces three structural constraints that make it an insufficient standalone response.
First, there’s a capacity problem. The FTC has about 1,100 employees and a $400 million budget — for all of consumer protection and competition enforcement, not just AI. The EEOC is stretched even thinner. The FDA lacks clear statutory authority over many AI-enabled medical tools. The CFPB has signaled interest in algorithmic discrimination but faces the same budget constraints and, depending on the administration, active political opposition to enforcement. These agencies can bring a handful of high-profile AI enforcement actions per year. They cannot systematically police the thousands of companies deploying algorithmic decision-making across every sector of the economy.
The comparison to other domains is instructive. When Congress concluded that securities fraud exceeded the SEC’s enforcement capacity, it didn’t just give the SEC more money. It empowered private plaintiffs. When environmental pollution outstripped the EPA, Congress enacted citizen-suit provisions in every major environmental statute. When consumer fraud proliferated beyond what the FTC could handle, states created “little FTC Acts” with private rights of action. In each case, lawmakers recognized that public enforcement alone would never be sufficient and designed private enforcement as its complement. For AI, neither half of that bargain exists: the agencies haven’t been meaningfully resourced, and private plaintiffs haven’t been armed.
Second, there’s a knowledge problem. AI harms are diffuse, individualized, and often invisible. A biased hiring algorithm silently rejects candidates who never learn why. A discriminatory credit model denies applications in a pattern visible only in aggregate. A recommendation algorithm gradually radicalizes users through an incremental process that neither the user nor an outside observer can easily detect. The affected individuals are often the only people in a position to notice something went wrong. This is the classic justification for private enforcement: when harms are dispersed and invisible to centralized authorities, private parties serve as the detection system that regulators can’t be.
The knowledge problem is compounded by speed. By the time a federal agency identifies, investigates, and brings an enforcement action against a harmful AI practice — a process that typically takes months or years — the technology may have evolved beyond the regulator’s understanding. Private litigants, who experience harms in real time and can file suit without the bureaucratic delays inherent in agency action, are structurally better positioned to respond to rapidly changing AI capabilities.
Third, there’s a political economy problem. Technology companies are among the most powerful lobbying forces in American politics. They spent over $100 million on federal lobbying in the 2023–2024 cycle, with AI regulation high on the agenda. The revolving door between tech companies and regulatory agencies is well-documented. Enforcement intensity varies dramatically across administrations — the CFPB’s enforcement posture, for example, has swung wildly depending on who’s running it. There is no reason to think AI enforcement would be different.
Private enforcement is not immune to political influence — courts are shaped by judicial appointments, legislatures can strip private rights of action, and procedural barriers can be erected to discourage litigation. But private enforcement is more resistant to capture than public enforcement because it’s decentralized, initiated by diverse actors with diverse motivations, and adjudicated by courts that are more insulated from direct political pressure than agencies subject to presidential control. In a domain as politically charged as AI, that relative insulation matters enormously.
None of this means public enforcement is unimportant. It means public enforcement alone is insufficient. Private enforcement isn’t a second-best alternative; it’s a structurally necessary complement.
The State Patchwork
Some states have started to act, and their efforts are informative — both for what they get right and what they miss.
Colorado’s AI Act, signed in 2024, is the most ambitious state-level attempt. It establishes risk-tiered duties for developers and deployers, requires impact assessments and bias testing for high-risk systems, and mandates that consumers be notified when AI plays a role in consequential decisions. It’s a solid framework. But Colorado relies primarily on enforcement by the attorney general, with no private right of action for affected individuals. That means everything depends on the capacity and priorities of a single office — exactly the bottleneck that has crippled federal enforcement.
Illinois took a different path with BIPA, which has generated more private enforcement activity than any other AI-adjacent statute in the country. BIPA’s design — specific obligations, per-violation statutory damages, and a private right of action — has proven devastatingly effective. The Clearview AI litigation, which challenged a facial-recognition company that scraped billions of photos from the internet without consent, would never have happened through public enforcement alone. But BIPA only covers biometric data. If the algorithm that harmed you was making credit decisions or screening job applicants or denying insurance claims, BIPA doesn’t help.
Other states are moving in various directions — some focused on deepfakes, some on automated employment decisions, some on algorithmic transparency. The result is an emerging patchwork: different states covering different harms with different mechanisms, leaving enormous gaps and creating compliance headaches for companies operating nationally. What’s needed isn’t another piece of the patchwork. It’s a comprehensive template that any state can adopt.
A Model Act
The Georgia Law Review article proposes a concrete, adoptable statute: the Private AI Accountability Act (PAIAA). It’s a model state law, designed to be introduced in any state legislature, that takes the enforcement architecture that works in other domains and calibrates it to AI’s specific challenges.
Here’s what it does.
Risk-tiered duties. Not all AI is created equal. An algorithm recommending movies is different from one deciding who gets chemotherapy. The PAIAA classifies AI systems by risk tier, concentrating the heaviest obligations on “high-risk” systems — those used in employment, healthcare, criminal justice, housing, credit, and education. Developers and deployers of high-risk systems must test for bias, document their systems’ limitations, disclose when algorithmic decision-making is being used, and provide affected individuals with the ability to appeal to a human.
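To give a flavor of how a deployer's compliance tooling might operationalize a risk-tier scheme like this, here is a purely hypothetical sketch. The domain list tracks the summary above; the tier labels, duty descriptions, and classification logic are illustrative assumptions, not the model act's actual definitions.

```python
# Hypothetical illustration of risk-tier classification under a statute
# like the PAIAA. The domains follow the summary above; everything else
# (tier names, duty wording, helper names) is an assumption for illustration.

HIGH_RISK_DOMAINS = {
    "employment", "healthcare", "criminal_justice",
    "housing", "credit", "education",
}

HIGH_RISK_DUTIES = [
    "pre-deployment and ongoing bias testing",
    "documentation of known limitations",
    "disclosure that an algorithm is involved in the decision",
    "a human appeal channel for affected individuals",
]

def classify(system_domain: str, consequential: bool) -> dict:
    """Return the (illustrative) tier and duties for a deployed system."""
    if consequential and system_domain in HIGH_RISK_DOMAINS:
        return {"tier": "high-risk", "duties": HIGH_RISK_DUTIES}
    return {"tier": "standard", "duties": ["baseline transparency"]}

print(classify("employment", consequential=True))
print(classify("movie_recommendation", consequential=False))
```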
A causation presumption. This is the big one. When a plaintiff shows they were exposed to a covered AI system, suffered the kind of harm the system is capable of causing, and the harm occurred in a timeframe consistent with that exposure, the burden shifts. The defendant — the entity that built or deployed the opaque system — must prove the system didn’t cause the harm. This flips the fundamental dynamic. Right now, opacity is an asset: the more inscrutable your algorithm, the harder it is for anyone to prove it hurt them. Under the PAIAA, opacity becomes a liability. If you can’t explain your system, you bear the consequences of that silence.
Joint and several liability across the supply chain. If your AI system harms someone, they can recover from any entity in the chain — the data provider, the developer, the fine-tuner, the deployer. This eliminates the “pointing fingers” defense and creates powerful incentives for mutual monitoring. Developers will care whether deployers are using their systems responsibly, because they’re on the hook if the deployers don’t. Deployers will demand better documentation from developers, because they’re on the hook if the system is defective.
Statutory damages with teeth. Plaintiffs can elect statutory damages — $1,000 to $5,000 per negligent violation, $5,000 to $25,000 per knowing or reckless violation — instead of having to prove actual damages. Each consequential decision made by a covered AI system in violation of the act with respect to a distinct individual is a separate violation. In class actions, damages are calculated per class member, with a judicial safety valve for aggregate awards that would be constitutionally disproportionate.
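To see why those numbers have teeth at scale, here is a back-of-the-envelope calculation. The class size is hypothetical; the per-violation figures come from the ranges described above.

```python
# Hypothetical class-action exposure under the statutory-damages ranges
# described above. Class size and the elected per-violation amount are
# illustrative assumptions, not figures from any actual case.

class_members = 5_000          # individuals screened by the covered system
violations_per_member = 1      # one consequential decision each
per_violation = 1_000          # low end of the negligent-violation range

aggregate = class_members * violations_per_member * per_violation
print(f"Aggregate exposure: ${aggregate:,}")   # $5,000,000

# At the knowing/reckless floor of $5,000 per violation, the same class
# yields $25,000,000, which is where the statute's judicial safety valve
# for constitutionally disproportionate aggregate awards would come in.
```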
One-way fee-shifting. A prevailing plaintiff recovers attorney’s fees. A prevailing defendant recovers fees only if the plaintiff’s claim was frivolous or brought in bad faith. This is the mechanism that makes private enforcement economically viable for individual plaintiffs who can’t afford to finance litigation against well-resourced AI companies. It’s the same mechanism Congress uses in civil-rights statutes, and it works.
A safe harbor that rewards responsibility. This is the provision that answers the “innovation chill” objection. Companies that can demonstrate good-faith compliance with the act’s testing, documentation, and monitoring requirements get a concrete litigation advantage: the causation presumption doesn’t apply to them, they’re insulated from statutory damages, and their exposure is limited to actual compensatory damages. The statute doesn’t punish capability. It punishes opacity, negligence, and the failure to do the process work that competent AI developers already do. Companies that invest in responsible practices get rewarded. Companies that ship untested algorithms into consequential decisions without safeguards bear the full weight of the statute’s remedies.
An adaptability mechanism. Because AI changes faster than legislatures can act, the PAIAA includes an administrative mechanism for updating risk classifications and compliance benchmarks through notice-and-comment rulemaking. The statute’s core obligations are technology-neutral and functionally defined, so they don’t become obsolete when the technology evolves.
Protection against contractual evasion. The act’s substantive rights — including statutory damages, the causation presumption, fee-shifting, and the right to proceed as a class — cannot be prospectively waived by contract. Not by an arbitration clause, not by a limitation-of-liability clause, not by a forum-selection clause, not by any other contractual device. This is critical because AI companies, like every other industry with sophisticated legal counsel, would otherwise bury liability waivers in terms-of-service agreements that nobody reads.
The act is designed with constitutional constraints in mind. Its obligations attach at the point of deployment — when an AI system is used in a consequential decision within the enacting state — not at the point of development. This deployment-focused trigger means companies can’t avoid compliance by incorporating elsewhere. It also means the act stays within established personal-jurisdiction, dormant Commerce Clause, and extraterritoriality boundaries.
The Objections (and Why They Don’t Hold)
I anticipate three main pushbacks.
“This will kill innovation.” It won’t, for the same reason products liability didn’t kill the pharmaceutical industry, environmental citizen suits didn’t kill the chemical industry, and private securities enforcement didn’t kill Wall Street. The safe harbor ensures that companies investing in responsible practices face minimal litigation risk. What the statute deters isn’t innovation — it’s negligence. Deploying untested, unaudited AI in high-stakes decisions without safeguards isn’t innovation. It’s corner-cutting, and deterring it is the statute’s intended effect.
“There will be a litigation explosion.” The empirical evidence from BIPA — the closest analogue — suggests that the volume of litigation reflects the volume of violations, not an excess of frivolous claims. BIPA has driven measurable compliance improvements in biometric-data handling. The PAIAA’s risk-tiered structure narrows the plaintiff pool to people harmed by high-risk systems, the safe harbor reduces exposure for compliant companies, and the one-way fee-shifting provision deters strike suits while preserving meritorious claims.
“State-by-state regulation is unmanageable.” The alternative to state-by-state regulation isn’t federal uniformity — it’s no regulation at all, because Congress isn’t acting. A model act that multiple states adopt in substantially similar form produces convergence over time, the same way it has in consumer protection, data privacy, and commercial law. The deployment trigger limits regulatory arbitrage: AI companies must comply wherever they deploy, regardless of where they’re headquartered. And the adaptability mechanism allows state agencies to harmonize their classifications incrementally.
There’s a more sophisticated version of the innovation objection that focuses on competitive asymmetry — that states with aggressive enforcement will drive AI development elsewhere. But the evidence from the GDPR is instructive. The regulation was supposed to cripple European tech. Instead, European technology continued to grow, compliance-technology industries emerged as new innovation sectors, and the “Brussels effect” drove global convergence toward higher standards. Stringent regulation tends to become the baseline, not the outlier.
What’s Really at Stake
Here’s the bottom line. The United States is conducting an unprecedented experiment: deploying the most consequential technology since the internet across every domain of social and economic life while maintaining an enforcement vacuum of historic proportions.
AI systems are deciding who gets hired, who gets healthcare, who gets housing, who gets credit, and who goes to prison. The people affected by these decisions — disproportionately the people with the least power — currently have no reliable legal mechanism to challenge them. The doctrines they’d need to invoke were designed for physical products, human decision-makers, and transparent processes. None of those assumptions hold.
Think about what this means concretely. A sixty-year-old woman applies for a job and gets an automated rejection. She suspects age discrimination, but she can’t prove it because the algorithm is opaque. She can’t sue under tort law because she can’t establish causation. She can’t sue under Title VII because she can’t identify the specific employment practice that screened her out. She might have a claim under a state consumer-protection statute, but she’d need to show she was a “consumer” in a “consumer transaction” — and she wasn’t buying anything, she was applying for work. And to use the application portal at all, she clicked through a terms-of-service agreement with an arbitration clause and a liability waiver. Her legal options, on paper, are extensive. In practice, they’re worthless.
Now multiply that by millions. That’s the enforcement landscape for AI harms in America in 2026.
Private enforcement has always been the American fallback when public institutions fail to protect the public. It’s how we police securities fraud, environmental violations, consumer deception, and civil-rights violations. The question for AI is not whether private enforcement will play this role — it already does, badly. The question is whether we’ll leave it improvised or make it work by design.
The Private AI Accountability Act is a concrete answer. It’s a framework that takes the enforcement mechanisms proven in other domains — statutory damages, fee-shifting, class actions, burden-shifting presumptions — and calibrates them to the specific features that make AI harms different. It rewards companies that invest in responsible development and holds accountable those that don’t. It’s designed to be adopted by state legislatures tomorrow, not after Congress decides to act.
The safe harbor is the key. This isn’t a regime that punishes building powerful AI. It’s a regime that punishes deploying powerful AI irresponsibly — without testing, without documentation, without transparency, without accountability. If you do the work, you’re protected. If you don’t, you’re exposed. That’s not anti-innovation. That’s the minimum expectation of a society that’s decided to let algorithms make life-altering decisions about its citizens.
The states that move first on this will set the template. The history of American regulation — from consumer protection to environmental law to data privacy — shows that well-designed state legislation doesn’t just protect the enacting state’s residents. It creates a gravitational pull. Companies comply with the most demanding standard because it’s cheaper than maintaining different practices for every jurisdiction. The first movers don’t disadvantage themselves. They set the floor for everyone.
We don’t need to wait for Congress. We don’t need to wait for a federal agency to get the budget and the mandate and the political will to police AI. We need enforceable private rights, calibrated to the actual features of the technology, available to the actual people being harmed. That’s what the PAIAA provides.
The full article, including the complete model legislative text, is available in the Georgia Law Review.
This post is adapted from “Private Enforcement of AI Harms: Tort, Contract, or Something New?”, forthcoming (2026).


