Quick Answer
AI liability in 2026 spans contract, tort, product liability, and professional negligence. The revised EU Product Liability Directive (Directive (EU) 2024/2853) now explicitly covers software and AI, while the AI Liability Directive proposal was withdrawn from the Commission's 2025 work programme — leaving national tort law to fill the gap.
- Developers, deployers, and users can all face liability
- Product liability now covers software and AI under EU PLD 2024
- Professional users (doctors, lawyers, engineers) remain accountable for AI-assisted decisions
What Is AI Liability?
AI liability concerns who pays when AI causes harm. Harm types include bodily injury, property damage, pure economic loss, discrimination, privacy violation, and intellectual-property infringement. Causation is often contested because AI outputs are probabilistic and value chains are long (foundation model provider to integrator to deployer to user).
Key Details / Requirements
EU Product Liability Directive 2024 (Directive (EU) 2024/2853)
| Feature | Detail |
|---|---|
| Published | 18 November 2024 (OJ) |
| Applicable from | 9 December 2026 |
| Covers | Software, AI, digital services, digital manufacturing files |
| Defect presumption | Possible when scientific or technical complexity creates disclosure asymmetry |
| Disclosure obligation | Defendants must produce "necessary and proportionate" evidence |
| Liable actors | Manufacturer, authorised representative, importer, fulfilment service provider |
Liability Theories Applicable to AI
| Theory | When It Applies |
|---|---|
| Product liability | Defective AI product causes harm |
| Negligence | Reasonable-care failure in design, deployment, or oversight |
| Breach of contract | AI fails contract specifications |
| Strict product liability | EU PLD 2024, US Restatement (Third) |
| Professional malpractice | Doctor, lawyer, or engineer misuses AI |
| Vicarious liability | Employer liable for an employee's AI misuse |
| Statutory liability | GDPR, ADA, Title VII, consumer-protection statutes |
Real-World Examples / Case Studies
Air Canada Chatbot (2024) — BC Civil Resolution Tribunal: airline liable for misinformation from AI chatbot.
Mata v. Avianca (S.D.N.Y. 2023) — Lawyers and their firm sanctioned USD 5,000 after citing ChatGPT-hallucinated cases.
Park v. Kim (2024) — Second Circuit affirmed sanctions on lawyer citing AI-generated fake cases.
Uber ATG (2018) — Safety driver charged with negligent homicide, later pleading guilty to endangerment (2023); Uber settled civil claims.
Tesla Autopilot (ongoing) — Multiple US wrongful-death suits; jury verdicts split between plaintiff and defense.
Workday (N.D. Cal., pending) — Proposed class action for ADEA violations via AI hiring tools.
What This Means for the AI Value Chain
Liability by Role
| Role | Core Duty |
|---|---|
| Foundation model provider | Safe defaults, accurate documentation, indemnities |
| Deployer (SaaS) | Implement required safeguards, document choices |
| Professional user | Human oversight; verify outputs before acting |
| Consumer user | Follow T&Cs; limited downstream liability |
Typical Indemnification Coverage
| Provider | Customer IP Indemnity |
|---|---|
| Microsoft Copilot Copyright Commitment | Yes (for eligible customers) |
| Google Cloud Vertex AI | Yes (generative AI indemnity) |
| OpenAI Copyright Shield | Yes (ChatGPT Enterprise and API) |
| Adobe Firefly | Yes (IP indemnity) |
| AWS Bedrock | Yes (Titan and selected models) |
Compliance Checklist
- Map each AI value chain: who trains, who deploys, who uses
- Draft AI-specific contracts with indemnity, warranty, and limitation of liability clauses
- Maintain incident response for bodily, property, financial, and digital harms
- For professionals (medical, legal): document human review of AI output
- Carry AI-specific insurance (e.g., Munich Re AI policies, Coalition cyber+AI endorsements)
- Comply with statutory duties (GDPR, HIPAA, Title VII, EU AI Act)
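For the checklist item on documenting human review, a minimal sketch of what an audit record might capture is shown below. This is an illustrative example, not a prescribed format: the `AIReviewRecord` class, its field names, and the sample values are all hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIReviewRecord:
    """Hypothetical audit record: a human verified an AI output before acting on it."""
    model: str              # which AI system produced the output
    output_summary: str     # what the AI produced (or a reference/hash)
    reviewer: str           # the accountable professional
    approved: bool          # whether the output was accepted after review
    notes: str = ""         # corrections or caveats noted by the reviewer
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialize for an append-only audit log.
        return json.dumps(asdict(self))

# Example: a lawyer logs verification of AI-suggested case citations.
record = AIReviewRecord(
    model="example-llm-v1",
    output_summary="3 case citations for motion to dismiss",
    reviewer="jane.doe@firm.example",
    approved=False,
    notes="One citation could not be located in a legal database; removed.",
)
print(record.to_json())
```

A timestamped, reviewer-attributed record like this is the kind of evidence that supports a defense of reasonable professional oversight.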
FAQs
Q: Is the AI Liability Directive dead?
Withdrawn from 2025 work programme; national tort law and the revised PLD continue to apply.
Q: Does EU PLD 2024 affect US companies?
Yes if AI products are placed on the EU market.
Q: Can lawyers use AI without liability?
Yes — but professional duties of competence and candour to the tribunal remain (ABA Model Rules 1.1, 3.3).
Q: Is AI output a product or service?
Depends. Embedded AI in hardware is typically a product; pure SaaS is a service. PLD 2024 clarifies software inclusion.
Q: Do disclaimers eliminate liability?
Rarely — consumer-protection statutes, professional duties, and PLD override unfair disclaimers.
Q: Who is liable for hallucinations?
Typically the deployer or professional user, if a reasonable person would have verified the output before relying on it.
Q: Is AI insurance available?
Yes — multiple insurers offer dedicated products; premiums depend on governance maturity.
Conclusion
There is no single defendant for AI harm — there is a chain. Well-drafted contracts, documented oversight, and insurance close the gap.
Negotiate AI contracts confidently with Misar AI's AI Contracting Playbook.