Quick Answer
The AI Incident Database (AIID) and the OECD AI Incidents Monitor (AIM) now catalogue 3,000+ real-world AI harms. In 2026, incident data is one of the fastest and cheapest inputs to a responsible AI risk assessment.
- AIID launched in 2020 and now maintained by the Responsible AI Collaborative
- OECD AIM launched in 2024 with G7 Hiroshima support
- Incidents are used by NIST, EU AI Office, and UK AISI for scenario planning
What Are AI Incident Databases?
An AI incident is a situation in which the development, deployment, or use of an AI system results in actual harm to people, property, or the environment. The AI Incident Database (incidentdatabase.ai) was launched in November 2020 by Sean McGregor under the Partnership on AI and is now maintained by the Responsible AI Collaborative. The OECD AI Incidents Monitor (oecd.ai/en/incidents) launched in 2024 and harmonises incident classification with the OECD AI Principles.
EU AI Act Article 73 mandates serious-incident reporting for high-risk AI systems, while ISO/IEC TR 5469:2024 provides guidance on the functional safety of AI systems that such reporting supports.
Key Details / Requirements
Common AI Incident Categories (AIID)
| Category | Example |
| --- | --- |
| Bias and discrimination | Amazon hiring AI down-weighting women (2018) |
| Autonomous-vehicle safety | Uber ATG fatal crash, Tempe AZ (2018) |
| Misidentification | Robert Williams wrongful arrest (2020, Detroit) |
| Content moderation failure | YouTube recommending extremist content |
| Healthcare AI error | UnitedHealth nH Predict denials (2023 lawsuit) |
| Financial AI discrimination | Apple Card gender disparities (2019) |
| Deepfake fraud | Arup HKD 200M deepfake transfer (2024) |
| LLM hallucination | Air Canada chatbot liability (2024) |
| Copyright infringement | Stability AI Getty litigation (2023-2025) |
| Privacy breach | ChatGPT title-history leak (2023) |
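The categories above can double as an internal triage taxonomy. A minimal sketch follows; the enum values and the `Incident` record are our own illustration, not AIID's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative taxonomy mirroring the category table above.
# AIID itself uses richer, free-form classifications.
class HarmCategory(Enum):
    BIAS_DISCRIMINATION = "bias and discrimination"
    AV_SAFETY = "autonomous-vehicle safety"
    MISIDENTIFICATION = "misidentification"
    CONTENT_MODERATION = "content moderation failure"
    HEALTHCARE_ERROR = "healthcare AI error"
    FINANCIAL_DISCRIMINATION = "financial AI discrimination"
    DEEPFAKE_FRAUD = "deepfake fraud"
    LLM_HALLUCINATION = "LLM hallucination"
    COPYRIGHT = "copyright infringement"
    PRIVACY_BREACH = "privacy breach"

@dataclass
class Incident:
    title: str
    year: int
    category: HarmCategory

incident = Incident("Air Canada chatbot liability", 2024,
                    HarmCategory.LLM_HALLUCINATION)
print(incident.category.value)  # LLM hallucination
```

Tagging internal near-misses with the same labels you use for external incidents makes trend comparison against AIID data straightforward.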
Mandatory Incident Reporting
| Regulation | Trigger | Deadline |
| --- | --- | --- |
| EU AI Act Art. 73 | Serious incident in high-risk AI | 15 days (10 days if death; 2 days for widespread infringement) |
| US state consumer-protection laws | Varies | Varies |
| India DPDP Act | Personal data breach | 72 hours to Data Protection Board |
| China Generative AI Measures | Illegal content | 24 hours |
| UK DPA 2018 | Personal data breach | 72 hours to ICO |
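The filing windows above are easy to encode so a triage tool can surface the earliest applicable deadline. A sketch, assuming the regime keys and detection timestamps are our own naming (US state laws vary and are omitted):

```python
from datetime import datetime, timedelta

# Reporting windows from the table above. The 2-day EU window applies
# to widespread infringements under Art. 73(4).
REPORTING_WINDOWS = {
    "eu_ai_act_serious": timedelta(days=15),
    "eu_ai_act_widespread": timedelta(days=2),
    "india_dpdp": timedelta(hours=72),
    "china_genai": timedelta(hours=24),
    "uk_dpa": timedelta(hours=72),
}

def report_due(regime: str, detected: datetime) -> datetime:
    """Latest filing time for an incident detected at `detected`."""
    return detected + REPORTING_WINDOWS[regime]

detected = datetime(2026, 3, 1, 9, 0)
print(report_due("uk_dpa", detected).isoformat())  # 2026-03-04T09:00:00
```

Note that the clock typically starts at awareness of the incident, not occurrence, so log the detection timestamp explicitly.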
Real-World Examples / Case Studies
Uber ATG (Tempe, 2018) — Self-driving prototype killed pedestrian Elaine Herzberg. NTSB investigation found operator and system design failures.
Robert Williams (Detroit, 2020) — Wrongful arrest after a facial recognition misidentification. The ACLU case became a reference point for face-recognition moratoria.
nH Predict (UnitedHealth, 2023) — A class action alleges an AI tool with a 90%+ error rate was used to deny Medicare Advantage claims.
Air Canada Chatbot (BC, 2024) — The Civil Resolution Tribunal held the airline liable for its chatbot's incorrect information about bereavement fares.
Arup deepfake (Hong Kong, 2024) — HKD 200M transferred after deepfake CFO video call.
What This Means for Organisations
In 2026, incident management is a core RAI capability. Teams should:
- Subscribe to AIID and OECD AIM for sector-relevant incidents
- Incorporate incident patterns into pre-deployment red-teaming
- Establish a cross-functional incident response plan (IRP)
- Report per applicable law (EU AI Act, DPDP, etc.)
- Publish post-mortems to industry peers via AIID
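The steps above can be captured in a lightweight IRP configuration that tooling and audits can check against. A sketch with placeholder values; the feeds, roles, and regulator entries are illustrative only:

```python
# Minimal incident-response-plan skeleton covering the steps above.
# Every name here is a placeholder to adapt to your organisation.
IRP = {
    "monitoring_feeds": ["AIID", "OECD AIM"],
    "red_team_inputs": "sector-relevant incident patterns, reviewed pre-deployment",
    "response_team": ["legal", "engineering", "security", "comms"],
    "reporting_obligations": {
        "EU AI Act Art. 73": "serious incident in high-risk AI",
        "India DPDP Act": "personal data breach",
    },
    "post_mortem_channel": "AIID public submission",
}

def missing_elements(plan: dict) -> list[str]:
    """Flag required IRP elements that are absent or empty."""
    required = ["monitoring_feeds", "response_team", "reporting_obligations"]
    return [key for key in required if not plan.get(key)]

print(missing_elements(IRP))  # []
```

A check like `missing_elements` run in CI keeps the plan from silently drifting out of date.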
Compliance Checklist
- Designate an AI Incident Response Lead
- Define "incident" and "serious incident" in policy
- Integrate incident triage with existing cybersecurity IR
- Maintain a 24/7 incident reporting channel
- File to AIID and applicable regulators within mandated windows
- Conduct quarterly tabletop exercises
- Train deployers on incident recognition
FAQs
Q: What is a serious incident under the EU AI Act?
An incident that directly or indirectly leads to death or serious harm to a person's health, serious harm to property or the environment, serious and irreversible disruption of critical infrastructure, or an infringement of fundamental-rights obligations under Union law (Art. 3(49)).
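That definition lends itself to a simple triage check. A rough helper follows; the criterion names are our simplification of the statutory text, not legal advice:

```python
# Harm types that trigger the "serious incident" threshold, paraphrasing
# the Art. 3(49) definition. Labels are our own shorthand.
SERIOUS_CRITERIA = {
    "death",
    "serious_health_harm",
    "serious_property_damage",
    "serious_environmental_damage",
    "irreversible_critical_infrastructure_disruption",
    "fundamental_rights_infringement",
}

def is_serious_incident(observed_harms: set[str]) -> bool:
    """True if any observed harm meets a serious-incident criterion."""
    return bool(observed_harms & SERIOUS_CRITERIA)

print(is_serious_incident({"reputational_damage", "death"}))  # True
```

Legal review should confirm any positive hit; a helper like this only ensures borderline cases get escalated rather than missed.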
Q: Is AIID peer-reviewed?
Incidents are community-submitted and screened by editors at the Responsible AI Collaborative; they are not peer-reviewed in the academic sense.
Q: Is OECD AIM government data?
Maintained by the OECD AI Policy Observatory with government and multistakeholder inputs.
Q: Can an incident report be confidential?
Regulator reports can be confidential; AIID public entries can be submitted anonymously.
Q: How do incidents map to risk tiers?
Use incident severity (fatality, financial loss, privacy breach) to inform AI RMF MEASURE and MANAGE functions.
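One way to operationalise that mapping is a severity-to-tier function whose output feeds MEASURE metrics and MANAGE prioritisation. The thresholds below are illustrative assumptions, not values from the AI RMF:

```python
# Hypothetical severity-to-tier mapping for incident-informed risk
# assessment. Thresholds are placeholders to calibrate per organisation.
def risk_tier(fatalities: int, financial_loss_usd: float,
              records_breached: int) -> str:
    if fatalities > 0:
        return "critical"
    if financial_loss_usd >= 1_000_000 or records_breached >= 100_000:
        return "high"
    if financial_loss_usd > 0 or records_breached > 0:
        return "medium"
    return "low"

# An Arup-scale financial loss lands in the "high" tier.
print(risk_tier(fatalities=0, financial_loss_usd=25_000_000,
                records_breached=0))  # high
```

Recording the tier alongside each logged incident gives the MANAGE function a ready-made prioritisation queue.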
Q: Does the FTC require incident reporting?
No dedicated AI incident rule, but Section 5 enforcement often follows publicised incidents.
Q: How often should IRPs be updated?
At least annually, and after every incident.
Conclusion
Incident data is the cheapest risk-management input available. Read it, learn from it, and contribute.
Wire incident response into your AI stack with Misar AI's IRP template.