Annex III lists the AI systems classified as high-risk under the EU AI Act. Learn which use cases trigger compliance obligations before enforcement begins on August 2, 2026.
The EU AI Act divides AI systems into four risk categories: unacceptable, high, limited, and minimal. Only high-risk AI systems face the full weight of compliance obligations — Articles 9 through 15, technical documentation, conformity assessment, CE marking, and post-market monitoring.
For most standalone AI systems, whether they are high-risk is determined by Annex III, a legally binding list of use cases. If your system falls into any Annex III category, you must comply with all high-risk obligations. If it does not, you may face only limited transparency requirements (Article 50) or no obligations at all.
Enforcement begins August 2, 2026. Fines for deploying a non-compliant high-risk system reach €15 million or 3% of global annual turnover, whichever is higher (prohibited practices carry separate fines of up to €35 million or 7%). This article provides the complete Annex III list, explains what each category covers, and shows how to determine whether your system is high-risk.
How Annex III Works
Annex III is not a static list. The European Commission can update it via delegated acts to add new high-risk categories as AI technology evolves. As of May 2026, the list covers eight major domains.
A system is high-risk if it meets either of these conditions (Article 6):
- It falls into an Annex III category (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice), OR
- It is a safety component of a product covered by EU harmonized legislation (e.g., medical devices, machinery, toys), or is itself such a product, and requires third-party conformity assessment under that legislation.
If your system meets neither condition, it is not high-risk under the EU AI Act, even if it poses significant ethical or social risks. The Act is use-case-specific, not capability-specific.
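To make the two-route test concrete, here is a minimal Python sketch of the classification logic described above. Everything in it (the `AISystem` dataclass, the field names, the `is_high_risk` helper) is a hypothetical illustration for this article, not an official schema or a substitute for legal analysis.

```python
from dataclasses import dataclass, field
from enum import Enum

class AnnexIIICategory(Enum):
    """The eight Annex III domains (points 1-8)."""
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION = 7
    JUSTICE = 8

@dataclass
class AISystem:
    # Annex III categories the system's intended purpose falls into (may be empty)
    annex_iii_categories: set[AnnexIIICategory] = field(default_factory=set)
    # Article 6(1) product-safety route
    is_safety_component: bool = False       # safety component of a regulated product?
    is_regulated_product: bool = False      # itself a product under harmonized legislation?
    needs_third_party_assessment: bool = False

def is_high_risk(system: AISystem) -> bool:
    """Two independent routes to high-risk status under Article 6.

    Simplified: the Article 6(3) carve-outs for systems performing
    narrow procedural tasks are not modeled here.
    """
    annex_iii_route = bool(system.annex_iii_categories)           # Article 6(2)
    product_route = (
        (system.is_safety_component or system.is_regulated_product)
        and system.needs_third_party_assessment                   # Article 6(1)
    )
    return annex_iii_route or product_route

# Example: a resume-ranking tool falls under Annex III.4 (employment)
screener = AISystem(annex_iii_categories={AnnexIIICategory.EMPLOYMENT})
assert is_high_risk(screener)
```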
The Complete Annex III List
Here are all eight high-risk categories, with explanations and examples.
1. Biometric Identification and Categorization (Annex III.1)
AI systems used for biometric identification or biometric categorization of natural persons.
| Subcategory | Description | Examples |
|---|---|---|
| Remote biometric identification | Real-time or post-use identification of individuals in public spaces using biometric data (face, gait, voice) | Facial recognition at airports, police surveillance cameras, stadium entry systems |
| Biometric categorization | Classifying individuals based on biometric data according to sensitive or protected attributes | Categorization by inferred demographic traits, emotion recognition systems (outside prohibited contexts) |
Key point: Not all biometric systems are high-risk. Biometric authentication (unlocking your phone with Face ID) is not covered by Annex III.1 because it verifies a claimed identity rather than identifying or categorizing individuals within a broader population. Note also that some biometric practices sit above high-risk: categorization that infers race, political opinions, sexual orientation, or religious beliefs, and emotion recognition in workplaces or schools, are prohibited outright under Article 5.
2. Critical Infrastructure (Annex III.2)
AI systems used as safety components in the management and operation of critical infrastructure.
| Subcategory | Description | Examples |
|---|---|---|
| Road traffic | AI controlling traffic signals, autonomous vehicle routing, collision avoidance | Traffic management systems, autonomous vehicle control software |
| Water, gas, heating, electricity supply | AI managing supply, demand, or safety in utility networks | Smart grid optimization, predictive maintenance for power plants, water treatment control systems |
Key point: The system must be a safety component. An AI that optimizes energy costs is not high-risk; an AI that prevents blackouts or pipeline failures is.
3. Education and Vocational Training (Annex III.3)
AI systems used to determine access to educational institutions or assess students.
| Subcategory | Description | Examples |
|---|---|---|
| Admission and enrollment | AI that decides who gets accepted to schools, universities, or training programs | University admissions algorithms, scholarship award systems |
| Assessment and evaluation | AI that grades exams, evaluates student performance, or influences academic outcomes | Automated essay grading, plagiarism detection that affects grades, AI proctoring systems that flag students for cheating |
Key point: AI tutoring tools that provide feedback but do not affect grades or admissions are not high-risk. The trigger is access or evaluation, not assistance.
4. Employment, Worker Management, and Self-Employment (Annex III.4)
AI systems used in recruitment, hiring, promotion, termination, task allocation, or monitoring of workers.
| Subcategory | Description | Examples |
|---|---|---|
| Recruitment and hiring | AI that screens résumés, ranks candidates, or recommends who to interview or hire | LinkedIn Recruiter AI, HireVue video interview analysis, résumé parsing and ranking tools |
| Promotion and termination | AI that decides or influences who gets promoted, demoted, or fired | Performance review algorithms, layoff selection models |
| Task allocation and monitoring | AI that assigns work, monitors productivity, or evaluates worker performance | Warehouse task assignment (Amazon-style), driver monitoring (Uber/Lyft ratings), call center performance scoring |
Key point: This is the broadest high-risk category. If your AI touches hiring, firing, or worker evaluation in any way, it is almost certainly high-risk.
5. Essential Private and Public Services (Annex III.5)
AI systems used to evaluate eligibility for or grant access to essential services and benefits.
| Subcategory | Description | Examples |
|---|---|---|
| Creditworthiness and credit scoring | AI that assesses whether someone qualifies for a loan, credit card, or mortgage | Credit scoring models (FICO-style), loan approval algorithms, buy-now-pay-later eligibility checks |
| Emergency services dispatch | AI that prioritizes or routes emergency calls (police, fire, ambulance) | Emergency call (112) triage systems, ambulance dispatch optimization |
| Public benefits eligibility | AI that determines who qualifies for welfare, unemployment, housing assistance, or healthcare | Fraud detection in welfare systems, eligibility screening for public housing |
Key point: The system must affect access. An AI that helps you compare loan offers is not high-risk; an AI that decides whether you get approved is.
6. Law Enforcement (Annex III.6)
AI systems used by or on behalf of law enforcement authorities.
| Subcategory | Description | Examples |
|---|---|---|
| Risk assessment for offending | AI that predicts the likelihood someone will commit a crime | Recidivism prediction (COMPAS-style), predictive policing heat maps |
| Polygraph and lie detection | AI that assesses the veracity of statements during investigations | AI-powered lie detectors, voice stress analysis |
| Evidence evaluation | AI that analyzes evidence to support criminal investigations | DNA match probability, forensic image analysis, gunshot detection (ShotSpotter) |
| Crime analytics | AI that identifies patterns or predicts where crimes will occur | Predictive policing software, gang affiliation detection, criminal network analysis |
Key point: This category applies only to law enforcement use. The same AI used by a private company for fraud detection is not high-risk under Annex III.6 (it may be high-risk under Annex III.5 instead).
7. Migration, Asylum, and Border Control (Annex III.7)
AI systems used to manage migration, asylum applications, or border security.
| Subcategory | Description | Examples |
|---|---|---|
| Visa and asylum applications | AI that assesses eligibility for visas, asylum, or residence permits | Visa risk assessment tools, asylum claim credibility scoring |
| Border control | AI that detects illegal border crossings or verifies traveler identity | Automated passport control (e-gates), lie detection at borders, risk profiling for customs |
| Complaint examination | AI that evaluates complaints related to migration or asylum decisions | Automated review of asylum appeal documents |
Key point: This category is narrow and applies primarily to government agencies managing immigration.
8. Administration of Justice and Democratic Processes (Annex III.8)
AI systems used to assist judicial authorities or influence democratic processes.
| Subcategory | Description | Examples |
|---|---|---|
| Legal research and case law | AI that assists judges or lawyers in researching legal precedents or drafting decisions | Legal research tools (Westlaw AI, ROSS Intelligence), AI-assisted sentencing recommendations |
| Democratic processes | AI that influences election or referendum outcomes or voting behavior | Voter targeting and micro-profiling algorithms, AI-generated political ads |
Key point: AI used by judges to assist in sentencing or case research is high-risk. AI used by lawyers for the same purpose is generally not high-risk (unless it directly influences judicial decisions).
What If Your System Spans Multiple Categories?
If your AI system falls into more than one Annex III category, you must comply with all applicable obligations. For example:
- An AI system that screens job applicants (Annex III.4) and uses facial recognition to verify identity (Annex III.1) is high-risk under both categories.
- An AI system that assesses creditworthiness (Annex III.5) and predicts fraud risk for law enforcement (Annex III.6) is high-risk under both categories.
You cannot "choose" the easier category. Compliance obligations stack.
What If Your System Is Not on the List?
If your AI system does not fall into any Annex III category, it is not high-risk under the EU AI Act. You may still face limited obligations:
- Article 50 (Transparency): If your system interacts with humans (chatbots, deepfakes, emotion recognition), you must disclose that users are interacting with AI.
- Articles 53–55 (General-Purpose AI): If you provide a general-purpose AI model (GPT, Claude, Mistral), you face separate obligations under Chapter V.
Most AI systems — recommendation engines, content moderation, marketing optimization, internal analytics — are not high-risk and face minimal or no EU AI Act obligations.
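To summarize how these tiers map to duties, the sketch below encodes the mapping as a single illustrative function. The parameter names are invented for this article, and the mapping deliberately ignores prohibited practices (Article 5) and deployer-specific duties.

```python
def applicable_obligations(
    high_risk: bool,
    interacts_with_humans: bool,
    gpai_provider: bool,
) -> list[str]:
    """Illustrative, simplified mapping from classification to obligations."""
    obligations: list[str] = []
    if high_risk:
        obligations += [
            "Articles 9-15 (risk management, data governance, documentation, "
            "logging, transparency, human oversight, accuracy/robustness)",
            "Conformity assessment and CE marking",
            "Post-market monitoring",
        ]
    if interacts_with_humans:
        obligations.append("Article 50 transparency (disclose AI interaction)")
    if gpai_provider:
        obligations.append("Articles 53-55 (general-purpose AI model duties)")
    return obligations or ["No EU AI Act obligations (minimal risk)"]

# Example: a recommendation engine with a chat interface
print(applicable_obligations(False, True, False))
# -> ['Article 50 transparency (disclose AI interaction)']
```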
Common Misclassifications
Vigilia's audit engine detects several recurring classification errors:
- Overclaiming high-risk status: Providers assume their system is high-risk because it uses sensitive data or makes important decisions. The EU AI Act is use-case-specific, not capability-based: if your system is not in Annex III and not on the Article 6(1) product-safety route, it is not high-risk.
- Underclaiming high-risk status: Providers assume their system is not high-risk because it "only assists" humans. If the system influences hiring, credit access, or law enforcement decisions, it is high-risk even if a human makes the final call.
- Ignoring edge cases: A system used for internal HR analytics is not high-risk. The same system used to rank candidates for promotion is high-risk (Annex III.4).
Vigilia's risk classification engine checks your system's intended purpose, use case, and deployment context to determine whether Annex III applies.
How to Determine If Your System Is High-Risk
Follow this decision tree:
1. Does your system fall into any Annex III category?
   - No → Skip to step 3; the product-safety route may still apply.
   - Yes → Continue to step 2.
2. Is your system used for the specific purpose listed in Annex III?
   - Example: Your system uses facial recognition, but only to unlock a phone (authentication, not identification). → Not high-risk under Annex III; continue to step 3.
   - Example: Your system uses facial recognition to identify individuals in a crowd. → High-risk (Annex III.1).
3. Is your system a safety component of a regulated product, or is it itself a regulated product?
   - Example: Your AI controls a medical device. → High-risk under Article 6(1) (EU Medical Device Regulation).
   - Example: Your AI optimizes ad targeting. → Not high-risk (not a safety component, not in Annex III).

If you answered "yes" at step 2 or step 3, your system is high-risk and must comply with Articles 9–15, technical documentation, conformity assessment, and post-market monitoring. If you answered "no" throughout, your system is not high-risk; check Article 50 for transparency obligations.
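The same decision tree can be expressed as a small function. This is a sketch under the same assumptions as the steps above, with hypothetical parameter names; it is not a legal test.

```python
def classify(
    in_annex_iii_category: bool,
    matches_listed_purpose: bool,
    product_safety_route: bool,  # Article 6(1): safety component / regulated product
) -> str:
    # Steps 1-2: the Annex III route requires both the category and the
    # specific listed purpose (identification, not mere authentication).
    if in_annex_iii_category and matches_listed_purpose:
        return "High-risk (Annex III route, Article 6(2))"
    # Step 3: the product-safety route applies independently of Annex III.
    if product_safety_route:
        return "High-risk (product-safety route, Article 6(1))"
    return "Not high-risk -- check Article 50 transparency duties"

# A phone-unlock face verifier: biometric domain, but not a listed purpose
print(classify(True, False, False))  # Not high-risk
# A crowd facial-identification system
print(classify(True, True, False))   # High-risk (Annex III)
```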
Vigilia's Risk Classification Engine
Vigilia's €499 compliance audit includes a risk classification analysis. It checks:
- Whether your system falls into any Annex III category
- Whether your intended purpose triggers high-risk obligations
- Whether you are overclaiming or underclaiming high-risk status
- What compliance obligations apply (Articles 9–15, Article 50, Articles 53–55)
The report provides a clear high-risk / not high-risk determination with legal justification, so you know exactly what obligations apply.
Generate your risk classification report now: www.aivigilia.com
Timeline: When Annex III Becomes Enforceable
| Date | Milestone |
|---|---|
| August 2, 2026 | Annex III high-risk obligations enforceable |
| August 2, 2027 | High-risk obligations for AI embedded in regulated products (Article 6(1)) enforceable |
You have 83 days until high-risk obligations become legally binding. Penalties apply immediately after that date.
Final Checklist: Is Your System High-Risk?
Use this checklist to assess your system:
- [ ] My system falls into at least one Annex III category (biometrics, infrastructure, education, employment, essential services, law enforcement, migration, justice)
- [ ] My system is used for the specific purpose listed in that category (not a tangential use case)
- [ ] My system influences access, evaluation, or safety in that domain (not just assistance or analytics)
- [ ] I have documented the risk classification with legal justification
- [ ] If high-risk, I have begun implementing Articles 9–15 obligations (risk management, data governance, transparency, human oversight, accuracy, cybersecurity)
If you checked the first three boxes, your system is high-risk and you must comply with all obligations. If you checked fewer than three, your system is likely not high-risk, but you should verify with a compliance audit.
Vigilia can generate a full risk classification and gap analysis in 20 minutes.
Try the free EU AI Act checker or generate your full compliance report: www.aivigilia.com
This article is for informational purposes only and does not constitute legal advice. Consult a qualified EU AI Act attorney for guidance specific to your situation.
Originally published at Vigilia.