Article 6 + Annex III

What counts as a high-risk AI system

Under Article 6 of the EU AI Act, an AI system is high-risk if either (a) it is a safety component of, or is itself, a product covered by the Union harmonisation legislation listed in Annex I and is required to undergo third-party conformity assessment under that legislation, or (b) it falls within Annex III, eight categories ranging from biometrics to the administration of justice. Article 6(3) allows a narrow exemption from Annex III classification where the system poses no significant risk of harm. The requirements for high-risk systems (Articles 8–15) and the corresponding provider obligations apply to Annex III systems from 2 August 2026; systems caught via the Annex I route follow on 2 August 2027.

Source: Regulation (EU) 2024/1689 Article 6 and Annex III.
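Read as a decision procedure, the classification logic is short. The sketch below (Python, with hypothetical field names, and emphatically not legal advice) captures the two pathways and where the Article 6(3) carve-out plugs in:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical scoping facts about one AI system."""
    is_annex_i_safety_component: bool   # safety component of, or itself, an Annex I product
    needs_third_party_conformity: bool  # Annex I legislation requires third-party assessment
    annex_iii_category: str | None      # e.g. "biometrics"; None if not listed
    qualifies_for_6_3_exemption: bool   # see the Article 6(3) section below

def is_high_risk(system: AISystem) -> bool:
    # Pathway (a): Article 6(1), the Annex I product route
    if system.is_annex_i_safety_component and system.needs_third_party_conformity:
        return True
    # Pathway (b): Article 6(2), the Annex III list, minus the 6(3) carve-out
    if system.annex_iii_category is not None:
        return not system.qualifies_for_6_3_exemption
    return False
```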

Annex III categories

Eight domains where AI is presumptively high-risk.

1. Biometrics

AI systems used for remote biometric identification of natural persons (excluding verification systems whose sole purpose is to confirm that a person is who they claim to be), biometric categorisation according to sensitive or protected attributes, and emotion recognition, in each case only to the extent the use is permitted under Article 5.

Examples: Face-recognition access control at scale, biometric categorisation by inferred race/political opinion/sexual orientation, emotion-recognition pre-screening interviews (subject to Article 5(1)(f) prohibition in workplaces and education).

2. Critical infrastructure

AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or supply of water, gas, heating, or electricity.

Examples: Grid stability optimisers, traffic-light control AI, autonomous water-treatment process control.

3. Education and vocational training

AI systems used to determine access or admission to educational institutions, evaluate learning outcomes, assess the appropriate level of education for an individual, and detect prohibited behaviour during tests.

Examples: University-admissions ranking systems, automated essay scoring at scale, exam-cheating detection AI.

4. Employment, workers' management, and access to self-employment

AI systems used for recruitment or selection of natural persons (job advertising, screening, evaluating candidates), and AI used to make decisions affecting work-related contractual relationships, promotion, termination, task allocation, or performance monitoring.

Examples: CV-screening AI, automated promotion-recommendation, productivity-scoring AI for performance reviews.

5. Access to and enjoyment of essential private services and essential public services and benefits

AI systems used by public authorities to assess eligibility for public assistance benefits and services; AI for credit-scoring or creditworthiness evaluation of natural persons (excluding for purposes of detecting financial fraud); AI for risk assessment and pricing in life and health insurance; AI used to dispatch or prioritise emergency response services.

Examples: Welfare-benefits eligibility models, retail credit-scoring, automated triage in emergency call centres.

6. Law enforcement

AI systems used by or on behalf of law enforcement authorities to assess the risk of a person becoming a victim of crime, as polygraphs and similar tools, to evaluate the reliability of evidence, to assess the risk of offending or re-offending, and for profiling in the course of detecting, investigating or prosecuting criminal offences, all subject to the Article 5 prohibitions on certain real-time biometric and predictive uses.

Examples: Predictive-policing risk scores (where not prohibited by Art. 5), evidence-evaluation AI in criminal cases.

7. Migration, asylum and border control

AI systems used by competent authorities as polygraphs and similar tools, for risk assessments concerning persons intending to cross or who have crossed external borders, for the examination of applications for asylum, visas and residence permits, and for detecting, recognising or identifying persons in the context of migration (excluding verification of travel documents).

Examples: Visa-application risk scoring, automated border-crossing risk assessment.

8. Administration of justice and democratic processes

AI systems intended to assist judicial authorities in researching and interpreting facts and in applying the law to a concrete set of facts, and AI used to influence the outcome of an election or referendum, or the voting behaviour of natural persons.

Examples: Judicial-decision-support AI, election-influence content generation systems (note: Art. 50 transparency obligations layer on top).
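For tooling purposes, the eight areas form a small, stable enumeration. A minimal Python rendering, with labels paraphrased (the Annex itself remains the source of truth):

```python
from enum import Enum

class AnnexIIICategory(Enum):
    """The eight Annex III areas, paraphrased for use in classification tooling."""
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION_AND_VOCATIONAL_TRAINING = 3
    EMPLOYMENT_AND_WORKERS_MANAGEMENT = 4
    ESSENTIAL_SERVICES_AND_BENEFITS = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_ASYLUM_BORDER_CONTROL = 7
    JUSTICE_AND_DEMOCRATIC_PROCESSES = 8
```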

Article 6(3)

The exemption pathway

An Annex III system is NOT considered high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making.

Specifically, an Annex III system may be excluded if any of the following four conditions is met:

  • 6(3)(a): The AI system is intended to perform a narrow procedural task.
  • 6(3)(b): The AI system is intended to improve the result of a previously completed human activity.
  • 6(3)(c): The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review.
  • 6(3)(d): The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

Hard exception: an Annex III system that performs profiling of natural persons is ALWAYS considered high-risk, regardless of the conditions above (Art. 6(3) third subparagraph).

A provider claiming the Article 6(3) exemption must document the assessment before placing the system on the market or putting it into service (Art. 6(4)) and register the system in the EU database under Article 49(2); national competent authorities can request the documentation.
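As a sanity check, the exemption test reduces to a small predicate. The sketch below is illustrative only (hypothetical parameter names; the legal assessment still has to be documented under Article 6(4)):

```python
def qualifies_for_6_3_exemption(
    performs_profiling: bool,
    narrow_procedural_task: bool,                    # Art. 6(3)(a)
    improves_completed_human_work: bool,             # Art. 6(3)(b)
    detects_patterns_without_replacing_human: bool,  # Art. 6(3)(c)
    preparatory_task_only: bool,                     # Art. 6(3)(d)
) -> bool:
    """True if an Annex III system may be treated as not high-risk (sketch only)."""
    # Hard exception: profiling of natural persons is always high-risk
    if performs_profiling:
        return False
    # Any single condition (a)-(d) is enough to claim the exemption
    return any([
        narrow_procedural_task,
        improves_completed_human_work,
        detects_patterns_without_replacing_human,
        preparatory_task_only,
    ])
```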

Practical assessment

RiskForge includes Annex III pattern matching — pre-populated risk items for known high-risk scenarios (credit scoring, hiring, facial recognition, medical diagnosis) — and produces an audit-trailed Risk Management File aligned with Annex IV documentation requirements.

```bash
pip install riskforge
```
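Under the hood, this kind of Annex III pattern matching can be as simple as mapping use-case keywords to categories. The snippet below is a hand-rolled illustration of the idea, not the RiskForge API (categories and keywords are hypothetical):

```python
# Hand-rolled illustration of Annex III pattern matching; NOT the RiskForge API.
ANNEX_III_PATTERNS = {
    "biometrics": ["face recognition", "emotion recognition", "biometric categorisation"],
    "employment": ["cv screening", "candidate ranking", "performance monitoring"],
    "essential services": ["credit scoring", "creditworthiness", "benefits eligibility"],
}

def match_annex_iii(use_case: str) -> list[str]:
    """Return the Annex III categories whose keywords appear in the use-case text."""
    text = use_case.lower()
    return [cat for cat, kws in ANNEX_III_PATTERNS.items()
            if any(kw in text for kw in kws)]

print(match_annex_iii("CV screening for graduate hiring"))  # -> ['employment']
```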

When "is this high-risk?" becomes "what do we do about it?"

Tools surface classification. Programmes resolve it.

Annex III classification is the first half. The second half is programme design, the target operating model, and the board narrative: the ground covered by AskAjay, the advisory arm of AI Exponent LLC.

Explore the MVG framework at AskAjay.ai →