AI-powered CV screening and resume filtering is classified as high-risk under the EU AI Act. Here's the legal basis, your obligations, and what to do next.
Yes. AI-powered CV screening is classified as high-risk under the EU AI Act (Article 6(2), Annex III Point 4(a)). This applies whether your system fully automates candidate filtering or assists human recruiters by ranking, scoring, or shortlisting applicants.
If you deploy AI to process job applications in the EU market, you have specific legal obligations that take effect on 2 August 2026.
The EU AI Act designates certain AI use cases as high-risk through a two-step classification. Annex III lists eight categories of high-risk AI systems. Point 4(a) covers:
AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to screen or filter applications, or to evaluate candidates.
Your AI CV screener falls squarely within this definition. Article 6(2) confirms that AI systems referred to in Annex III are considered high-risk.
The scope is broad. It doesn't matter whether your system fully automates rejections or merely ranks, scores, or shortlists candidates for a human recruiter, and it doesn't matter whether you built the model yourself or call a third-party API. If AI is involved in any stage of filtering or evaluating job applicants, Annex III Point 4(a) applies.
As a deployer of a high-risk AI system, Article 26 imposes specific obligations. Here's what you need to have in place before August 2026:
Human oversight. You must ensure that a qualified person supervises the AI system's outputs. In practice, this means a human recruiter reviews the AI's recommendations before candidates are rejected. Fully automated rejection without human review creates compliance risk.
Transparency to candidates. You must inform applicants that AI is being used in the recruitment process. This isn't optional: Article 26(11) requires deployers to inform natural persons that they are subject to the use of a high-risk AI system when it makes, or helps make, decisions about them.
Fundamental Rights Impact Assessment (FRIA). Under Article 27, deployers that are bodies governed by public law, or private entities providing public services, must conduct an impact assessment before putting a high-risk system into use, examining potential effects on fundamental rights, including non-discrimination, privacy, and equal treatment. Many private employers fall outside this obligation, but check whether it applies to your organisation.
Technical documentation access. You should request the provider's technical documentation and instructions for use. As a deployer, you're responsible for using the system in accordance with these instructions.
Incident monitoring. If the system produces outcomes that suggest risks to health, safety, or fundamental rights — for example, systematic bias against certain demographic groups — you must report this to the provider and relevant authorities.
Using GPT-4, Claude, or another LLM for CV screening doesn't change the classification. Your system is classified based on its intended purpose (recruitment screening), not the underlying technology. Whether you built a custom model or prompt an API, the Annex III Point 4(a) classification applies to your deployment.
The Article 6(3) exemption probably doesn't apply here. Article 6(3) allows certain Annex III systems to be excluded from high-risk classification if they don't pose a "significant risk of harm." Employment decisions directly affect people's livelihoods — arguing that your CV screener poses no significant risk is a difficult position to defend.
The Digital Omnibus proposal could extend the compliance deadline. The European Commission has proposed pushing the Annex III enforcement date to 2 December 2027. As of March 2026, this hasn't been adopted. Plan for August 2026 — if you get more time, you'll be ahead of competitors who waited.
If you only use AI internally (screening your own company's applicants rather than selling a screening product), you're a deployer under Article 26, not a provider. Your obligations are lighter than the provider's, but they're still legally binding.
Not sure about your specific system? Classify it for free in under 10 minutes.
Classify Now — Free

If you're the company that builds and sells the AI CV screening tool (not just deploying one), you're classified as a provider of a high-risk AI system. Your obligations under Articles 8-15 are significantly more extensive: a risk management system (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency and instructions for use (Article 13), human oversight by design (Article 14), and accuracy, robustness, and cybersecurity (Article 15).
The compliance cost and effort for providers is substantially higher than for deployers.
Every AI system has a risk classification under the EU AI Act. Find yours in under 10 minutes.
Classify Your System