The EU AI Act August 2026 Deadline Is 4 Months Away — Here's Your Action Plan
You have four months. On August 2, 2026, the EU AI Act’s high-risk obligations for Annex III AI systems become fully enforceable. After that date, every company deploying AI in employment, credit scoring, education, law enforcement, biometrics, or critical infrastructure must be fully compliant or face fines of up to EUR 15 million or 3% of global annual turnover for high-risk violations (and up to EUR 35 million or 7%, whichever is higher, for prohibited AI practices).
This is not a drill. As of April 5, 2026, an estimated 78% of EU enterprises were still non-compliant with the high-risk provisions. If you are reading this and have not started your compliance process, the window is closing fast — but it is not closed yet.
Here is exactly what you need to do, month by month, between now and August.

```mermaid
gantt
    title EU AI Act Compliance Countdown
    dateFormat YYYY-MM
    axisFormat %b %Y
    section Deadlines
    Prohibited AI banned     :done, 2025-02, 30d
    GPAI transparency rules  :done, 2025-08, 30d
    HIGH-RISK ENFORCEMENT    :crit, 2026-08, 30d
    Full enforcement         :2027-08, 30d
    section Your Actions
    Classify AI systems      :active, 2026-04, 2026-05
    Risk assessments         :2026-05, 2026-06
    Technical documentation  :2026-06, 2026-07
    Conformity assessment    :2026-07, 2026-08
```
Is Your AI High-Risk? A Self-Assessment
Before anything else, you need to determine whether your AI systems fall under Annex III. Use this table to check:
| High-Risk Category | Examples | Annex III Reference |
|---|---|---|
| Employment & worker management | CV screening tools, automated interview scoring, workforce monitoring, promotion algorithms | Annex III, 4(a)-(b) |
| Credit & financial assessment | Credit scoring models, loan approval automation, insurance risk profiling | Annex III, 5(b) |
| Education & vocational training | Automated grading, student admission algorithms, learning path assignment | Annex III, 3(a)-(b) |
| Law enforcement | Predictive policing, evidence analysis, suspect profiling, recidivism risk scoring | Annex III, 6(a)-(g) |
| Biometric identification | Facial recognition for access control, emotion detection in interviews, remote biometric ID | Annex III, 1(a)-(b) |
| Critical infrastructure | AI managing energy grids, water treatment, traffic control, telecom networks | Annex III, 2(a)-(b) |
| Migration & border control | Visa application screening, border surveillance, asylum claim processing | Annex III, 7(a)-(d) |
| Justice & democratic processes | Sentencing assistance tools, AI used in elections or referendum processes | Annex III, 8(a)-(b) |
If any of your AI systems touch these categories, you are subject to the full high-risk compliance framework. If you are unsure, that uncertainty itself is a compliance risk — get an assessment done now.
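As a rough first pass, the table above can be turned into a keyword screening script for your AI inventory. The sketch below is illustrative only: the category keywords and the `screen_system` helper are our own assumptions, and a keyword match is a prompt for legal review, not a classification decision.

```python
# First-pass Annex III screening sketch. The keyword lists are
# illustrative assumptions -- a real classification needs legal review.
ANNEX_III_KEYWORDS = {
    "Annex III, 4 (employment)": ["cv screening", "interview scoring",
                                  "workforce monitoring", "promotion"],
    "Annex III, 5 (credit)":     ["credit scoring", "loan approval",
                                  "insurance risk"],
    "Annex III, 3 (education)":  ["automated grading", "admission",
                                  "learning path"],
    "Annex III, 1 (biometrics)": ["facial recognition", "emotion detection",
                                  "biometric id"],
}

def screen_system(description: str) -> list[str]:
    """Return the Annex III references whose keywords match the description."""
    text = description.lower()
    return [ref for ref, keywords in ANNEX_III_KEYWORDS.items()
            if any(k in text for k in keywords)]
```

Run every system description through the screen; anything that matches goes straight to a proper legal assessment, and anything that does not match still needs a human sanity check.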
For a deeper look at the risk classification system, read our AI Risk Classification Guide.
What Compliance Actually Requires
The EU AI Act does not just ask you to be careful. It mandates specific, documented, auditable actions:
- Risk management system — a living process to identify, evaluate, and mitigate risks throughout the AI system’s lifecycle.
- Data governance — documented data quality standards, bias testing, and training data provenance.
- Technical documentation — detailed descriptions of the system’s purpose, architecture, training methodology, performance metrics, and known limitations.
- Logging requirements (Article 12) — AI agents and automated systems must maintain traceable logs of their decision-making processes. As of the April 16, 2026 guidance update, this explicitly includes agentic AI systems that take autonomous actions.
- Transparency obligations (Article 50) — chatbots must disclose their AI nature to users. AI-generated content including deepfakes must carry watermarks or machine-readable metadata.
- Human oversight — mechanisms that allow a human operator to understand, monitor, and override AI decisions.
- Conformity assessment — a formal evaluation that your system meets all requirements, resulting in CE marking.
- EU database registration — high-risk AI systems must be registered in the EU’s public database before deployment.
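To make the Article 12 logging requirement concrete, here is a minimal sketch of a single traceable decision record. The field names (`system_id`, `input_ref`, `human_operator`) are our own assumptions: the Act requires traceability of automated decisions, not any particular schema.

```python
import datetime
import json

def make_decision_record(system_id, input_ref, output, operator=None):
    """Build one traceable log entry for an automated decision.

    Field names are illustrative -- Article 12 mandates traceability,
    not a specific record format."""
    return {
        "system_id": system_id,       # which AI system produced the decision
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input_ref": input_ref,       # reference to the input, not the data itself
        "output": output,             # the decision or score produced
        "human_operator": operator,   # who (if anyone) reviewed the decision
    }

record = make_decision_record("cv-screener-v2", "application-8841", "shortlisted")
print(json.dumps(record, indent=2))
```

Storing a reference to the input rather than the raw data keeps the log useful for audits without turning it into a second copy of your personal-data store.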
For a complete compliance walkthrough, see our EU AI Act Compliance Guide.
Your Month-by-Month Countdown Checklist
April 2026 — Audit and Classify
- Inventory all AI systems in your organization, including third-party tools and embedded AI features in SaaS products
- Classify each system against the Annex III categories above
- Identify your role for each system — are you a provider (developer) or deployer (user)?
- Assign a compliance lead — someone must own this process internally
- Review prohibited practices — confirm none of your systems fall under the 8 banned categories (these have been enforceable since February 2025)
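The April inventory step is easier to keep honest if each system is captured as a structured record rather than a spreadsheet row. A minimal sketch, assuming a simple provider/deployer role split; all names here are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"   # you developed the system
    DEPLOYER = "deployer"   # you use someone else's system

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    role: Role
    annex_iii_refs: list = field(default_factory=list)  # matching Annex III entries
    compliance_lead: str = ""                           # who owns this internally

    @property
    def high_risk(self) -> bool:
        return bool(self.annex_iii_refs)

# Illustrative inventory entries:
inventory = [
    AISystemRecord("CV screener", "in-house", Role.PROVIDER,
                   ["Annex III, 4(a)"], "compliance@example.com"),
    AISystemRecord("Support chatbot", "SaaS vendor", Role.DEPLOYER),
]
high_risk = [s.name for s in inventory if s.high_risk]
```

The role field matters: providers carry the full documentation and conformity burden, while deployers mainly verify, oversee, and log.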
May 2026 — Document and Assess
- Draft technical documentation for every high-risk system — architecture, training data, performance benchmarks, known limitations
- Conduct bias and fairness testing on training datasets and model outputs
- Implement logging that meets Article 12 requirements — traceable, timestamped, tamper-resistant
- Map your data governance — where does training data come from? How is it validated? How is it stored?
- Begin conformity assessment preparation — self-assessment for most categories, third-party audit for biometric systems
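The "traceable, timestamped, tamper-resistant" logging item above can be approximated with a hash chain, where each entry commits to the one before it. This is a sketch of the idea, not a full Article 12 implementation: it has no persistence, no signatures, and no access controls.

```python
import hashlib
import json
import time

class ChainedLog:
    """Append-only log where each entry hashes its predecessor,
    so any later tampering breaks verification. A sketch only."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> dict:
        body = {"ts": time.time(), "event": event, "prev_hash": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": self._last_hash}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev_hash")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In production you would also ship these entries to write-once storage or an external log service, since an attacker who can rewrite the whole chain can re-hash it too.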
June 2026 — Implement and Test
- Deploy transparency mechanisms — chatbot disclosure, deepfake watermarking, AI-generated content labeling
- Build human oversight controls — manual override capabilities, monitoring dashboards, escalation procedures
- Run the conformity assessment — complete the formal evaluation process
- Prepare CE marking documentation
- Train your staff — operators of high-risk AI must understand the system’s capabilities, limitations, and override procedures
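The chatbot-disclosure item in the checklist above can be as simple as wrapping your bot so its first reply always carries the notice. A minimal sketch; the wrapper, the `reply_fn` stand-in, and the notice wording are all illustrative, not legal text.

```python
AI_DISCLOSURE = ("You are chatting with an automated AI assistant. "
                 "A human agent is available on request.")

def start_session(reply_fn):
    """Wrap a chatbot reply function so the first response in a session
    always includes the AI disclosure. `reply_fn` stands in for your
    real bot; the notice wording is illustrative."""
    first = {"sent": False}

    def wrapped(message: str) -> str:
        reply = reply_fn(message)
        if not first["sent"]:
            first["sent"] = True
            return f"{AI_DISCLOSURE}\n\n{reply}"
        return reply

    return wrapped

# Hypothetical bot that just echoes the user:
bot = start_session(lambda m: f"Echo: {m}")
```

Disclosing once per session (rather than per message) keeps the experience usable while still making the AI nature obvious before any substantive interaction.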
July 2026 — Register and Verify
- Register high-risk systems in the EU database
- Conduct a final compliance review — walk through every requirement against your documentation
- Test incident reporting procedures — you must be able to report serious incidents to national authorities
- Verify third-party compliance — if you use AI tools from vendors, confirm they have completed their provider obligations
- Document everything — if it is not written down, it did not happen
Who Enforces This?
The European AI Office oversees general-purpose AI models and coordinates cross-border enforcement, while national competent authorities in each EU member state supervise high-risk AI systems. In Spain, the designated supervisory authority is the Agencia Española de Supervisión de Inteligencia Artificial (AESIA).
Enforcement will be real. The penalty structure (up to EUR 35 million or 7% of global turnover for prohibited practices, and up to EUR 15 million or 3% for high-risk violations) is deliberately modeled on the GDPR to ensure it cannot be dismissed as a cost of doing business.
Do Not Wait Until July
If you are among the 78% of enterprises that have not started this process, the single most important thing you can do right now is find out what you are dealing with. Classify your systems. Understand your obligations. Then build a plan.
The EU AI Act is the most comprehensive AI regulation in the world. It is not going away, and August 2 is not moving. But four months is enough time to get compliant — if you start today.
For detailed guidance, start with our comprehensive EU AI Act resource page.
Sources: artificialintelligenceact.eu, legalnodes.com, secureprivacy.ai
Related reading
- AESIA: What Spain’s AI Watchdog Means for Your Business
- EU AI Act Compliance Guide 2026: What Spanish SMEs Must Do Now
- GDPR and AI Convergence in 2026: Why Local Deployment Is the Only Clean Answer
Need Help Getting Compliant?
VORLUX AI offers a free compliance assessment consultation for EU-based companies. We will review your AI systems, classify your risk level, and give you a clear action plan — no strings attached.
Book your free assessment or email us at hello@vorluxai.com. You have four months. Let’s make them count.