
EU AI Act Compliance Guide 2026: What Spanish SMEs Must Do Now

VORLUX AI

The EU AI Act is no longer a future regulation — it is active law with real enforcement deadlines. If your company uses any AI system, from a chatbot on your website to an automated hiring tool, you have legal obligations right now. This guide breaks down exactly where we stand in April 2026 and what Spanish small and medium enterprises need to do before the final high-risk deadline hits in August.

EU AI Act implementation timeline

```mermaid
gantt
    title EU AI Act Compliance Timeline
    dateFormat  YYYY-MM-DD
    axisFormat  %b %Y

    section Prohibited Practices
    Art. 5 — Banned AI practices       :crit, done, p1, 2025-02-02, 1d

    section GPAI Obligations
    Prepare GPAI documentation          :active, prep1, 2025-04-01, 2025-08-02
    Art. 51-53 — GPAI rules apply       :crit, gpai, 2025-08-02, 1d

    section High-Risk AI
    Conduct AI inventory & classify     :hr1, 2025-08-03, 2026-02-01
    Implement oversight & risk mgmt     :hr2, 2026-02-01, 2026-06-01
    Conformity assessments              :hr3, 2026-06-01, 2026-08-02
    Art. 6-43 — Full enforcement        :crit, milestone, hr4, 2026-08-02, 1d

    section Annex I High-Risk
    Extended high-risk deadline         :ann1, 2026-08-03, 2027-08-02
    Annex I fully enforceable           :crit, milestone, ann2, 2027-08-02, 1d
```

The Three Key Dates You Cannot Ignore

The EU AI Act entered into force on August 1, 2024, but its provisions roll out in phases. Here is the timeline that matters:

February 2, 2025 — Prohibited Practices (ALREADY IN EFFECT)

Eight categories of AI are now banned across the EU. These include social scoring systems, manipulative AI that exploits vulnerabilities, real-time biometric identification in public spaces (with narrow law-enforcement exceptions), and emotion recognition in workplaces and schools. If any of your AI systems fall into these categories, you are already in violation.

Most SMEs do not operate in these categories, but you should audit your tools to confirm. Some third-party AI plugins or SaaS products might include features that cross these lines without your knowledge.

August 2, 2025 — General-Purpose AI (GPAI) Obligations (ALREADY IN EFFECT)

This deadline has already passed. Since August 2, 2025, transparency and documentation requirements apply if you develop or deploy a general-purpose AI model, or use one as a foundation for your products. GPAI providers must publish model cards, document training data summaries, and implement copyright compliance measures.

For SMEs that use models like Llama, Qwen, or Gemma in their products, this means you need clear documentation of which models you use, how they were trained, and what safeguards you have in place. Open-source models with permissive licenses (Apache 2.0, MIT) benefit from some exemptions, but the transparency obligations still apply if you deploy them commercially.

August 2, 2026 — High-Risk AI Systems (THE BIG ONE)

The full compliance framework for high-risk AI systems takes effect. High-risk categories include AI used in employment and worker management, education assessment, credit scoring, law enforcement support, and critical infrastructure management. Companies deploying high-risk AI must implement risk management systems, data governance, technical documentation, human oversight, and conformity assessments.

Risk Classification: Where Does Your AI Fit?

The EU AI Act classifies all AI systems into four tiers:

| Risk Level | Examples | What You Must Do |
| --- | --- | --- |
| Unacceptable | Social scoring, manipulative subliminal AI, real-time public biometric ID | Banned outright since February 2025 |
| High Risk | HR screening tools, credit scoring, medical diagnosis AI, educational assessment | Full compliance: risk management, data governance, human oversight, conformity assessment |
| Limited Risk | Chatbots, AI content generators, recommendation systems | Transparency: users must know they are interacting with AI |
| Minimal Risk | Spam filters, inventory optimization, internal analytics | No specific obligations, but good practices encouraged |

The reality for most Spanish SMEs: Your AI likely falls into “limited risk” — customer-facing chatbots, document automation, content generation, or internal process optimization. You need transparency labels and basic documentation, but not the full compliance audit required for high-risk systems.

However, if you use AI for anything related to hiring, employee evaluation, customer creditworthiness, or public-facing decisions that affect people’s rights, you are in high-risk territory and need to start preparing now.

What Spanish SMEs Need to Do RIGHT NOW

1. Conduct an AI Inventory

List every AI tool your company uses. Include SaaS subscriptions (ChatGPT, Copilot, Jasper), embedded AI features in your existing software, custom models, and any automated decision-making systems. You cannot comply with what you do not know about.
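One lightweight way to keep this inventory auditable is to store it as structured records rather than a loose spreadsheet. The sketch below is illustrative only: the field names and example entries are hypothetical, not an official schema.

```python
# Minimal sketch of an AI inventory as structured records.
# Field names and example entries are illustrative, not a legal standard.
from dataclasses import dataclass, field


@dataclass
class AISystem:
    name: str                      # tool or model name
    vendor: str                    # provider, or "internal" for custom models
    purpose: str                   # what the system is used for
    data_processed: list[str] = field(default_factory=list)


inventory = [
    AISystem("Website chatbot", "SaaS provider", "customer support",
             ["customer questions"]),
    AISystem("CV screening tool", "HR platform", "hiring",
             ["applicant CVs", "personal data"]),
]

# Quick completeness check: every system must state its purpose.
assert all(s.purpose for s in inventory)
```

Keeping the inventory in version control gives you a dated audit trail of when each tool was added or retired.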

2. Classify Each System by Risk Level

Map each AI tool to the EU risk categories above. Most will be minimal or limited risk. Flag anything that touches hiring, credit, education, or public-facing automated decisions.
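A first-pass triage of that mapping can be automated. The keyword lists below are a deliberate simplification for internal screening, not legal advice: always verify borderline cases against Article 5 and Annex III of the Act itself.

```python
# Illustrative triage of AI use cases into EU AI Act risk tiers.
# The category lists are a simplification, not a legal determination.

PROHIBITED_AREAS = {"social scoring", "public biometric identification",
                    "workplace emotion recognition"}
HIGH_RISK_AREAS = {"hiring", "employee evaluation", "credit scoring",
                   "education assessment", "law enforcement",
                   "critical infrastructure"}
LIMITED_RISK_AREAS = {"chatbot", "content generation", "recommendations"}


def classify_risk(use_case: str) -> str:
    """Rough first-pass triage of one AI use case into an EU risk tier."""
    if use_case in PROHIBITED_AREAS:
        return "unacceptable"
    if use_case in HIGH_RISK_AREAS:
        return "high"
    if use_case in LIMITED_RISK_AREAS:
        return "limited"
    return "minimal"  # e.g. spam filters, internal analytics


print(classify_risk("hiring"))   # high
print(classify_risk("chatbot"))  # limited
```

Anything the triage flags as "high" or "unacceptable" should go to a human reviewer with the actual legal text in hand.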

3. Implement Transparency Measures

For all limited-risk AI: add clear labels so users know they are interacting with AI. Your chatbot needs a visible “Powered by AI” notice. AI-generated content should be marked. This is already best practice and will become legally required.

4. Document Your AI Systems

Prepare technical documentation for your AI deployments. Include the model used, its intended purpose, training data provenance (especially for open-source models), and any safeguards or human oversight in place. This documentation has been required for GPAI compliance since August 2025, and it feeds directly into the high-risk obligations arriving in August 2026.
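A documentation record can be as simple as a structured file per deployment. The field names below follow no official schema; they are a hypothetical starting point to adapt to your own process.

```python
# A hypothetical minimal documentation record for one deployed model.
# Field names are illustrative; adapt them to your own compliance process.
import json

model_doc = {
    "model": "example-slm-7b",          # illustrative model name
    "version": "2025-06-01",
    "license": "Apache-2.0",
    "intended_purpose": "internal document summarisation",
    "training_data_summary": "see upstream model card",
    "human_oversight": "output reviewed before external use",
    "deployment": "on-premises, no external API calls",
}

# Serialise to JSON and keep it under version control
# alongside the deployment configuration it describes.
print(json.dumps(model_doc, indent=2))
```

One record per deployed system, updated whenever the model or its purpose changes, is usually enough for an SME-scale audit trail.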

5. Designate a Compliance Point Person

Even small companies need someone responsible for AI compliance. This does not have to be a full-time role, but someone needs to track regulatory updates and ensure your systems stay aligned.

How Local/Edge AI Gives You a Compliance Advantage

Here is where the architecture of your AI deployment matters enormously for compliance. Running AI locally — on your own hardware, in your own office — creates a natural data sovereignty boundary that simplifies multiple compliance requirements simultaneously.

Data Sovereignty by Design

When your AI runs on a local device (a Mac Mini, an NVIDIA Jetson, or an Intel NUC sitting in your office), your data never leaves your premises. This is not just a privacy feature — it is a compliance shortcut. GDPR Article 44 restricts international data transfers. The EU AI Act layers additional data governance requirements on top. Local AI eliminates the entire category of cross-border data transfer risk.

Simplified GPAI Documentation

When you deploy an open-source model locally, you control the entire stack. You know exactly which model version is running, what quantization is applied, and what data flows through it. This makes technical documentation straightforward compared to documenting your usage of an opaque cloud API that can change without notice.
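One practical way to pin "exactly which model version is running" is to fingerprint the local weights file: a cryptographic hash gives you an auditable, tamper-evident version identifier for your documentation. The file path below is a stand-in for demonstration.

```python
# Sketch: fingerprint a local model artifact for technical documentation.
# Hashing the weights file yields a tamper-evident version identifier.
import hashlib
from pathlib import Path


def model_fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a model weights file."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


# Demonstration with a stand-in file; point at your real weights in practice.
demo = Path("model.gguf")
demo.write_bytes(b"demo weights")
print(model_fingerprint(demo))  # record this digest in your technical docs
demo.unlink()
```

Recomputing the digest at audit time proves the deployed artifact is the one your documentation describes, something a cloud API that updates silently cannot offer.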

Reduced Attack Surface

Fewer network connections mean fewer compliance risks. A local AI system does not expose your data to third-party processors, reducing both your GDPR processor agreements and your AI Act risk assessment scope.

Cost Predictability

Cloud AI billing is variable and hard to budget. Local hardware is a one-time purchase with predictable energy costs. For SMEs managing tight budgets, this predictability also simplifies compliance cost reporting.

The Vorlux AI Approach

At Vorlux AI, we deploy small language models (SLMs) on local hardware specifically designed for Spanish SME compliance needs. Our edge AI deployments run models like Qwen 2.5, Gemma 3, and Phi-4 on devices that sit in your office. Your data stays in Spain, on your network, under your control.

This is not just a technology choice — it is a compliance strategy. By keeping AI local, you reduce your regulatory surface area while gaining the productivity benefits of AI automation.

Next Steps

The August 2026 deadline is four months away. The time to prepare is now — not when enforcement actions begin.


Sources: EU AI Act Official · EU AI Act Service Desk

Vorlux AI helps Spanish SMEs deploy AI that is private, compliant, and cost-effective. Our edge AI solutions run on local hardware, keeping your data sovereign while meeting EU AI Act and GDPR requirements.


Ready to Get Started?

VORLUX AI helps Spanish and European businesses deploy AI solutions that stay on your hardware, under your control. Whether you need edge AI deployment, LMS integration, or EU AI Act compliance consulting — we can help.

Book a free discovery call to discuss your AI strategy, or explore our services to see how we work.
