
AI Literacy Training Plan Template

A structured AI literacy training plan for organisations deploying AI systems under Article 4 of the EU AI Act, with role-based objectives, training modules, and competency levels.

Bilingual / Bilingüe

This template includes both English and Spanish versions. Scroll down to find "Versión Española".

Disclaimer: This template is provided for guidance purposes only. It does not constitute legal advice. Organisations should consult qualified legal counsel to ensure compliance with applicable laws and regulations.

Template provided by VORLUX AI — vorluxai.com


AI Literacy Training Plan

EU AI Act — Article 4 Compliance Template

Organisation: _______________
Document Reference: AITP-[YYYY]-[NNN]
Version: _______________
Prepared by: _______________
Approved by: _______________
Effective Date: _______________
Review Date: _______________


Section 1: Purpose and Scope

Article 4 of the EU AI Act requires providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy among their staff and all other persons dealing with the operation and use of AI systems on their behalf.

This training plan documents how [Organisation Name] meets this obligation.

AI Systems in Scope:

System Name | Risk Classification | Primary Users | Go-Live Date
_______________ | ☐ High-risk ☐ Limited ☐ Minimal | _______________ | _______________
_______________ | ☐ High-risk ☐ Limited ☐ Minimal | _______________ | _______________
_______________ | ☐ High-risk ☐ Limited ☐ Minimal | _______________ | _______________

Section 2: Competency Framework

2.1 Competency Levels

This organisation defines three AI literacy competency levels:


Level 1 — Basic (Foundation)

Target audience: All employees who may encounter AI-generated outputs or work in environments where AI systems operate.

Core competencies at this level:

  • Understands what AI is and what it is not (limitations, errors, hallucinations)
  • Can identify when AI is being used in a process or tool
  • Knows their right to request human review of AI decisions
  • Understands basic data privacy considerations when using AI tools
  • Knows how to report AI-related concerns internally

Assessment threshold: 70% on foundation assessment


Level 2 — Intermediate (Practitioner)

Target audience: Employees who regularly use AI systems as part of their role, including HR practitioners, customer service agents, analysts, and operations staff.

Core competencies at this level:

  • All Level 1 competencies, plus:
  • Understands AI risk categories under the EU AI Act (unacceptable, high, limited, minimal)
  • Can identify potential bias in AI outputs and apply critical evaluation
  • Knows the organisation’s AI governance policies and escalation procedures
  • Understands data quality requirements for AI systems
  • Can apply the organisation’s human oversight protocols
  • Understands transparency obligations to affected individuals

Assessment threshold: 75% on practitioner assessment


Level 3 — Expert (Governance & Technical)

Target audience: AI system developers, data scientists, compliance officers, legal counsel, senior managers, and AI governance board members.

Core competencies at this level:

  • All Level 1 and Level 2 competencies, plus:
  • Deep knowledge of the EU AI Act obligations by actor type (provider, deployer, importer)
  • Ability to conduct or review Fundamental Rights Impact Assessments
  • Technical understanding of model evaluation, bias testing, and performance metrics
  • Knows conformity assessment procedures and CE marking requirements
  • Understands post-market monitoring obligations
  • Can interpret AI audit logs and incident reports
  • Understands international AI regulatory landscape (UK, USA, China, etc.)

Assessment threshold: 80% on expert assessment + practical case study


2.2 Competency Mapping by Role

Role | Required Level | Training Track
All employees (general) | Level 1 | Foundation Track
Executive / Senior Management | Level 2 | Leadership Track
HR / People Teams | Level 2 | Practitioner Track
Customer Service / Operations | Level 2 | Practitioner Track
Marketing / Communications | Level 1–2 | Practitioner Track
Finance / Procurement | Level 2 | Practitioner Track
Legal / Compliance | Level 3 | Expert Track
IT / Data Engineering | Level 3 | Expert Track
Data Scientists / ML Engineers | Level 3 | Expert Track
Product Managers (AI products) | Level 3 | Expert Track
AI Governance Board | Level 3 | Expert Track
DPO / AI Officer | Level 3 | Expert Track
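
The mapping in section 2.2 can be encoded so an HR or LMS system flags training gaps automatically. A minimal sketch in Python, with hypothetical role keys mirroring the table (subset shown); any unlisted role defaults to Level 1:

```python
# Hypothetical sketch of the section 2.2 role-to-level mapping.
# Role names are illustrative keys, not a required schema.
REQUIRED_LEVEL = {
    "All employees (general)": 1,
    "Executive / Senior Management": 2,
    "HR / People Teams": 2,
    "Legal / Compliance": 3,
    "Data Scientists / ML Engineers": 3,
}

def training_gap(role: str, completed_level: int) -> int:
    """Levels an employee still needs to reach compliance (0 = compliant)."""
    required = REQUIRED_LEVEL.get(role, 1)  # Foundation applies to every role
    return max(0, required - completed_level)
```

For example, a Legal / Compliance employee who has completed only the Foundation track has a gap of two levels and should be enrolled in the Practitioner and Expert tracks.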

Section 3: Learning Objectives by Role

3.1 All Staff — Foundation Objectives

By the end of the Foundation Track, participants will be able to:

  • Define artificial intelligence and distinguish it from traditional software
  • Explain in plain terms what machine learning and generative AI mean
  • Recognise signs that an AI system may be producing inaccurate or biased outputs
  • State their rights as an individual subject to AI-assisted decisions
  • Identify the organisation’s AI usage policies
  • Know who to contact with AI-related questions or concerns
  • Describe the basic concept of data privacy and why it matters in AI contexts

3.2 Practitioners — Additional Objectives

By the end of the Practitioner Track, participants will also be able to:

  • Identify the EU AI Act risk tier of each AI system they use
  • Apply structured critical thinking when reviewing AI outputs (SCAT framework: Source, Confidence, Alternatives, Traceability)
  • Execute the human oversight protocol for their specific AI system(s)
  • Document and escalate AI incidents using the organisation’s procedure
  • Explain transparency requirements to customers or clients
  • Recognise discriminatory or unfair patterns in AI recommendations
  • Complete a post-decision review when overriding an AI recommendation
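
The SCAT review and post-decision override documentation described above could be captured as a structured record. A hypothetical sketch; the field names are illustrative and not part of the template:

```python
# Hypothetical record for documenting a SCAT review (Source, Confidence,
# Alternatives, Traceability) and any override of an AI recommendation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScatReview:
    system_name: str
    recommendation: str
    source: str          # where the AI's evidence came from
    confidence: str      # how certain the output appears, and why
    alternatives: str    # options the AI did not surface
    traceability: str    # whether the reasoning can be reconstructed
    overridden: bool
    override_reason: str = ""
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def requires_escalation(self) -> bool:
        # An override without a documented reason is incomplete and
        # should be escalated per the organisation's procedure.
        return self.overridden and not self.override_reason.strip()
```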

3.3 Experts — Additional Objectives

By the end of the Expert Track, participants will also be able to:

  • Interpret the EU AI Act Articles 9–17 (technical documentation, data governance, accuracy requirements)
  • Design and interpret a bias/fairness audit report
  • Conduct a FRIA or a Data Protection Impact Assessment (DPIA) for an AI system
  • Evaluate vendor AI claims against regulatory standards
  • Write AI governance policy and internal standards
  • Brief executives, regulators, or auditors on AI compliance posture
  • Manage an AI incident response from detection to post-incident review

Section 4: Training Modules

Module 1: What is AI? (Foundation — All Staff)

Parameter | Detail
Duration | 45 minutes
Format | ☐ E-learning ☐ Live workshop ☐ Blended
Delivery | Self-paced
Frequency | Once on onboarding; annually refreshed
Prerequisites | None

Topics covered:

  1. History and current state of AI
  2. How machine learning models are trained
  3. What AI can and cannot do (limitations and risks)
  4. Generative AI: capabilities, risks, hallucination
  5. AI in everyday tools (email, search, HR systems)
  6. Real-world AI mistakes and their consequences

Knowledge check questions (sample):

  • What is the key difference between a rule-based system and a machine learning model?
  • Give two examples of tasks where AI tends to make errors.
  • What should you do if you suspect an AI recommendation is incorrect?

Module 2: AI at Our Organisation (Foundation — All Staff)

Parameter | Detail
Duration | 30 minutes
Format | ☐ E-learning ☐ Live workshop ☐ Blended
Delivery | Self-paced
Frequency | Once on onboarding; updated when new systems are deployed
Prerequisites | Module 1

Topics covered:

  1. Which AI systems our organisation uses and why
  2. How AI outputs influence our work processes
  3. Our AI usage policy and code of conduct
  4. Data privacy basics when using AI tools
  5. How to report a concern or raise a question

Module 3: EU AI Act — What It Means for You (Foundation + Practitioner)

Parameter | Detail
Duration | 60 minutes
Format | ☐ E-learning ☐ Live workshop ☐ Blended
Delivery | Recommended: live with Q&A
Frequency | Annually or when regulation updates
Prerequisites | Module 1

Topics covered:

  1. What is the EU AI Act and when does it apply?
  2. Risk tiers: unacceptable, high-risk, limited, minimal
  3. Rights of individuals affected by AI (transparency, human review, redress)
  4. Our obligations as a deployer
  5. Prohibited AI practices: social scoring, manipulation, real-time biometric surveillance
  6. Penalties for non-compliance

Module 4: Human Oversight and Critical Evaluation (Practitioner)

Parameter | Detail
Duration | 90 minutes
Format | ☐ E-learning ☐ Live workshop ☐ Blended
Delivery | Live workshop recommended
Frequency | Annually
Prerequisites | Modules 1–3

Topics covered:

  1. The SCAT framework for evaluating AI outputs
  2. When and how to override an AI recommendation
  3. Documenting overrides and reasoning
  4. Recognising algorithmic bias in practice
  5. Human-in-the-loop vs. human-on-the-loop roles
  6. Case studies: AI failures and how human oversight would have caught them

Practical exercise: Participants review three AI recommendations in their domain and document their assessment using the SCAT framework.


Module 5: AI Governance and Compliance Deep-Dive (Expert)

Parameter | Detail
Duration | Half-day (4 hours)
Format | ☐ E-learning ☐ Live workshop ☐ Blended
Delivery | Instructor-led with case studies
Frequency | Annually
Prerequisites | Modules 1–4

Topics covered:

  1. EU AI Act Articles 9–17: obligations for high-risk AI providers
  2. Technical documentation requirements
  3. Conformity assessment and CE marking
  4. Bias and fairness testing methodologies
  5. Post-market monitoring and incident reporting
  6. FRIA walkthrough
  7. Vendor due diligence for AI systems
  8. Regulatory horizon: international AI law

Assessment: Written case study (60 minutes) evaluated by AI governance lead.


Module 6: Annual Refresher (All Levels)

Parameter | Detail
Duration | 20–30 minutes
Format | ☐ E-learning ☐ Live workshop ☐ Blended
Delivery | Self-paced
Frequency | Annually
Prerequisites | Prior year completion

Topics covered:

  1. Regulatory updates from the past year
  2. Lessons learned from internal AI incidents
  3. Updates to AI systems or governance policies
  4. Refreshed scenarios and quiz

Section 5: Assessment Criteria

5.1 Assessment Methods

Level | Assessment Type | Pass Threshold | Retake Policy
Level 1 (Foundation) | 20-question multiple-choice quiz | 70% (14/20) | Up to 2 retakes; re-training required after 2nd fail
Level 2 (Practitioner) | 30-question quiz + 1 practical scenario | 75% quiz + satisfactory scenario | Up to 2 retakes
Level 3 (Expert) | 40-question quiz + written case study | 80% quiz + case study pass | Re-training required; case study reviewed by governance lead
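
The pass thresholds and retake rules in section 5.1 can be expressed as simple logic. A sketch assuming raw quiz scores are exported from the LMS; the function names are illustrative:

```python
# Sketch of the section 5.1 pass/retake rules. Thresholds come from the
# assessment table; everything else is an illustrative assumption.
THRESHOLDS = {1: 0.70, 2: 0.75, 3: 0.80}  # level -> minimum quiz score
MAX_RETAKES = 2

def quiz_passed(level: int, correct: int, total: int) -> bool:
    """True if the quiz score meets the threshold for the given level."""
    return total > 0 and correct / total >= THRESHOLDS[level]

def next_step(level: int, correct: int, total: int, attempts: int) -> str:
    """Decide the outcome after an attempt; `attempts` counts tries so far."""
    if quiz_passed(level, correct, total):
        return "passed"
    if attempts <= MAX_RETAKES:
        return "retake"
    return "re-training required"
```

Here `attempts` counts quiz attempts taken so far, so a third failed attempt (the initial try plus two retakes) triggers re-training, matching the Foundation retake policy.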

5.2 Sample Assessment Questions

Foundation Level:

  1. An AI system tells you a job applicant has a low suitability score. What should you do first?

    • a) Reject the applicant immediately
    • b) Critically review the AI’s reasoning and consider other factors ✓
    • c) Ask a colleague what they think without telling them it’s AI-generated
    • d) Accept the AI’s judgment as it is objective
  2. Under the EU AI Act, AI systems used for scoring individuals’ social behaviour are:

    • a) Permitted with user consent
    • b) Permitted for public bodies only
    • c) Prohibited ✓
    • d) Classified as limited-risk

Practitioner Level:

3. You notice that an AI hiring tool recommends far fewer women than men for technical roles. What is the most appropriate first step?

  • a) Assume the data reflects real-world patterns
  • b) Flag it as a potential bias issue to your AI Officer and suspend use pending investigation ✓
  • c) Manually adjust all female candidates’ scores upward
  • d) Report it externally to a regulator immediately

Expert Level:

4. Outline the key obligations of a deployer (not provider) of a high-risk AI system under Article 26 of the EU AI Act.

(Open-ended — assessed against model answer)


Section 6: Training Schedule

6.1 Rollout Timeline

Phase | Activity | Target Group | Deadline | Owner
Phase 1 | Foundation track launch | All existing staff | [Date] | HR
Phase 1 | Foundation track for new joiners (integrated into onboarding) | New hires | [Date] | HR
Phase 2 | Practitioner track | Identified practitioner roles | [Date] | AI Officer
Phase 3 | Expert track | Governance / tech / legal | [Date] | AI Officer
Phase 4 | Annual refresher cycle | All staff | [Annually] | HR + AI Officer

6.2 Completion Tracking

Metric | Target | Measurement Method
Foundation track completion rate | 100% of all staff | LMS report
Practitioner track completion rate | 100% of identified practitioners | LMS report
Expert track completion rate | 100% of identified experts | LMS report + assessment record
Pass rate (first attempt) | >85% | LMS report
Annual refresher completion | 100% by [date each year] | LMS report
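
The completion metrics above can be computed from exported LMS records. A sketch assuming each record is a dict with the hypothetical fields `track`, `completed`, and `passed_first_attempt`:

```python
# Sketch of the section 6.2 metrics over illustrative LMS export records.
def completion_rate(records: list[dict], track: str) -> float:
    """Share of staff assigned to `track` who have completed it (target: 100%)."""
    assigned = [r for r in records if r["track"] == track]
    if not assigned:
        return 0.0
    return sum(r["completed"] for r in assigned) / len(assigned)

def first_attempt_pass_rate(records: list[dict]) -> float:
    """Share of completed trainings passed on the first attempt (target: >85%)."""
    done = [r for r in records if r["completed"]]
    if not done:
        return 0.0
    return sum(r["passed_first_attempt"] for r in done) / len(done)
```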

Learning Management System (LMS) used: _______________
Training records retention period: Minimum 5 years (aligned with AI Act documentation obligations)


Section 7: Governance and Review

7.1 Roles and Responsibilities

Role | Responsibility
AI Officer / DPO | Overall ownership of AI literacy programme
HR Manager | Coordination with LMS, onboarding integration
Department Managers | Ensuring team completion; flagging gaps
Legal / Compliance | Regulatory update monitoring
IT / Security | LMS infrastructure and access management

7.2 Programme Review Cycle

This training plan will be reviewed:

  • Annually (minimum)
  • When a new high-risk AI system is deployed
  • When the EU AI Act or related guidance is updated
  • After a significant AI incident

Last reviewed: _______________
Next scheduled review: _______________
Reviewed by: _______________


Template provided by VORLUX AI | vorluxai.com
Version 1.0 — April 2026 | EU AI Act Article 4 compliance template
This is guidance only, not legal advice. Consult qualified legal counsel for your specific situation.


Versión Española

Aviso: Este modelo se proporciona únicamente a título orientativo. No constituye asesoramiento legal. Las organizaciones deben consultar a asesores jurídicos cualificados para garantizar el cumplimiento de las leyes y normativas aplicables.


Plan de Formación en Alfabetización en IA

Ley de IA de la UE — Plantilla de cumplimiento del Artículo 4

Organización: _______________
Referencia del documento: AITP-[YYYY]-[NNN]
Versión: _______________
Preparado por: _______________
Aprobado por: _______________
Fecha de entrada en vigor: _______________
Fecha de revisión: _______________


Sección 1: Objeto y alcance

El artículo 4 de la Ley de IA de la UE exige que los proveedores y los responsables del despliegue de sistemas de IA adopten medidas para garantizar un nivel suficiente de alfabetización en IA entre su personal y las demás personas que se encarguen del funcionamiento y la utilización de los sistemas de IA en su nombre.

Este plan de capacitación documenta cómo [Nombre de la Organización] cumple con esta obligación.

Sistemas de IA incluidos en el alcance:

Nombre del sistema | Clasificación de riesgo | Usuarios principales | Fecha de puesta en marcha
_______________ | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | _______________ | _______________
_______________ | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | _______________ | _______________
_______________ | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | _______________ | _______________

Sección 2: Marco de competencias

2.1 Niveles de competencia

Esta organización define tres niveles de competencia en alfabetización en IA:


Nivel 1 — Básico (Fundamental)

Público objetivo: Todos los empleados que puedan encontrarse con resultados generados por IA o que trabajen en entornos donde operan sistemas de IA.

Competencias básicas a este nivel:

  • Entiende qué es y qué no es la IA (limitaciones, errores, alucinaciones)
  • Puede identificar cuándo se utiliza la IA en un proceso o herramienta
  • Conoce su derecho a solicitar una revisión humana de las decisiones de la IA
  • Entiende las consideraciones básicas de privacidad de datos al usar herramientas de IA
  • Sabe cómo comunicar internamente las preocupaciones relacionadas con la IA

Umbral de evaluación: 70% en la evaluación de fundamentos


Nivel 2 — Intermedio (Practicante)

Público objetivo: Empleados que utilizan regularmente sistemas de IA como parte de su función, incluidos profesionales de RRHH, agentes de atención al cliente, analistas y personal de operaciones.

Competencias básicas a este nivel:

  • Todas las competencias del Nivel 1, más:
  • Entiende las categorías de riesgo de la IA según la Ley de IA de la UE (inaceptable, alto, limitado, mínimo)
  • Puede identificar posibles sesgos en los resultados de la IA y aplicar una evaluación crítica
  • Conoce las políticas de gobernanza de la IA de la organización y los procedimientos de escalada
  • Entiende los requisitos de calidad de datos para sistemas de IA
  • Puede aplicar los protocolos de supervisión humana de la organización
  • Entiende obligaciones de transparencia con respecto a las personas afectadas

Umbral de evaluación: 75% en la evaluación del practicante


Nivel 3 — Experto (Gobernanza y técnico)

Público objetivo: Desarrolladores de sistemas de IA, científicos de datos, oficiales de cumplimiento, consejeros legales, gerentes senior y miembros del comité de gobernanza de la IA.

Competencias básicas a este nivel:

  • Todas las competencias del Nivel 1 y del Nivel 2, más:
  • Conocimiento profundo de las obligaciones de la Ley de IA de la UE según el tipo de actor (proveedor, responsable del despliegue, importador)
  • Capacidad para conducir o revisar evaluaciones de impacto en derechos fundamentales
  • Comprensión técnica de la evaluación de modelos, pruebas de sesgo y métricas de rendimiento
  • Conoce los procedimientos de evaluación de la conformidad y los requisitos del marcado CE
  • Entiende obligaciones de monitoreo post-comercialización
  • Puede interpretar registros de auditoría de IA e informes de incidentes
  • Entiende el panorama regulatorio internacional de la IA (Reino Unido, EE. UU., China, etc.)

Umbral de evaluación: 80% en la evaluación del experto + caso práctico


2.2 Mapeo de competencias por función

Función | Nivel requerido | Itinerario de formación
Todos los empleados (general) | Nivel 1 | Itinerario de fundamentos
Ejecutivos / Alta dirección | Nivel 2 | Itinerario de liderazgo
RRHH / Equipos de personas | Nivel 2 | Itinerario de practicante
Atención al cliente / Operaciones | Nivel 2 | Itinerario de practicante
Marketing / Comunicación | Nivel 1–2 | Itinerario de practicante
Finanzas / Adquisiciones | Nivel 2 | Itinerario de practicante