AI Literacy Training Plan Template
A structured AI literacy training plan for organisations deploying AI systems under Article 4 of the EU AI Act, with role-based objectives, modules, and competency levels.
This template includes both English and Spanish versions. Scroll down to find "Versión Española".
Disclaimer: This template is provided for guidance purposes only. It does not constitute legal advice. Organisations should consult qualified legal counsel to ensure compliance with applicable laws and regulations.
Template provided by VORLUX AI — vorluxai.com
AI Literacy Training Plan
EU AI Act — Article 4 Compliance Template
Organisation: _______________
Document Reference: AITP-[YYYY]-[NNN]
Version: _______________
Prepared by: _______________
Approved by: _______________
Effective Date: _______________
Review Date: _______________
Section 1: Purpose and Legal Basis
Article 4 of the EU AI Act requires providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy among their staff and all other persons dealing with the operation and use of AI systems on their behalf.
This training plan documents how [Organisation Name] meets this obligation.
AI Systems in Scope:
| System Name | Risk Classification | Primary Users | Go-Live Date |
|---|---|---|---|
| | ☐ High-risk ☐ Limited ☐ Minimal | | |
| | ☐ High-risk ☐ Limited ☐ Minimal | | |
| | ☐ High-risk ☐ Limited ☐ Minimal | | |
Section 2: Competency Framework
2.1 Competency Levels
This organisation defines three AI literacy competency levels:
Level 1 — Basic (Foundation)
Target audience: All employees who may encounter AI-generated outputs or work in environments where AI systems operate.
Core competencies at this level:
- Understands what AI is and what it is not (limitations, errors, hallucinations)
- Can identify when AI is being used in a process or tool
- Knows their right to request human review of AI decisions
- Understands basic data privacy considerations when using AI tools
- Knows how to report AI-related concerns internally
Assessment threshold: 70% on foundation assessment
Level 2 — Intermediate (Practitioner)
Target audience: Employees who regularly use AI systems as part of their role, including HR practitioners, customer service agents, analysts, and operations staff.
Core competencies at this level:
- All Level 1 competencies, plus:
- Understands AI risk categories under the EU AI Act (unacceptable, high, limited, minimal)
- Can identify potential bias in AI outputs and apply critical evaluation
- Knows the organisation’s AI governance policies and escalation procedures
- Understands data quality requirements for AI systems
- Can apply the organisation’s human oversight protocols
- Understands transparency obligations to affected individuals
Assessment threshold: 75% on practitioner assessment
Level 3 — Expert (Governance & Technical)
Target audience: AI system developers, data scientists, compliance officers, legal counsel, senior managers, and AI governance board members.
Core competencies at this level:
- All Level 1 and Level 2 competencies, plus:
- Deep knowledge of the EU AI Act obligations by actor type (provider, deployer, importer)
- Ability to conduct or review Fundamental Rights Impact Assessments
- Technical understanding of model evaluation, bias testing, and performance metrics
- Knows conformity assessment procedures and CE marking requirements
- Understands post-market monitoring obligations
- Can interpret AI audit logs and incident reports
- Understands international AI regulatory landscape (UK, USA, China, etc.)
Assessment threshold: 80% on expert assessment + practical case study
2.2 Competency Mapping by Role
| Role | Required Level | Training Track |
|---|---|---|
| All employees (general) | Level 1 | Foundation Track |
| Executive / Senior Management | Level 2 | Leadership Track |
| HR / People Teams | Level 2 | Practitioner Track |
| Customer Service / Operations | Level 2 | Practitioner Track |
| Marketing / Communications | Level 1–2 | Practitioner Track |
| Finance / Procurement | Level 2 | Practitioner Track |
| Legal / Compliance | Level 3 | Expert Track |
| IT / Data Engineering | Level 3 | Expert Track |
| Data Scientists / ML Engineers | Level 3 | Expert Track |
| Product Managers (AI products) | Level 3 | Expert Track |
| AI Governance Board | Level 3 | Expert Track |
| DPO / AI Officer | Level 3 | Expert Track |
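The role-to-level mapping above can also be held as a simple lookup so that LMS enrolment feeds stay consistent with this table. A minimal sketch (the dictionary contents mirror the table; the function name and default are illustrative assumptions, not a real LMS API):

```python
# Role -> (required competency level, training track), mirroring the
# competency mapping table above. Extend with the remaining rows as needed.
REQUIRED_LEVEL = {
    "All employees (general)": (1, "Foundation Track"),
    "Executive / Senior Management": (2, "Leadership Track"),
    "HR / People Teams": (2, "Practitioner Track"),
    "Customer Service / Operations": (2, "Practitioner Track"),
    "Legal / Compliance": (3, "Expert Track"),
    "Data Scientists / ML Engineers": (3, "Expert Track"),
}

def track_for(role: str) -> str:
    """Return the training track for a role.

    Unknown roles fall back to the Foundation Track, matching the
    "all employees" baseline in the table.
    """
    _level, track = REQUIRED_LEVEL.get(role, (1, "Foundation Track"))
    return track
```

Keeping the mapping in one place means a new role only needs one table row and one dictionary entry to stay in sync.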
Section 3: Learning Objectives by Role
3.1 All Staff — Foundation Objectives
By the end of the Foundation Track, participants will be able to:
- Define artificial intelligence and distinguish it from traditional software
- Explain in plain terms what machine learning and generative AI mean
- Recognise signs that an AI system may be producing inaccurate or biased outputs
- State their rights as an individual subject to AI-assisted decisions
- Identify the organisation’s AI usage policies
- Know who to contact with AI-related questions or concerns
- Describe the basic concept of data privacy and why it matters in AI contexts
3.2 Practitioners — Additional Objectives
By the end of the Practitioner Track, participants will also be able to:
- Identify the EU AI Act risk tier of each AI system they use
- Apply structured critical thinking when reviewing AI outputs (SCAT framework: Source, Confidence, Alternatives, Traceability)
- Execute the human oversight protocol for their specific AI system(s)
- Document and escalate AI incidents using the organisation’s procedure
- Explain transparency requirements to customers or clients
- Recognise discriminatory or unfair patterns in AI recommendations
- Complete a post-decision review when overriding an AI recommendation
3.3 Experts — Additional Objectives
By the end of the Expert Track, participants will also be able to:
- Interpret the EU AI Act Articles 9–17 (technical documentation, data governance, accuracy requirements)
- Design and interpret a bias/fairness audit report
- Conduct a FRIA or a Data Protection Impact Assessment (DPIA) for an AI system
- Evaluate vendor AI claims against regulatory standards
- Write AI governance policy and internal standards
- Brief executives, regulators, or auditors on AI compliance posture
- Manage an AI incident response from detection to post-incident review
Section 4: Training Modules
Module 1: What is AI? (Foundation — All Staff)
| Parameter | Detail |
|---|---|
| Duration | 45 minutes |
| Format | ☐ E-learning ☐ Live workshop ☐ Blended |
| Delivery | Self-paced |
| Frequency | Once on onboarding; annually refreshed |
| Prerequisites | None |
Topics covered:
- History and current state of AI
- How machine learning models are trained
- What AI can and cannot do (limitations and risks)
- Generative AI: capabilities, risks, hallucination
- AI in everyday tools (email, search, HR systems)
- Real-world AI mistakes and their consequences
Knowledge check questions (sample):
- What is the key difference between a rule-based system and a machine learning model?
- Give two examples of tasks where AI tends to make errors.
- What should you do if you suspect an AI recommendation is incorrect?
Module 2: AI at Our Organisation (Foundation — All Staff)
| Parameter | Detail |
|---|---|
| Duration | 30 minutes |
| Format | ☐ E-learning ☐ Live workshop ☐ Blended |
| Delivery | Self-paced |
| Frequency | Once on onboarding; updated when new systems are deployed |
| Prerequisites | Module 1 |
Topics covered:
- Which AI systems our organisation uses and why
- How AI outputs influence our work processes
- Our AI usage policy and code of conduct
- Data privacy basics when using AI tools
- How to report a concern or raise a question
Module 3: EU AI Act — What It Means for You (Foundation + Practitioner)
| Parameter | Detail |
|---|---|
| Duration | 60 minutes |
| Format | ☐ E-learning ☐ Live workshop ☐ Blended |
| Delivery | Recommended: live with Q&A |
| Frequency | Annually or when regulation updates |
| Prerequisites | Module 1 |
Topics covered:
- What is the EU AI Act and when does it apply?
- Risk tiers: unacceptable, high-risk, limited, minimal
- Rights of individuals affected by AI (transparency, human review, redress)
- Our obligations as a deployer
- Prohibited AI practices: social scoring, manipulation, real-time biometric surveillance
- Penalties for non-compliance
Module 4: Human Oversight and Critical Evaluation (Practitioner)
| Parameter | Detail |
|---|---|
| Duration | 90 minutes |
| Format | ☐ E-learning ☐ Live workshop ☐ Blended |
| Delivery | Live workshop recommended |
| Frequency | Annually |
| Prerequisites | Modules 1–3 |
Topics covered:
- The SCAT framework for evaluating AI outputs
- When and how to override an AI recommendation
- Documenting overrides and reasoning
- Recognising algorithmic bias in practice
- Human-in-the-loop vs. human-on-the-loop roles
- Case studies: AI failures and how human oversight would have caught them
Practical exercise: Participants review three AI recommendations in their domain and document their assessment using the SCAT framework.
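Reviews documented in the practical exercise above can be captured in a structured, auditable form. A sketch of one possible record, assuming the four SCAT fields plus an override flag (field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class SCATReview:
    """One documented review of an AI recommendation using the SCAT
    framework: Source, Confidence, Alternatives, Traceability."""
    recommendation_id: str
    source: str            # where the AI output came from and what data fed it
    confidence: str        # how much weight the reviewer gives it, and why
    alternatives: str      # other options or interpretations considered
    traceability: str      # how the decision can be reconstructed later
    overridden: bool = False
    override_rationale: str = ""

    def is_complete(self) -> bool:
        """Complete when all four SCAT fields are filled in and any
        override is accompanied by a documented rationale."""
        scat_filled = all([self.source, self.confidence,
                           self.alternatives, self.traceability])
        override_ok = (not self.overridden) or bool(self.override_rationale)
        return scat_filled and override_ok
```

A completeness check like this makes the "documenting overrides and reasoning" topic enforceable rather than aspirational.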
Module 5: AI Governance and Compliance Deep-Dive (Expert)
| Parameter | Detail |
|---|---|
| Duration | Half-day (4 hours) |
| Format | ☐ E-learning ☐ Live workshop ☐ Blended |
| Delivery | Instructor-led with case studies |
| Frequency | Annually |
| Prerequisites | Modules 1–4 |
Topics covered:
- EU AI Act Articles 9–17: obligations for high-risk AI providers
- Technical documentation requirements
- Conformity assessment and CE marking
- Bias and fairness testing methodologies
- Post-market monitoring and incident reporting
- FRIA walkthrough
- Vendor due diligence for AI systems
- Regulatory horizon: international AI law
Assessment: Written case study (60 minutes) evaluated by AI governance lead.
Module 6: Annual Refresher (All Levels)
| Parameter | Detail |
|---|---|
| Duration | 20–30 minutes |
| Format | ☐ E-learning ☐ Live workshop ☐ Blended |
| Delivery | Self-paced |
| Frequency | Annually |
| Prerequisites | Prior year completion |
Topics covered:
- Regulatory updates from the past year
- Lessons learned from internal AI incidents
- Updates to AI systems or governance policies
- Refreshed scenarios and quiz
Section 5: Assessment Criteria
5.1 Assessment Methods
| Level | Assessment Type | Pass Threshold | Retake Policy |
|---|---|---|---|
| Level 1 (Foundation) | 20-question multiple choice quiz | 70% (14/20) | Up to 2 retakes; re-training required after 2nd fail |
| Level 2 (Practitioner) | 30-question quiz + 1 practical scenario | 75% quiz + satisfactory scenario | Up to 2 retakes |
| Level 3 (Expert) | 40-question quiz + written case study | 80% quiz + case study pass | Re-training required; case study reviewed by governance lead |
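The pass rules in the table above reduce to a per-level check: a quiz percentage against the level's threshold, plus a practical component (scenario at Level 2, case study at Level 3). A sketch using the thresholds from Section 5.1 (function names are illustrative):

```python
# Per level: (number of quiz questions, pass fraction), from Section 5.1.
THRESHOLDS = {1: (20, 0.70), 2: (30, 0.75), 3: (40, 0.80)}

def quiz_passes(level: int, correct: int) -> bool:
    """True if the quiz score alone meets the level's threshold."""
    questions, threshold = THRESHOLDS[level]
    return correct / questions >= threshold

def assessment_passes(level: int, correct: int, practical_ok: bool = True) -> bool:
    """Overall pass: quiz threshold, plus the practical component,
    which is required at Levels 2 and 3 but not at Level 1."""
    practical_required = level >= 2
    return quiz_passes(level, correct) and (practical_ok or not practical_required)
```

Note that 14/20 lands exactly on the Level 1 threshold, matching the "70% (14/20)" wording in the table.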
5.2 Sample Assessment Questions
Foundation Level:

1. An AI system tells you a job applicant has a low suitability score. What should you do first?
   - a) Reject the applicant immediately
   - b) Critically review the AI’s reasoning and consider other factors ✓
   - c) Ask a colleague what they think without telling them it’s AI-generated
   - d) Accept the AI’s judgment as it is objective

2. Under the EU AI Act, AI systems used for scoring individuals’ social behaviour are:
   - a) Permitted with user consent
   - b) Permitted for public bodies only
   - c) Prohibited ✓
   - d) Classified as limited-risk
Practitioner Level:

3. You notice that an AI hiring tool recommends far fewer women than men for technical roles. What is the most appropriate first step?
- a) Assume the data reflects real-world patterns
- b) Flag it as a potential bias issue to your AI Officer and suspend use pending investigation ✓
- c) Manually adjust all female candidates’ scores upward
- d) Report it externally to a regulator immediately
Expert Level:

4. Outline the key obligations of a deployer (not provider) of a high-risk AI system under Article 26 (obligations of deployers) and Article 27 (fundamental rights impact assessment) of the EU AI Act.
(Open-ended — assessed against model answer)
Section 6: Training Schedule
6.1 Rollout Timeline
| Phase | Activity | Target Group | Deadline | Owner |
|---|---|---|---|---|
| Phase 1 | Foundation track launch | All existing staff | [Date] | HR |
| Phase 1 | Foundation track for new joiners (integrated into onboarding) | New hires | [Date] | HR |
| Phase 2 | Practitioner track | Identified practitioner roles | [Date] | AI Officer |
| Phase 3 | Expert track | Governance / tech / legal | [Date] | AI Officer |
| Phase 4 | Annual refresher cycle | All staff | [Annually] | HR + AI Officer |
6.2 Completion Tracking
| Metric | Target | Measurement Method |
|---|---|---|
| Foundation track completion rate | 100% of all staff | LMS report |
| Practitioner track completion rate | 100% of identified practitioners | LMS report |
| Expert track completion rate | 100% of identified experts | LMS report + assessment record |
| Pass rate (first attempt) | >85% | LMS report |
| Annual refresher completion | 100% by [date each year] | LMS report |
Learning Management System (LMS) used: _______________
Training records retention period: Minimum 5 years (aligned with AI Act documentation obligations)
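The targets in 6.2 can be computed directly from an LMS completion export. A minimal sketch, assuming a list of per-person records with `completed` and `passed_first_attempt` flags (the field names are assumptions about the export, not a real LMS schema):

```python
def completion_metrics(records: list[dict]) -> dict:
    """Compute the Section 6.2 metrics from LMS export records.

    Each record is assumed to look like:
        {"completed": bool, "passed_first_attempt": bool}
    Rates are fractions of all in-scope staff.
    """
    total = len(records)
    completed = sum(1 for r in records if r["completed"])
    first_pass = sum(1 for r in records if r["passed_first_attempt"])
    return {
        "completion_rate": completed / total if total else 0.0,         # target: 100%
        "first_attempt_pass_rate": first_pass / total if total else 0.0,  # target: >85%
    }
```

A real report would likely split these by track (Foundation, Practitioner, Expert) before comparing against the table's targets.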
Section 7: Governance and Review
7.1 Roles and Responsibilities
| Role | Responsibility |
|---|---|
| AI Officer / DPO | Overall ownership of AI literacy programme |
| HR Manager | Coordination with LMS, onboarding integration |
| Department Managers | Ensuring team completion; flagging gaps |
| Legal / Compliance | Regulatory update monitoring |
| IT / Security | LMS infrastructure and access management |
7.2 Programme Review Cycle
This training plan will be reviewed:
- Annually (minimum)
- When a new high-risk AI system is deployed
- When the EU AI Act or related guidance is updated
- After a significant AI incident
Last reviewed: _______________
Next scheduled review: _______________
Reviewed by: _______________
Template provided by VORLUX AI | vorluxai.com
Version 1.0 — April 2026 | EU AI Act Article 4 compliance template
This is guidance only, not legal advice. Consult qualified legal counsel for your specific situation.
Versión Española
Aviso: Esta plantilla se proporciona únicamente a título orientativo. No constituye asesoramiento jurídico. Las organizaciones deben consultar a asesores jurídicos cualificados para garantizar el cumplimiento de las leyes y normativas aplicables.
Plan de capacitación en inteligencia artificial (IA)
Ley de IA de la UE — Plantilla de cumplimiento del Artículo 4
Organización: _______________
Referencia del documento: AITP-[YYYY]-[NNN]
Versión: _______________
Preparado por: _______________
Aprobado por: _______________
Fecha de entrada en vigor: _______________
Fecha de revisión: _______________
Sección 1: Propósito y base legal
El artículo 4 de la Ley de IA de la UE exige que los proveedores y los responsables del despliegue de sistemas de IA adopten medidas para garantizar un nivel suficiente de alfabetización en IA entre su personal y todas las demás personas que se encarguen del funcionamiento y la utilización de sistemas de IA en su nombre.
Este plan de capacitación documenta cómo [Nombre de la Organización] cumple con esta obligación.
Sistemas de IA incluidos en el alcance:
| Nombre del sistema | Clasificación de riesgo | Usuarios principales | Fecha de puesta en marcha |
|---|---|---|---|
| | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | | |
| | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | | |
| | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | | |
Sección 2: Marco de competencias
2.1 Niveles de competencia
Esta organización define tres niveles de competencia en alfabetización en IA:
Nivel 1 — Básico (Fundamental)
Público objetivo: Todos los empleados que puedan encontrarse con resultados generados por IA o que trabajen en entornos donde operan sistemas de IA.
Competencias básicas a este nivel:
- Entiende qué es la IA y lo que no es (limitaciones, errores, alucinaciones)
- Puede identificar cuando se utiliza la IA en un proceso o herramienta
- Conoce su derecho a solicitar una revisión humana de las decisiones de la IA
- Entiende consideraciones básicas sobre privacidad de datos al usar herramientas de IA
- Conoce cómo reportar preocupaciones relacionadas con la IA internamente
Umbral de evaluación: 70% en la evaluación de fundamentos
Nivel 2 — Intermedio (Practicante)
Público objetivo: Empleados que utilizan regularmente sistemas de IA como parte de su función, incluidos profesionales de RRHH, agentes de atención al cliente, analistas y personal de operaciones.
Competencias básicas a este nivel:
- Todas las competencias del Nivel 1, más:
- Entiende las categorías de riesgo de la IA según la Ley de IA de la UE (inaceptable, alto, limitado, mínimo)
- Puede identificar potenciales sesgos en salidas de la IA y aplicar una evaluación crítica
- Conoce las políticas de gobernanza de la IA de la organización y los procedimientos de escalada
- Entiende los requisitos de calidad de datos para sistemas de IA
- Puede aplicar los protocolos de supervisión humana de la organización
- Entiende obligaciones de transparencia con respecto a las personas afectadas
Umbral de evaluación: 75% en la evaluación del practicante
Nivel 3 — Experto (Gobernanza y técnico)
Público objetivo: Desarrolladores de sistemas de IA, científicos de datos, oficiales de cumplimiento, consejeros legales, gerentes senior y miembros del comité de gobernanza de la IA.
Competencias básicas a este nivel:
- Todas las competencias del Nivel 1 y del Nivel 2, más:
- Conocimiento profundo de las obligaciones de la Ley de IA de la UE por tipo de actor (proveedor, responsable del despliegue, importador)
- Capacidad para conducir o revisar evaluaciones de impacto en derechos fundamentales
- Comprensión técnica de la evaluación de modelos, pruebas de sesgo y métricas de rendimiento
- Conoce los procedimientos de evaluación de la conformidad y los requisitos del marcado CE
- Entiende obligaciones de monitoreo post-comercialización
- Puede interpretar registros de auditoría de IA e informes de incidentes
- Entiende el panorama regulatorio internacional de la IA (Reino Unido, EE. UU., China, etc.)
Umbral de evaluación: 80% en la evaluación del experto + caso práctico
2.2 Mapeo de competencias por función
| Función | Nivel requerido | Itinerario formativo |
|---|---|---|
| Todos los empleados (general) | Nivel 1 | Itinerario de fundamentos |
| Ejecutivo / Alta dirección | Nivel 2 | Itinerario de liderazgo |
| RRHH / Equipos de personas | Nivel 2 | Itinerario de practicante |
| Atención al cliente / Operaciones | Nivel 2 | Itinerario de practicante |
| Marketing / Comunicación | Nivel 1–2 | Itinerario de practicante |
| Finanzas / |