AI Vendor Due Diligence Questionnaire
A comprehensive AI vendor due diligence questionnaire with more than 35 questions organised by compliance status, data processing, technical specifications, security, support, and contractual terms. Aligned with Article 25 of the EU AI Act.
This template includes both English and Spanish versions. Scroll down to find "Versión Española".
Disclaimer: This template is provided for guidance purposes only. It does not constitute legal advice. Organisations should consult qualified legal counsel to ensure compliance with applicable laws and regulations.
Template provided by VORLUX AI — vorluxai.com
AI Vendor Due Diligence Questionnaire
EU AI Act — Article 25 Compliance
Your Organisation: _______________
Vendor / Supplier Name: _______________
AI System / Product Evaluated: _______________
Version / Release: _______________
Questionnaire Reference: VDDQ-[YYYY]-[NNN]
Completed by (your side): _______________
Completed by (vendor): _______________
Date Sent: _______________
Date Returned: _______________
Evaluation Status: ☐ Pending ☐ In review ☐ Approved ☐ Rejected ☐ Conditional approval
Instructions for Vendor
Please complete all sections fully. Where a question is not applicable, write N/A and provide a brief reason. Attach supporting documentation where indicated. Incomplete submissions will be returned.
Responses should be provided by: [Name, Role, Email] at your organisation.
Supporting documents to attach:
- EU Declaration of Conformity (if applicable)
- Technical documentation summary
- Data Processing Agreement (DPA) draft
- Most recent third-party audit report
- Sub-processor list
- Penetration testing summary (last 12 months)
- SOC 2 or ISO 27001 certificate
- Incident history summary (last 24 months)
Part 1: Company and Product Overview
| # | Question | Vendor Response |
|---|---|---|
| 1.1 | Legal name of the company and country of incorporation | |
| 1.2 | Company registration number / tax ID | |
| 1.3 | Describe your company’s primary business activity and number of employees | |
| 1.4 | How long has the company been developing or providing AI products? | |
| 1.5 | Provide the name and version of the specific AI product/system being evaluated | |
| 1.6 | Describe the AI system’s primary purpose and intended use cases | |
| 1.7 | Describe the AI techniques used (e.g. LLM, supervised ML, computer vision, rules-based) | |
| 1.8 | List the three most similar deployers / customers currently using this system | |
Part 2: EU AI Act Compliance Status
| # | Question | Vendor Response | Evidence Required |
|---|---|---|---|
| 2.1 | Under the EU AI Act, how do you classify this AI system? (Unacceptable / High-risk / Limited / Minimal) | | Risk classification document |
| 2.2 | If high-risk: have you completed a conformity assessment? Provide reference and date. | | Conformity assessment certificate or reference |
| 2.3 | If high-risk: has a CE marking been applied? Provide the notified body reference if applicable. | | CE declaration |
| 2.4 | Have you completed an EU Declaration of Conformity? If yes, attach. | | DoC document |
| 2.5 | Is the system registered in the EU AI database (where required)? Provide registration number. | | Registration confirmation |
| 2.6 | Have you implemented a Quality Management System (QMS) as required by Article 17? Describe it briefly. | | QMS summary |
| 2.7 | Do you maintain a post-market monitoring plan as required by Article 72? Describe your monitoring activities. | | Post-market monitoring plan |
| 2.8 | Have you received any formal regulatory inquiries, warnings, or sanctions related to this AI system? | | Regulatory correspondence (if any) |
| 2.9 | Do you maintain a log of serious incidents related to this system? Have any been reported to authorities? | | Incident log summary |
| 2.10 | Describe how your system complies with Article 14 (human oversight measures). | | Human oversight documentation |
Part 3: Data Processing and Privacy
| # | Question | Vendor Response | Evidence Required |
|---|---|---|---|
| 3.1 | What personal data does the AI system process? List categories. | | Data inventory |
| 3.2 | Does the system process any special category data (health, biometrics, race, religion, etc.)? If yes, describe safeguards. | | DPIA or safeguards documentation |
| 3.3 | In which countries / regions is data stored, processed, or transferred? | | Data flow diagram |
| 3.4 | Do any third-country data transfers occur? If yes, what legal mechanism applies (e.g. SCCs, adequacy decision)? | | Transfer mechanism documentation |
| 3.5 | How is personal data used in training the AI model? Describe the training data pipeline. | | Training data governance policy |
| 3.6 | Can you confirm that training data does not include our organisation’s or our customers’ data without explicit consent? | | Written confirmation |
| 3.7 | Does the model retain or learn from live inference inputs? If yes, explain the mechanism and opt-out options. | | Model update / retention policy |
| 3.8 | What is your data retention policy for inference logs and outputs? | | Data retention schedule |
| 3.9 | Can you provide a Data Processing Agreement (DPA) that meets GDPR Article 28 requirements? | | DPA draft |
| 3.10 | Who are your sub-processors? Are they GDPR-compliant? Provide the sub-processor list. | | Sub-processor list + compliance confirmation |
Part 4: Technical Specifications and Model Performance
| # | Question | Vendor Response | Evidence Required |
|---|---|---|---|
| 4.1 | What performance benchmarks have been conducted? Provide accuracy, precision, recall, F1, or relevant metrics. | | Model evaluation report |
| 4.2 | On what dataset(s) was the model trained? Describe data sources, size, and vintage. | | Training data card |
| 4.3 | Has the model been tested for bias across protected characteristics (gender, age, ethnicity, disability)? Provide results. | | Bias evaluation report |
| 4.4 | Describe known limitations and failure modes of the system. | | Technical documentation |
| 4.5 | How often is the model retrained or updated? Describe the update process and version control. | | Model update policy |
| 4.6 | How do you detect and mitigate data drift or model degradation in production? | | Monitoring architecture document |
| 4.7 | Does the system produce explainable outputs? Can it generate reasons for individual decisions? | | Explainability documentation |
| 4.8 | Describe the system’s integration architecture (API, SDK, embedded model, etc.) and relevant API documentation. | | API / integration docs |
| 4.9 | What are the system’s SLA guarantees (uptime, response time, throughput)? | | SLA document |
| 4.10 | What is the process for handling performance degradation below agreed thresholds? | | Escalation / SLA breach process |
Part 5: Information Security
| # | Question | Vendor Response | Evidence Required |
|---|---|---|---|
| 5.1 | Do you hold ISO 27001 certification or equivalent? Provide certificate and scope. | | ISO 27001 certificate |
| 5.2 | Have you completed a SOC 2 Type II audit in the last 12 months? Share the report summary. | | SOC 2 summary |
| 5.3 | Describe your penetration testing programme. When was the last test? Who conducted it? | | Pen test summary |
| 5.4 | How is data encrypted in transit and at rest? Specify encryption standards used. | | Security architecture document |
| 5.5 | How is access to the AI system and its underlying data controlled? Describe your IAM approach. | | IAM policy |
| 5.6 | Have you assessed the system against adversarial attacks (prompt injection, model inversion, data poisoning)? | | Adversarial testing documentation |
| 5.7 | Describe your vulnerability disclosure and patch management process. | | Security policy |
| 5.8 | What is your security incident response time (detection to containment for critical issues)? | | Incident response SLA |
| 5.9 | Do you conduct background checks on employees with access to production systems and customer data? | | HR security policy summary |
| 5.10 | Provide your most recent security breach history summary (last 3 years). | | Incident summary |
Part 6: Support and Service Continuity
| # | Question | Vendor Response | Evidence Required |
|---|---|---|---|
| 6.1 | What support tiers do you offer and what are the response times for each? | | Support SLA |
| 6.2 | Do you offer a named account or technical success manager for enterprise customers? | | |
| 6.3 | What is your product roadmap for this system over the next 12–24 months? | | Roadmap document (NDA may apply) |
| 6.4 | What is your end-of-life / deprecation policy? How much notice will be given? | | Product lifecycle policy |
| 6.5 | Describe your business continuity and disaster recovery plan for this product. | | BC/DR plan summary |
| 6.6 | What is your Recovery Time Objective (RTO) and Recovery Point Objective (RPO)? | | BC/DR plan |
| 6.7 | What is your process for notifying customers of planned and unplanned downtime? | | Incident and maintenance notification process |
Part 7: Contractual and Legal Terms
| # | Question | Vendor Response |
|---|---|---|
| 7.1 | Are you willing to include EU AI Act compliance obligations in the contract? | ☐ Yes ☐ No ☐ Subject to negotiation |
| 7.2 | Do your terms include liability provisions for harm caused by AI system failure or error? | ☐ Yes ☐ No ☐ Partial |
| 7.3 | Are you willing to include audit rights for our organisation or regulators? | ☐ Yes ☐ No ☐ Subject to scope agreement |
| 7.4 | Do you offer an escrow arrangement for the AI model or system in the event of business failure? | ☐ Yes ☐ No ☐ On request |
| 7.5 | What governing law and jurisdiction apply to your contracts? Are EU/Spanish courts available? | |
| 7.6 | What are your data portability and exit provisions? Can we export our data and model outputs on termination? | |
| 7.7 | Do you have professional indemnity and cyber liability insurance? Provide limits. | |
| 7.8 | Are there any open legal disputes, regulatory investigations, or insolvency proceedings involving your company? | |
Part 8: Ethical AI and Governance
| # | Question | Vendor Response | Evidence Required |
|---|---|---|---|
| 8.1 | Does your company have an AI Ethics policy or code of conduct? Attach it. | | Ethics policy |
| 8.2 | Do you have a dedicated AI ethics board, committee, or responsible AI team? | | Governance structure |
| 8.3 | Have you committed to any external AI ethics frameworks (e.g. NIST AI RMF, IEEE, Partnership on AI)? | | |
| 8.4 | How do you engage with affected communities or civil society in the design of your AI systems? | | |
| 8.5 | Describe any measures taken to ensure the environmental sustainability of your AI systems (energy use, carbon footprint). | | |
Evaluation Scoring Rubric
Use this rubric to score vendor responses after review:
| Section | Max Score | Score Awarded | Notes |
|---|---|---|---|
| Part 1: Company Overview | 10 | | |
| Part 2: EU AI Act Compliance | 30 | | |
| Part 3: Data Processing & Privacy | 25 | | |
| Part 4: Technical Specifications | 20 | | |
| Part 5: Information Security | 20 | | |
| Part 6: Support & Continuity | 15 | | |
| Part 7: Contractual Terms | 15 | | |
| Part 8: Ethical AI & Governance | 10 | | |
| TOTAL | 145 | | |
Scoring Guide:
| Total Score | Recommendation |
|---|---|
| 130–145 | Strong approval — proceed with standard contractual safeguards |
| 110–129 | Conditional approval — resolve flagged gaps before contract |
| 85–109 | High-risk — significant remediation required before approval |
| Below 85 | Do not approve — critical gaps present |
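For teams that record rubric scores in a tracker or export, the arithmetic above can be sketched in a few lines of Python. This is an illustrative helper, not part of the template; the function and variable names (`SECTION_MAX`, `recommendation`) are invented here, while the section maxima and recommendation bands are taken directly from the rubric and scoring guide.

```python
# Illustrative scoring helper based on the evaluation rubric above.
# Section maxima (summing to 145) and recommendation bands come from the template.
SECTION_MAX = {
    "Part 1: Company Overview": 10,
    "Part 2: EU AI Act Compliance": 30,
    "Part 3: Data Processing & Privacy": 25,
    "Part 4: Technical Specifications": 20,
    "Part 5: Information Security": 20,
    "Part 6: Support & Continuity": 15,
    "Part 7: Contractual Terms": 15,
    "Part 8: Ethical AI & Governance": 10,
}

def recommendation(scores: dict) -> tuple:
    """Sum awarded section scores (validated against each section's maximum)
    and map the total to the template's recommendation bands."""
    total = 0
    for section, max_score in SECTION_MAX.items():
        awarded = scores.get(section, 0)  # missing sections count as 0
        if not 0 <= awarded <= max_score:
            raise ValueError(f"{section}: score {awarded} outside 0-{max_score}")
        total += awarded
    if total >= 130:
        band = "Strong approval"
    elif total >= 110:
        band = "Conditional approval"
    elif total >= 85:
        band = "High-risk"
    else:
        band = "Do not approve"
    return total, band
```

For example, a vendor scoring full marks everywhere except 20/30 on Part 2 and 15/25 on Part 3 totals 125, which falls in the 110–129 conditional approval band.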
Red Flags Log
Document any responses that require immediate escalation or disqualification:
| # | Question Reference | Red Flag Description | Escalated to | Resolution |
|---|---|---|---|---|
Final Assessment
Overall Score: _____ / 145
Recommendation: ☐ Approve ☐ Conditional Approval ☐ Reject
Summary of Key Findings:
Conditions for Approval (if conditional):
Completed by: _______________ Date: _______________
Reviewed by (Legal): _______________ Date: _______________
Approved by: _______________ Date: _______________
Template provided by VORLUX AI | vorluxai.com
Version 1.0 — April 2026 | EU AI Act Article 25 compliant template
This is guidance only, not legal advice. Consult qualified legal counsel for your specific situation.
Versión Española
Aviso importante: Este modelo se proporciona únicamente con fines orientativos. No constituye asesoramiento legal. Las organizaciones deben consultar a un asesor jurídico cualificado para garantizar el cumplimiento de las leyes y normativas aplicables.
Cuestionario de Diligencia Debida para Proveedores de IA
Ley de IA de la UE — Cumplimiento del Artículo 25
Su organización: _______________
Nombre del proveedor / suministrador: _______________
Sistema o producto de IA evaluado: _______________
Versión / Lanzamiento: _______________
Referencia del cuestionario: VDDQ-[YYYY]-[NNN]
Completado por (su lado): _______________
Completado por (proveedor): _______________
Fecha de envío: _______________
Fecha de devolución: _______________
Estado de la evaluación: ☐ Pendiente ☐ En revisión ☐ Aprobado ☐ Rechazado ☐ Aprobación condicional
Instrucciones para el proveedor
Complete todas las secciones en su totalidad. Si una pregunta no es aplicable, escriba N/A e indique brevemente el motivo. Adjunte la documentación de apoyo donde se indique. Las presentaciones incompletas serán devueltas.
Las respuestas deben ser proporcionadas por: [Nombre, Cargo, Correo electrónico] de su organización.
Documentos de apoyo para adjuntar:
- Declaración de conformidad UE (si es aplicable)
- Resumen de la documentación técnica
- Borrador del Acuerdo de Tratamiento de Datos (DPA)
- Informe de auditoría de terceros más reciente
- Lista de subprocesadores
- Resumen de las pruebas de penetración (últimos 12 meses)
- Certificado SOC 2 o ISO 27001
- Resumen del historial de incidentes (últimos 24 meses)
Parte 1: Visión general de la empresa y el producto
| # | Pregunta | Respuesta del proveedor |
|---|---|---|
| 1.1 | Nombre legal de la empresa y país de constitución | |
| 1.2 | Número de registro de la empresa / número de identificación fiscal | |
| 1.3 | Describa la actividad principal de la empresa y el número de empleados | |
| 1.4 | ¿Cuánto tiempo lleva la empresa desarrollando o proporcionando productos de IA? | |
| 1.5 | Proporcione el nombre y la versión del producto o sistema de IA específico que se está evaluando | |
| 1.6 | Describa el propósito principal del sistema de IA y los casos de uso previstos | |
| 1.7 | Describa las técnicas de IA utilizadas (por ejemplo, LLM, ML supervisado, visión por computadora, sistemas basados en reglas) | |
| 1.8 | Enumere los tres implementadores o clientes más similares que utilizan actualmente este sistema | |
Parte 2: Estado de cumplimiento de la Ley de IA de la UE
| # | Pregunta | Respuesta del proveedor | Documentación requerida |
|---|---|---|---|
| 2.1 | Según la Ley de IA de la UE, ¿cómo clasifica este sistema de IA? (Inaceptable / Alto riesgo / Limitado / Mínimo) | | Documento de clasificación de riesgos |
| 2.2 | Si es de alto riesgo: ¿ha completado una evaluación de conformidad? Proporcione la referencia y la fecha. | | Certificado o referencia de la evaluación de conformidad |
| 2.3 | Si es de alto riesgo: ¿se ha aplicado el marcado CE? Proporcione la referencia del organismo notificado si es aplicable. | | Declaración CE |
| 2.4 | ¿Ha completado una Declaración de Conformidad UE? En caso afirmativo, adjúntela. | | Documento DoC |
| 2.5 | ¿Está el sistema registrado en la base de datos de IA de la UE (cuando sea necesario)? Proporcione el número de registro. | | Confirmación de registro |
| 2.6 | ¿Ha implementado un Sistema de Gestión de la Calidad (SGC) según lo exigido por el Artículo 17? Descríbalo brevemente. | | Resumen del SGC |
| 2.7 | ¿Mantiene un plan de vigilancia poscomercialización según lo exigido por el Artículo 72? Describa sus actividades de seguimiento. | | Plan de vigilancia poscomercialización |
| 2.8 | ¿Ha recibido alguna consulta, advertencia o sanción regulatoria formal relacionada con este sistema de IA? | | Correspondencia regulatoria (si procede) |
| 2.9 | ¿Mantiene un registro de incidentes graves relacionados con este sistema? ¿Se ha notificado alguno a las autoridades? | | Resumen del registro de incidentes |
| 2.10 | Describa cómo su sistema cumple con el Artículo 14 (medidas de supervisión humana). | | Documentación de supervisión humana |
Parte 3: Procesamiento de datos y privacidad
| # | Pregunta | Respuesta del proveedor | Documentación requerida |
|---|---|---|---|
| 3.1 | ¿Qué datos personales procesa el sistema de IA? Enumere las categorías… |