
Prohibited AI Practices Checklist (Art. 5)

Complete self-assessment checklist for the 8 AI practices prohibited under Article 5 of the EU AI Act, with examples, edge cases, self-assessment questions, and incident-response procedures.

Bilingual / Bilingüe

This template includes both English and Spanish versions. Scroll down to find "Versión Española".

Prohibited AI Practices Checklist — EU AI Act Article 5

Disclaimer: This is guidance only, not legal advice. Consult qualified legal counsel for your specific compliance obligations. Operating a prohibited AI system carries significant penalties under the EU AI Act, including fines of up to EUR 35,000,000 or 7% of total worldwide annual turnover, whichever is higher.
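For scale, the two limbs of the Article 99(3) penalty cap combine as the higher of the two. A minimal sketch of that arithmetic (the function name `article5_max_fine` is our own illustration, not anything defined in the Act):

```python
def article5_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for an Article 5 infringement
    under Article 99(3) EU AI Act: EUR 35 million or 7% of total worldwide
    annual turnover for the preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For an undertaking with EUR 1 billion turnover, the 7% limb governs:
print(article5_max_fine(1_000_000_000))  # 70000000.0
```

Below EUR 500 million turnover the fixed EUR 35 million limb is the binding cap; above it, the 7% limb takes over.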

Template provided by VORLUX AI | vorluxai.com


Purpose and Scope

Article 5 of the EU AI Act bans certain AI practices outright — they cannot be authorised, exempted, or justified by proportionality. No conformity assessment procedure can make them lawful. This checklist must be completed:

  • Before developing or deploying any AI system
  • When the intended purpose or capabilities of an existing system change
  • When a new use case is identified for an existing AI system
  • Annually as part of ongoing compliance review
| Field | Value |
|-------|-------|
| AI System / Use Case Name | ___________________________ |
| Assessment Date | ____-__-__ |
| Assessor Name | ___________________________ |
| Assessor Role | ___________________________ |
| Reviewed by (Legal) | ___________________________ |
| Review Date | ____-__-__ |

How to Complete This Checklist

For each of the 8 prohibited practices:

  1. Read the prohibition description carefully
  2. Answer all self-assessment questions honestly
  3. Review the edge cases — if any apply to your system, seek legal advice before proceeding
  4. Record your determination: ☐ Practice does NOT apply | ☐ UNCERTAIN — escalate | ☐ Practice MAY apply — STOP

Any “UNCERTAIN” or “MAY apply” determination must be escalated to legal counsel before the system is developed, deployed, or continued in operation.


Prohibited Practice 1 — Subliminal or Manipulative Techniques

Article 5(1)(a)

The Prohibition

AI systems that deploy subliminal techniques beyond a person’s consciousness, or deliberately exploit psychological weaknesses or vulnerabilities of individuals or specific groups, in a way that is likely to cause harm to those persons by distorting their behaviour.

Self-Assessment Questions

| # | Question | Yes | No | Unsure |
|---|----------|-----|----|--------|
| 1.1 | Does the system present information or stimuli designed to operate below the level of conscious awareness? | ☐ | ☐ | ☐ |
| 1.2 | Does the system use psychological profiling to identify and exploit individual weaknesses? | ☐ | ☐ | ☐ |
| 1.3 | Is the system designed to produce behavioural changes in users without their awareness? | ☐ | ☐ | ☐ |
| 1.4 | Does the system use techniques specifically designed to bypass rational decision-making? | ☐ | ☐ | ☐ |
| 1.5 | Could the system cause persons to make choices that harm their interests without being aware they are being influenced? | ☐ | ☐ | ☐ |

Examples of Prohibited Conduct

  • Images flashed faster than the human eye can consciously perceive, designed to influence purchasing behaviour
  • AI that detects emotional vulnerability (e.g., grief, anxiety) from voice patterns and serves manipulative content at those moments
  • Personalised dark patterns that exploit cognitive biases identified from user data
  • Subliminal audio embedded in content to promote specific products or views

Edge Cases — Seek Legal Advice If:

  • Your system uses persuasion or recommendation engines — the line between lawful persuasion and manipulation depends on intent to harm and on bypassing consciousness
  • Your system targets advertising at users based on psychological profiles
  • Your system adjusts its approach based on detected emotional states

Determination

☐ Practice does NOT apply to this system
☐ UNCERTAIN — escalating to legal counsel on ____-__-__
☐ Practice MAY apply — STOP — do not proceed without legal clearance

Evidence / Rationale:

[Document your reasoning and supporting evidence]

Prohibited Practice 2 — Exploitation of Vulnerable Groups

Article 5(1)(b)

The Prohibition

AI systems that exploit any of the vulnerabilities of a specific group of persons due to their age, disability, or specific social or economic situation, in a way that is likely to cause those persons or third parties harm by distorting their behaviour.

Self-Assessment Questions

| # | Question | Yes | No | Unsure |
|---|----------|-----|----|--------|
| 2.1 | Does the system target or disproportionately reach children, elderly persons, or persons with cognitive disabilities? | ☐ | ☐ | ☐ |
| 2.2 | Does the system use age-specific or disability-specific psychological techniques to influence behaviour? | ☐ | ☐ | ☐ |
| 2.3 | Does the system target persons in economic difficulty with offers or recommendations that could cause harm? | ☐ | ☐ | ☐ |
| 2.4 | Has the system been tested for differential impact on vulnerable groups? | ☐ | ☐ | ☐ |
| 2.5 | Could a vulnerable person be harmed by acting on the system’s outputs or recommendations? | ☐ | ☐ | ☐ |

Examples of Prohibited Conduct

  • AI promoting high-interest loans to persons identified as financially distressed
  • Chatbots targeting minors using age-specific engagement techniques to drive purchases
  • Gaming AI exploiting addiction patterns identified in behavioural data
  • AI targeting persons with gambling addiction with personalised gambling content

Edge Cases — Seek Legal Advice If:

  • Your system serves or will foreseeably be used by minors or elderly persons
  • Your system makes financial product recommendations without assessing user vulnerability
  • Your system operates in social care, mental health support, or financial advice contexts

Determination

☐ Practice does NOT apply to this system
☐ UNCERTAIN — escalating to legal counsel on ____-__-__
☐ Practice MAY apply — STOP — do not proceed without legal clearance

Evidence / Rationale:

[Document your reasoning and supporting evidence]

Prohibited Practice 3 — Social Scoring by Public Authorities

Article 5(1)(c)

The Prohibition

AI systems used by public authorities (or on their behalf) for the evaluation or classification of natural persons or groups over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, leading to detrimental or unfavourable treatment that is either:

  • unrelated to the social context in which the data was generated, or
  • unjustified or disproportionate relative to the social behaviour.

Note: in the final text of the AI Act, this prohibition is not limited to public authorities; social scoring by private actors that leads to such treatment is banned on the same terms.

Self-Assessment Questions

| # | Question | Yes | No | Unsure |
|---|----------|-----|----|--------|
| 3.1 | Is this system used by or on behalf of a public authority? | ☐ | ☐ | ☐ |
| 3.2 | Does the system aggregate individual behaviour data over time to create a score or classification of persons? | ☐ | ☐ | ☐ |
| 3.3 | Are decisions or differential treatments made based on that score/classification? | ☐ | ☐ | ☐ |
| 3.4 | Could the treatment resulting from the score affect persons in a different social context from where the data was collected? | ☐ | ☐ | ☐ |
| 3.5 | Is the potential harm disproportionate to the social behaviour it purports to reflect? | ☐ | ☐ | ☐ |

Examples of Prohibited Conduct

  • A national government deploying a “citizen score” that affects access to public services based on social media activity
  • Tax authority systems that use predicted lifestyle scores to determine audit likelihood and deny benefits
  • Immigration services using algorithmic social scores generated from third-country behaviour data

Edge Cases — Seek Legal Advice If:

  • Your client is a public authority and your system produces any kind of person-level scoring or ranking
  • Your system is used in public service eligibility decisions
  • Your system produces outputs that could feed into governmental decision-making about individuals

Determination

☐ Practice does NOT apply to this system (not a public authority context)
☐ UNCERTAIN — escalating to legal counsel on ____-__-__
☐ Practice MAY apply — STOP — do not proceed without legal clearance

Evidence / Rationale:

[Document your reasoning and supporting evidence]

Prohibited Practice 4 — Real-Time Remote Biometric Identification in Public Spaces

Article 5(1)(d)

The Prohibition

The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific, narrowly defined circumstances:

Permitted exceptions (all conditions must be met):

  1. Targeted search for specific crime victims (missing children, trafficking victims)
  2. Prevention of specific imminent threat to life or physical safety or terrorist attack
  3. Detection, identification, or prosecution of perpetrators of specific serious criminal offences (life sentence crimes)

Even permitted use requires: Prior judicial or independent administrative authorisation (except in urgent cases, where retrospective authorisation is sought promptly).

Self-Assessment Questions

| # | Question | Yes | No | Unsure |
|---|----------|-----|----|--------|
| 4.1 | Does the system perform biometric identification (matching against a database)? | ☐ | ☐ | ☐ |
| 4.2 | Is identification performed in real-time (not post-hoc on stored footage)? | ☐ | ☐ | ☐ |
| 4.3 | Is the system used or intended to be used in publicly accessible spaces? | ☐ | ☐ | ☐ |
| 4.4 | Is the system used or intended to be used for law enforcement purposes? | ☐ | ☐ | ☐ |
| 4.5 | If exceptions apply, has prior judicial/administrative authorisation been obtained? | ☐ | ☐ | ☐ |

Examples of Prohibited Conduct

  • Deploying facial recognition cameras in a city centre to identify persons of interest from a criminal watchlist (without exception authorisation)
  • Real-time matching of faces at transport hubs against a general database
  • Using real-time biometric ID for general crime deterrence without a specific threat

Edge Cases — Seek Legal Advice If:

  • Your system involves any biometric processing in publicly accessible spaces
  • Your client is a law enforcement agency
  • Your system combines CCTV analysis with face database matching, even with a time delay
  • Your system operates at borders, airports, or other high-footfall public infrastructure

Determination

☐ Practice does NOT apply to this system
☐ UNCERTAIN — escalating to legal counsel on ____-__-__
☐ Practice MAY apply — STOP — do not proceed without legal clearance

Evidence / Rationale:

[Document your reasoning and supporting evidence]

Prohibited Practice 5 — AI-Generated or Manipulated “Deepfake” Biometric Data to Evade Identity Checks

Article 5(1)(e)

The Prohibition

AI systems specifically designed to generate or manipulate image, audio, or video content that features the likeness or voice of persons for the purpose of deceiving persons or automated systems to circumvent identity verification systems or other security mechanisms.

Self-Assessment Questions

| # | Question | Yes | No | Unsure |
|---|----------|-----|----|--------|
| 5.1 | Does the system generate synthetic face, voice, or body representations of real persons? | ☐ | ☐ | ☐ |
| 5.2 | Could the system’s outputs be used to deceive face recognition, voice authentication, or liveness detection systems? | ☐ | ☐ | ☐ |
| 5.3 | Has the system been designed to bypass automated security checks? | ☐ | ☐ | ☐ |
| 5.4 | Are there contractual or technical guardrails preventing use of the system for identity fraud? | ☐ | ☐ | ☐ |
| 5.5 | Has foreseeable misuse for identity deception been assessed and mitigated? | ☐ | ☐ | ☐ |

Examples of Prohibited Conduct

  • A deepfake video generation tool specifically marketed for bypassing KYC (Know Your Customer) verification
  • Voice cloning tools designed to deceive voice authentication systems
  • Face-swap tools marketed to help users pass facial liveness detection checks

Note: Legitimate use cases for synthetic media (entertainment, accessibility, privacy protection) are not prohibited by this article — the prohibition targets the specific purpose of circumventing identity/security checks.

Determination

☐ Practice does NOT apply to this system
☐ UNCERTAIN — escalating to legal counsel on ____-__-__
☐ Practice MAY apply — STOP — do not proceed without legal clearance

Evidence / Rationale:

[Document your reasoning and supporting evidence]

Prohibited Practice 6 — Emotion Recognition in Workplace and Education

Article 5(1)(f)

The Prohibition

AI systems that infer the emotions of natural persons in the workplace or in educational institutions, except where the AI system is intended for medical or safety reasons (e.g., detecting drowsiness in vehicle operators).

Self-Assessment Questions

| # | Question | Yes | No | Unsure |
|---|----------|-----|----|--------|
| 6.1 | Does the system attempt to detect, infer, or classify emotional states of persons? | ☐ | ☐ | ☐ |
| 6.2 | Is the system used or intended to be used in a workplace setting? | ☐ | ☐ | ☐ |
| 6.3 | Is the system used or intended to be used in an educational institution? | ☐ | ☐ | ☐ |
| 6.4 | If emotion recognition is used, is it solely for a documented medical or safety reason? | ☐ | ☐ | ☐ |
| 6.5 | If safety use, is it limited to that specific safety function (e.g., fatigue detection only)? | ☐ | ☐ | ☐ |

Examples of Prohibited Conduct

  • HR software that uses facial expression analysis to assess employee engagement or satisfaction
  • Examination proctoring software that flags emotional states as indicators of cheating
  • Call centre software that uses vocal analysis to infer agent emotional states for performance management

Examples of Permitted Use (Safety Exception)

  • Driver monitoring systems in commercial vehicles detecting drowsiness (medical/safety purpose)
  • Industrial machinery operator monitoring for fatigue in safety-critical roles

Edge Cases — Seek Legal Advice If:

  • Your system processes video or audio data of employees or students
  • Your system uses “engagement” or “attention” scoring in workplace or educational contexts
  • Your sentiment analysis tools are used in employment or educational assessment decisions

Determination

☐ Practice does NOT apply to this system
☐ Practice is for medical/safety reason only — exception applies (document below)
☐ UNCERTAIN — escalating to legal counsel on ____-__-__
☐ Practice MAY apply — STOP — do not proceed without legal clearance

Medical/Safety Exception Documentation (if applicable):

[Document the specific safety/medical purpose, how the exception is limited in scope,
and how use is controlled to prevent expansion beyond the excepted purpose]

Prohibited Practice 7 — Biometric Categorisation Based on Sensitive Characteristics

Article 5(1)(g)

The Prohibition

AI systems that categorise natural persons based on their biometric data in order to deduce or infer sensitive personal characteristics such as:

  • Race or ethnic origin
  • Political opinions
  • Trade union membership
  • Religious or philosophical beliefs
  • Sexual orientation or sex life

Self-Assessment Questions

| # | Question | Yes | No | Unsure |
|---|----------|-----|----|--------|
| 7.1 | Does the system process biometric data (facial features, gait, voice, fingerprints, etc.)? | ☐ | ☐ | ☐ |
| 7.2 | Does the system infer or predict any sensitive characteristic from biometric data? | ☐ | ☐ | ☐ |
| 7.3 | Is the categorisation used to make decisions about individuals based on those inferred characteristics? | ☐ | ☐ | ☐ |
| 7.4 | Even if not the primary purpose, could the system’s outputs reveal sensitive characteristics as a by-product? | ☐ | ☐ | ☐ |
| 7.5 | Have all model outputs been tested for inference of sensitive characteristics? | ☐ | ☐ | ☐ |

Examples of Prohibited Conduct

  • Facial analysis systems claiming to predict sexual orientation from facial geometry
  • AI systems inferring political views from face images for targeted political messaging
  • Race or ethnicity inference from facial or voice biometric data used in access control decisions

Determination

☐ Practice does NOT apply to this system
☐ UNCERTAIN — escalating to legal counsel on ____-__-__
☐ Practice MAY apply — STOP — do not proceed without legal clearance

Evidence / Rationale:

[Document your reasoning and supporting evidence]

Prohibited Practice 8 — Untargeted Facial Image Scraping

Article 5(1)(h)

The Prohibition

The creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Self-Assessment Questions

| # | Question | Yes | No | Unsure |
|---|----------|-----|----|--------|
| 8.1 | Does the system scrape or collect facial images from internet sources? | ☐ | ☐ | ☐ |
| 8.2 | Does the system collect facial images from CCTV or public camera footage? | ☐ | ☐ | ☐ |
| 8.3 | Are collected images used to build or expand a facial recognition database? | ☐ | ☐ | ☐ |
| 8.4 | Is the collection targeted (specific named individuals with lawful basis) or untargeted (bulk collection)? | ☐ | ☐ | ☐ |
| 8.5 | Do data subjects provide explicit consent for facial image collection and database use? | ☐ | ☐ | ☐ |

Examples of Prohibited Conduct

  • Automated web crawlers collecting social media profile photos to train facial recognition models
  • Systems harvesting CCTV footage to extract and catalogue faces without individual consent
  • Creating stock facial recognition training datasets from public internet images

Determination

☐ Practice does NOT apply to this system
☐ UNCERTAIN — escalating to legal counsel on ____-__-__
☐ Practice MAY apply — STOP — do not proceed without legal clearance

Evidence / Rationale:

[Document your reasoning and supporting evidence]

Overall Assessment Summary

| Prohibited Practice | Article | Determination |
|---------------------|---------|---------------|
| 1. Subliminal/Manipulative Techniques | 5(1)(a) | ☐ Does not apply ☐ Uncertain ☐ May apply |
| 2. Exploitation of Vulnerable Groups | 5(1)(b) | ☐ Does not apply ☐ Uncertain ☐ May apply |
| 3. Social Scoring by Public Authorities | 5(1)(c) | ☐ Does not apply ☐ Uncertain ☐ May apply |
| 4. Real-Time Biometric ID in Public Spaces | 5(1)(d) | ☐ Does not apply ☐ Uncertain ☐ May apply |
| 5. Identity Deception via Biometric Synthesis | 5(1)(e) | ☐ Does not apply ☐ Uncertain ☐ May apply |
| 6. Emotion Recognition (Workplace/Education) | 5(1)(f) | ☐ Does not apply ☐ Uncertain ☐ May apply |
| 7. Biometric Categorisation (Sensitive Chars.) | 5(1)(g) | ☐ Does not apply ☐ Uncertain ☐ May apply |
| 8. Untargeted Facial Image Scraping | 5(1)(h) | ☐ Does not apply ☐ Uncertain ☐ May apply |

Overall Outcome:

☐ CLEAR — No prohibited practices identified. Proceed with high-risk classification and conformity assessment.
☐ UNCERTAIN — One or more items require legal clarification before proceeding.
☐ PROHIBITED — One or more prohibited practices identified. The AI system or specific use case must be discontinued or fundamentally redesigned.


What to Do If a Prohibited Practice Is Detected

Immediate Actions (within 24 hours)

  1. STOP deployment and development activities that involve the prohibited practice
  2. Notify the project lead and senior management immediately
  3. Document the finding in writing with timestamp
  4. Preserve evidence of the prohibited practice (do not delete logs or code)
  5. Engage legal counsel immediately

Short-Term Actions (within 72 hours)

  1. Assess whether any data subjects have already been affected
  2. Consider whether data protection authority notification is required (GDPR Art. 33/34)
  3. Identify scope — is the prohibition in the core design, or in a specific use case?
  4. Determine whether the system can be redesigned to eliminate the prohibited practice
  5. If system has been placed on the EU market, assess market withdrawal obligations

Escalation Matrix

| Situation | Who to Notify | Timeline |
|-----------|---------------|----------|
| Potential prohibited practice identified | Legal counsel, DPO | Immediately |
| Confirmed prohibited practice | Senior management, Legal, DPO | Within 24 hours |
| Prohibited system placed on market | National Market Surveillance Authority | As required by law |
| Data subjects harmed | DPA (if GDPR breach involved), Legal | 72 hours (GDPR) |
| Employees aware of prohibited use | HR, Legal, Whistleblower process | Immediately |

Redesign Options

If a prohibited practice is identified but the underlying business need is legitimate, consider these alternatives:

| Prohibited Approach | Possible Lawful Alternative |
|---------------------|-----------------------------|
| Subliminal manipulation | Transparent persuasion with disclosure; A/B testing without exploitative dark patterns |
| Vulnerable group targeting | General service with accessibility features; explicit safeguards for vulnerable users |
| Social scoring | Transparent, purpose-limited creditworthiness scoring with GDPR lawful basis |
| Real-time biometric ID | Post-hoc analysis with judicial authorisation; alternative identification methods |
| Biometric synthesis for fraud | Improved liveness detection; multi-factor authentication |
| Workplace emotion recognition | Voluntary wellbeing surveys; aggregate team analytics without individual profiling |
| Sensitive biometric categorisation | Remove biometric inference; use declared/consented attributes only |
| Facial image scraping | Licensed training datasets; synthetic data generation; consensual data collection |

Sign-Off

| Role | Name | Signature | Date |
|------|------|-----------|------|
| Assessor | _______________ | _______________ | ____-__-__ |
| Legal Counsel / DPO | _______________ | _______________ | ____-__-__ |
| Authorised Signatory | _______________ | _______________ | ____-__-__ |

Next scheduled review: ____-__-__


Template provided by VORLUX AI | vorluxai.com | This is guidance only, not legal advice.


Versión Española

Lista de verificación de prácticas de IA prohibidas — Artículo 5 del Reglamento de IA de la UE

Aviso: Esto es solo una guía, no asesoramiento legal. Consulte a un abogado cualificado sobre sus obligaciones de cumplimiento específicas. Operar un sistema de IA prohibido conlleva sanciones significativas bajo el Reglamento de IA de la UE, incluidas multas de hasta 35.000.000 EUR o el 7% del volumen de negocios anual mundial total, la que sea mayor.

Plantilla proporcionada por VORLUX AI | vorluxai.com

Propósito y alcance

El artículo 5 del Reglamento de IA de la UE prohíbe de plano ciertas prácticas de IA: no pueden autorizarse, eximirse ni justificarse por proporcionalidad. Ningún procedimiento de evaluación de la conformidad puede hacerlas legales. Esta lista debe completarse:

  • Antes de desarrollar o implementar cualquier sistema de IA
  • Cuando el propósito o capacidades previstas de un sistema existente cambien
  • Cuando se identifique una nueva aplicación para un sistema de IA existente
  • Anualmente como parte de la revisión continua de cumplimiento

| Campo | Valor |
|-------|-------|
| Nombre del sistema / caso de uso | ___________________________ |
| Fecha de evaluación | ____-__-__ |
| Nombre del evaluador | ___________________________ |
| Rol del evaluador | ___________________________ |
| Revisado por (Legal) | ___________________________ |
| Fecha de revisión | ____-__-__ |

Cómo completar esta lista

Para cada una de las 8 prácticas prohibidas:
  1. Lea la descripción de la prohibición con atención
  2. Responda todas las preguntas de autoevaluación con honestidad
  3. Revise los casos límite — si alguno se aplica a su sistema, busque asesoramiento legal antes de proceder
  4. Registre su determinación: ☐ La práctica NO se aplica | ☐ INCIERTO — escalar | ☐ La práctica PUEDE aplicarse — DETENER

Cualquier determinación de «INCIERTO» o «PUEDE aplicarse» debe elevarse a un asesor legal antes de que el sistema se desarrolle, se implemente o continúe en operación.

Práctica prohibida 1 — Técnicas subliminales o manipuladoras

Artículo 5(1)(a)

La prohibición

Los sistemas de IA que despliegan técnicas subliminales más allá del nivel de consciencia de una persona, o que explotan deliberadamente debilidades psicológicas o vulnerabilidades de individuos o de grupos específicos, de un modo que probablemente cause daño a esas personas al distorsionar su comportamiento.

Preguntas de autoevaluación

| # | Pregunta | Sí | No | Incierto |
|---|----------|----|----|----------|
| 1.1 | ¿El sistema presenta información o estímulos diseñados para operar por debajo del nivel de consciencia? | ☐ | ☐ | ☐ |
| 1.2 | ¿El sistema utiliza perfilado psicológico para identificar y explotar debilidades individuales? | ☐ | ☐ | ☐ |
| 1.3 | ¿El sistema está diseñado para producir cambios de comportamiento en los usuarios sin su conocimiento? | ☐ | ☐ | ☐ |
| 1.4 | ¿El sistema utiliza técnicas diseñadas específicamente para eludir la toma racional de decisiones? | ☐ | ☐ | ☐ |
| 1.5 | ¿Podría el sistema llevar a las personas a tomar decisiones que perjudiquen sus intereses sin ser conscientes de que están siendo influidas? | ☐ | ☐ | ☐ |

Práctica prohibida 3 — Calificación social por autoridades públicas

Artículo 5(1)(c)

Ejemplos de conducta prohibida

  • Un gobierno nacional que despliega una «puntuación ciudadana» que afecta al acceso a servicios públicos basándose en la actividad en redes sociales
  • Sistemas de autoridades fiscales que utilizan puntuaciones de estilo de vida predichas para determinar la probabilidad de auditoría y denegar prestaciones
  • Servicios de inmigración que utilizan puntuaciones sociales generadas algorítmicamente a partir de datos de comportamiento de terceros países

Casos límite — Busque asesoramiento legal si:

  • Su cliente es una autoridad pública y su sistema produce cualquier tipo de calificación o clasificación de personas
  • Su sistema se utiliza en decisiones de elegibilidad para servicios públicos
  • Su sistema produce salidas que podrían alimentar las decisiones gubernamentales sobre individuos

Determinación

☐ La práctica NO se aplica a este sistema (no es un contexto de autoridad pública)
☐ INCIERTO — escalando a asesoría legal el ____-__-__
☐ La práctica PUEDE aplicarse — DETENER — no proceda sin autorización legal

Evidencia / Razonamiento:

[Documente su razonamiento y evidencia de apoyo]

Práctica prohibida 4 — Identificación biométrica remota en tiempo real en espacios públicos

Artículo 5(1)(d)

La prohibición

El uso de sistemas de identificación biométrica remota en tiempo real en espacios accesibles al público para fines de aplicación de la ley, excepto en circunstancias específicas y estrechamente definidas:

Excepciones permitidas (deben cumplirse todas las condiciones):

  1. Búsqueda dirigida de víctimas de delitos específicos (niños desaparecidos, víctimas de trata)
  2. Prevención de amenazas específicas inminentes a la vida o seguridad física o ataque terrorista
  3. Detección, identificación o enjuiciamiento de los autores de delitos graves específicos (delitos castigados con cadena perpetua)

Incluso el uso permitido requiere:

Autorización judicial o administrativa independiente previa (excepto en casos urgentes, en los que la autorización retrospectiva se solicita con prontitud).

Preguntas de autoevaluación

| # | Pregunta | Sí | No | Incierto |
|---|----------|----|----|----------|
| 4.1 | ¿El sistema realiza identificación biométrica (coincidencia con una base de datos)? | ☐ | ☐ | ☐ |
| 4.2 | ¿La identificación se realiza en tiempo real (no a posteriori sobre grabaciones almacenadas)? | ☐ | ☐ | ☐ |
| 4.3 | ¿El sistema se utiliza o está destinado a utilizarse en espacios accesibles al público? | ☐ | ☐ | ☐ |
| 4.4 | ¿El sistema se utiliza o está destinado a utilizarse con fines de aplicación de la ley? | ☐ | ☐ | ☐ |
| 4.5 | Si se aplican excepciones, ¿se ha obtenido autorización judicial/administrativa previa? | ☐ | ☐ | ☐ |

Ejemplos de conducta prohibida

  • Desplegar cámaras de reconocimiento facial en un centro urbano para identificar a personas de interés incluidas en una lista de vigilancia criminal (sin autorización de excepción)
  • Coincidencia en tiempo real de rostros en puntos de transporte contra una base de datos general
  • Uso de identificación biométrica en tiempo real para la disuasión general del delito sin una amenaza específica

Práctica prohibida 7 — Categorización biométrica basada en características sensibles

Artículo 5(1)(g)

La prohibición

Sistemas de IA que categorizan a personas físicas basándose en sus datos biométricos con el fin de deducir o inferir características personales sensibles como:

  • Raza o origen étnico
  • Opiniones políticas
  • Afiliación sindical
  • Creencias religiosas o filosóficas
  • Orientación sexual o vida sexual

Preguntas de autoevaluación

| # | Pregunta | Sí | No | Incierto |
|---|----------|----|----|----------|
| 7.1 | ¿El sistema procesa datos biométricos (rasgos faciales, marcha, voz, huellas dactilares, etc.)? | ☐ | ☐ | ☐ |
| 7.2 | ¿El sistema infiere o predice alguna característica sensible a partir de datos biométricos? | ☐ | ☐ | ☐ |
| 7.3 | ¿La categorización se utiliza para tomar decisiones sobre individuos basadas en esas características inferidas? | ☐ | ☐ | ☐ |
| 7.4 | Aunque no sea el propósito principal, ¿podrían los resultados del sistema revelar características sensibles como subproducto? | ☐ | ☐ | ☐ |
| 7.5 | ¿Se han probado todos los resultados del modelo frente a la inferencia de características sensibles? | ☐ | ☐ | ☐ |

Ejemplos de Conducta Prohibida

  • Sistemas de análisis facial que pretenden predecir la orientación sexual a partir de geometría facial
  • Sistemas de IA que infieren opiniones políticas a partir de imágenes faciales para mensajería política dirigida
  • Inferencia de raza o etnia a partir de datos biométricos faciales o vocales utilizados en decisiones de control de acceso

Práctica prohibida 8 — Scraping no dirigido de imágenes faciales

Artículo 5(1)(h)

La prohibición

La creación o expansión de bases de datos de reconocimiento facial mediante el scraping no dirigido de imágenes faciales de internet o de grabaciones de CCTV.

Ejemplos de conducta prohibida

  • Crawlers web automatizados que recopilan fotos de perfil de redes sociales para entrenar modelos de reconocimiento facial
  • Sistemas que recolectan grabaciones de CCTV para extraer y catalogar rostros sin el consentimiento individual
  • Creación de conjuntos de datos de entrenamiento de reconocimiento facial a partir de imágenes públicas de internet