
The SME AI Conformity Tool is now available
10/03/2025
2 minutes
The SME assessment tool for AI requirements, developed by SBS and DIGITAL SME, navigates users through a structured series of questions, categorizing AI systems into the four risk levels identified by the AI Act.
Each question addresses the AI system’s functionality, application domain, and potential impact. Upon completing the assessment, users receive a report containing information on their system’s risk level, relevant resources, and advice on how to improve their tools.
Assess the conformity of your AI system with the AI Act here: AI Act Conformity Tool – European DIGITAL SME Alliance
The EU AI Act (“AI Act”) aims to ensure that Europeans can trust the benefits AI offers. While many AI systems present little to no risk and have the potential to address significant societal challenges, some systems pose risks that need to be managed to prevent undesirable outcomes.
The AI Act categorizes AI systems into four risk levels:
- Unacceptable risk: Systems banned outright, such as government social scoring or toys promoting dangerous behaviour.
- High risk: Systems in critical areas (e.g., healthcare, transport, justice) subject to strict requirements, including risk assessment, high-quality data, human oversight, and transparency.
- Limited risk: Systems with transparency obligations to inform users when interacting with AI (e.g., chatbots, deepfakes).
- Minimal or no risk: Systems freely usable, such as video games or spam filters.
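To make the categorization concrete, here is a minimal, purely illustrative sketch in Python of how a questionnaire might map answers onto the four risk levels. The question keys (`social_scoring`, `domain`, `interacts_with_users`) and the branching rules are invented for illustration; they do not reflect the actual logic of the SBS/DIGITAL SME tool or the legal tests in the AI Act.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk levels defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # freely usable

def classify(answers: dict) -> RiskLevel:
    # Hypothetical branching: the real tool's questions and ordering
    # are not published here, so treat this as a toy decision flow.
    if answers.get("social_scoring"):
        return RiskLevel.UNACCEPTABLE
    if answers.get("domain") in {"healthcare", "transport", "justice"}:
        return RiskLevel.HIGH
    if answers.get("interacts_with_users"):  # e.g. chatbots, deepfakes
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

print(classify({"domain": "healthcare"}))  # RiskLevel.HIGH
```

The key point this sketch captures is that the assessment is ordered: an outright prohibition trumps any high-risk classification, which in turn trumps mere transparency obligations, with minimal risk as the default.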
Each category entails different legal requirements, which vary according to the stakeholder’s role in the AI value chain.
AI systems on the market are subject to strict oversight: authorities conduct market surveillance, deployers ensure human oversight and monitoring, and providers maintain post-market monitoring and report incidents.
For general-purpose AI models, the AI Act introduces transparency obligations and additional risk management measures for highly capable models. These include self-assessment, systemic risk mitigation, incident reporting, model testing, evaluations, and cybersecurity requirements to ensure trustworthy use.
The AI Act entered into force on 1 August 2024 and will be fully applicable two years later, with some exceptions: prohibitions take effect after six months; the governance rules and the obligations for general-purpose AI models become applicable after 12 months; and the rules for AI systems embedded into regulated products apply after 36 months. To ease the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that supports future implementation and invites AI developers from Europe and beyond to comply with the key obligations of the AI Act ahead of time.