Publication date: May 23, 2025
The ability to create and implement new ideas is one of the most important factors that helps companies stand out in the market and succeed. Small and medium-sized enterprises (SMEs) play a particularly important role in economic development: they often employ people in their local communities, adapt quickly to change and are the source of many modern solutions. According to the definition in Commission Recommendation 2003/361, the SME category comprises enterprises with fewer than 250 employees whose annual turnover does not exceed EUR 50 million or whose annual balance sheet total does not exceed EUR 43 million. Within this category, a distinction is made between microenterprises (fewer than 10 employees), small enterprises (fewer than 50 employees) and medium-sized enterprises (fewer than 250 employees).
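The thresholds above can be expressed as a simple classification rule. The sketch below is illustrative only and uses just the figures quoted here; the Recommendation also applies separate financial ceilings to the micro and small categories, which are omitted, and the function name is our own:

```python
def classify_enterprise(staff: int, turnover_m_eur: float, balance_m_eur: float) -> str:
    """Classify an enterprise using the headline thresholds of
    Commission Recommendation 2003/361 as summarised in the text.

    An enterprise falls within the SME category when it has fewer than
    250 employees and either its annual turnover does not exceed
    EUR 50 million or its annual balance sheet total does not exceed
    EUR 43 million. Sub-categories are distinguished by headcount only
    in this simplified sketch.
    """
    # Outside the SME category: headcount too high, or both financial
    # ceilings exceeded.
    if staff >= 250 or (turnover_m_eur > 50 and balance_m_eur > 43):
        return "large enterprise"
    if staff < 10:
        return "microenterprise"
    if staff < 50:
        return "small enterprise"
    return "medium-sized enterprise"
```

For example, a firm with 200 employees, EUR 40 million turnover and a EUR 35 million balance sheet total would be classified as a medium-sized enterprise.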
In order to support innovation in SMEs and other economic operators, the European Union has introduced various measures and instruments supporting technological development, market access and compliance with legal regulations. One of the most innovative tools in this area is the regulatory sandbox, which allows innovative solutions, including systems based on artificial intelligence, to be tested under controlled conditions while maintaining compliance with applicable regulations and the protection of fundamental rights. Measures in support of innovation, with particular emphasis on SMEs, including start-ups, are regulated in Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the AI Act).
REGULATORY SANDBOXES
Artificial Intelligence (AI) regulatory sandboxes, as set out in Article 57 of Regulation (EU) 2024/1689 of the European Parliament and of the Council, are a new mechanism to support the safe and compliant development of innovative AI systems in the European Union. Their aim is to create a controlled environment conducive to testing, training, validating and preparing AI systems for market launch, while respecting fundamental human rights, health and safety. Such testing takes place for a limited period of time before AI systems are placed on the market or put into service in accordance with a specific sandbox roadmap. In accordance with Article 57(5), such sandboxes may include real-world testing supervised by the competent authorities within the sandbox. Those authorities provide participants with support, supervision and guidance on compliance with applicable rules, including the identification of risks and the effectiveness of measures to mitigate those risks.
According to the Regulation, each Member State is required to establish at least one regulatory sandbox by 2 August 2026. This obligation can be fulfilled by participating in an existing sandbox, provided that it ensures equivalent national coverage. It is also possible to create sandboxes together with other Member States, and sandboxes can be created not only at national level but also at regional or local level. For Union institutions, bodies, offices and agencies, the European Data Protection Supervisor may establish its own sandbox. Member States are required to ensure adequate resources and cooperation between the authorities supervising the different aspects of the operation of sandboxes. The European Commission can support Member States by providing advice, technical assistance and tools enabling the effective functioning of sandboxes.
Competent authorities provide, where appropriate, guidance, supervision and support within the AI regulatory sandbox, with a view to identifying risks. In addition, competent authorities are required to provide sandbox participants with guidance on the requirements of the Regulation and, upon request, to provide written confirmation of the activities carried out. Final participation reports can be used in conformity assessment or market surveillance procedures. With the consent of the provider and the competent authority, these documents can be made available to the European Commission, the AI Board or even the public via the single information platform.
The sandboxes created are intended to contribute to the following objectives (Article 57(9)):
In situations where AI systems tested in the sandbox require the processing of personal data or fall under the competence of other supervisory authorities, those authorities must be involved in the activities of the AI regulatory sandbox and in the supervision of the relevant aspects, in line with their respective tasks and mandates. At the same time, competent national authorities retain the right to temporarily suspend or terminate testing in the event of a significant risk to health, safety or fundamental rights that cannot be effectively mitigated. They must inform the AI Office of their decision.
According to the Regulation, sandboxes should be designed to support cross-border cooperation and enable coordination between competent authorities within the AI Board. Information on the sandboxes established should be communicated to the AI Office and the AI Board, which maintain a public list of sandboxes and may provide support and guidance to Member States. Competent national authorities are required to submit annual reports on the activities of the sandboxes to the AI Office and the AI Board and to publish them (or summaries thereof) online. These reports shall include information on the progress and results of the implementation of the sandboxes, including best practices, incidents, lessons learned and recommendations on the establishment of regulatory sandboxes and, where appropriate, recommendations on the application and possible review of the Regulation. In order to increase transparency and accessibility, the European Commission is also to create a single, dedicated digital interface containing all relevant information on AI regulatory sandboxes. It will enable stakeholders to interact with the relevant authorities and obtain guidance on the compliance of new AI-based products, services and business models.
Participation in the regulatory sandbox does not exempt providers from civil liability for damage caused to third parties as a result of experiments conducted in the sandbox. However, as long as participants act in good faith and comply with the sandbox plan and the terms and conditions of participation, they are not subject to administrative fines for possible infringements of the Regulation.
In order to avoid fragmentation of sandbox systems in the EU, Article 58 of the Regulation provides for the adoption by the Commission of implementing acts laying down detailed rules for their operation. These acts are to set out common guidelines on, among others, the eligibility of participants, application procedures, rules for participation, supervision, exiting the sandbox and conditions for its termination. These rules are also to ensure that:
Where appropriate, prospective providers in AI regulatory sandboxes are directed to pre-deployment services and other value-added services, such as assistance with standardisation and certification documents, testing and experimentation facilities, European Digital Innovation Hubs and centres of excellence.
In addition, pursuant to Article 60 of the Regulation, the Commission shall specify, by means of implementing acts, the detailed elements of the plan for testing high-risk AI systems in real-world conditions outside AI regulatory sandboxes. Such testing may be carried out by providers or prospective providers of the high-risk AI systems listed in Annex III to the Regulation. These include AI systems from the following areas:
These tests can be conducted at any time before the AI system is placed on the market or put into service.
Providers may only conduct tests of high-risk AI systems in real-world conditions if they meet all the requirements of Article 60(4) of the Regulation:
The consent of participants in tests under real-world conditions must be informed. According to Article 61, this means that it must be voluntary. As to form, the consent must be dated and documented, and the test participant or their legal representative must receive a copy. In addition, participants must be duly informed in a concise, clear, adequate and understandable manner about:
Real-world test participants, or, where applicable, their legally designated representatives, may, without any resulting disadvantage and without having to provide any justification, withdraw from the tests at any time by revoking their informed consent. In addition, they may request the immediate and permanent deletion of their personal data. Revoking informed consent does not affect activities already carried out. Providers or prospective providers must notify the national market surveillance authority in the Member State concerned of the suspension or termination of the tests and of the final results. These rules aim to ensure the safe and responsible testing of AI systems while protecting the rights of individuals and the public interest.
Further measures for providers and deployers, in particular SMEs, including start-ups, are listed in Article 62 of the Regulation. According to this provision, Member States are required to ensure that SMEs, including start-ups, that have a registered office or a branch in the Union have priority access to AI regulatory sandboxes, provided that they fulfil the eligibility conditions and selection criteria. It should be noted, however, that priority access does not preclude other SMEs from accessing AI regulatory sandboxes. Furthermore, Member States must organise dedicated information and training events on the application of the Regulation, adapted to the needs of SMEs. They are also required to use existing dedicated channels and, where appropriate, establish new channels of communication with SMEs and local public authorities. These channels are intended to provide advice and answer questions on the implementation of the Regulation.
It is also worth mentioning that the AI Office has certain obligations of its own. These include providing standardised templates for the areas covered by Regulation 2024/1689, as specified by the AI Board in its request. The AI Office also develops and operates a single information platform providing all operators throughout the Union with relevant information on the Regulation, and organises appropriate information campaigns to raise awareness of the obligations it imposes. Furthermore, the AI Office assesses and promotes the convergence of best practices in public procurement procedures for AI systems.
In summary, supporting innovation, especially in small and medium-sized companies, is very important for the development of a modern economy. By creating regulatory sandboxes, the European Union shows that it cares about the safe and responsible introduction of new technologies such as artificial intelligence. Thanks to sandboxes, companies can test their ideas in real conditions while benefiting from the support and oversight of the appropriate institutions. This is a good solution especially for small companies and start-ups, which often have interesting ideas but fewer resources to implement them. Such activities help not only to develop new technologies, but also to build social trust and support cooperation between companies and public authorities. If everything works well, Europe can become a strong and modern centre of innovation.