KG LEGAL \ INFO

MEASURES SUPPORTING INNOVATION – Regulatory sandboxes. Testing innovative solutions, including systems based on artificial intelligence, in controlled conditions.

Publication date: May 23, 2025

The ability to create and implement new ideas is one of the most important factors that helps companies stand out on the market and succeed. Small and medium-sized enterprises (SMEs) play a particularly important role in the development of the economy: they often employ people in their local communities, adapt quickly to change and are the source of many modern solutions. According to the definition in Commission Recommendation 2003/361, the SME category comprises enterprises which employ fewer than 250 persons and whose annual turnover does not exceed EUR 50 million or whose annual balance sheet total does not exceed EUR 43 million. Within this category, a distinction is made between microenterprises (fewer than 10 employees), small enterprises (fewer than 50 employees) and medium-sized enterprises (fewer than 250 employees).
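The Recommendation's headcount and financial ceilings can be read as a simple decision rule. The sketch below is an illustrative simplification in Python (thresholds taken from Recommendation 2003/361; it deliberately ignores the Recommendation's rules on autonomous, partner and linked enterprises, which can change the final classification):

```python
# Illustrative sketch of the SME size categories in Commission
# Recommendation 2003/361 (simplified: the rules on autonomous,
# partner and linked enterprises are ignored here).
def classify_enterprise(headcount: int, turnover_m_eur: float,
                        balance_sheet_m_eur: float) -> str:
    """Return the size category for a given staff headcount and
    annual turnover / balance sheet total (both in EUR million)."""
    if headcount < 10 and (turnover_m_eur <= 2 or balance_sheet_m_eur <= 2):
        return "micro"
    if headcount < 50 and (turnover_m_eur <= 10 or balance_sheet_m_eur <= 10):
        return "small"
    if headcount < 250 and (turnover_m_eur <= 50 or balance_sheet_m_eur <= 43):
        return "medium"
    return "large"  # outside the SME category
```

For example, a firm with 40 staff and EUR 9 million turnover falls in the "small" category, while a firm with 260 staff falls outside the SME category regardless of its financials.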

In order to support innovation in SMEs and other economic entities, the European Union introduces various measures and instruments supporting technological development, market access and compliance with legal regulations. One of the most innovative tools in this area is the regulatory sandbox, which allows innovative solutions, including systems based on artificial intelligence, to be tested under controlled conditions while maintaining compliance with applicable regulations and the protection of fundamental rights. Measures supporting innovation, with particular emphasis on SMEs, including start-ups, are regulated in Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the AI Act).

REGULATORY SANDBOXES

Artificial Intelligence (AI) regulatory sandboxes, as set out in Article 57 of Regulation (EU) 2024/1689 of the European Parliament and of the Council, are a new mechanism to support the safe and compliant development of innovative AI systems in the European Union. Their aim is to create a controlled environment conducive to testing, training, validating and preparing AI systems for market launch, while respecting fundamental rights, health and safety. Such testing takes place for a limited period of time before AI systems are placed on the market or put into service, in accordance with a specific sandbox plan. In accordance with Article 57(5), such sandboxes may include real-world testing supervised by the competent authorities within the sandbox. Those authorities provide participants with support, supervision and guidance on compliance with applicable rules, including the identification of risks and the effectiveness of measures to mitigate those risks.

According to the Regulation, each Member State is required to establish at least one regulatory sandbox by 2 August 2026. This obligation may be fulfilled by participating in an existing sandbox, provided that it ensures equivalent national coverage. It is also possible to create sandboxes together with other Member States. Sandboxes can be created not only at national level, but also at regional or local level. For Union institutions, bodies, offices and agencies, the European Data Protection Supervisor may establish its own sandbox. Member States are required to ensure adequate resources and cooperation between the authorities supervising the different aspects of the operation of sandboxes. The European Commission may support Member States by providing advice, technical assistance and tools enabling the effective functioning of sandboxes.

Competent authorities provide, where appropriate, guidance, supervision and support within the AI regulatory sandbox, with the aim of identifying risks. In addition, competent authorities are required to provide sandbox participants with guidance on the requirements of the Regulation and, upon request, written proof of the activities carried out. Final participation reports can be used in conformity assessment or market surveillance procedures. With the consent of the provider and the competent authority, these documents can be made available to the European Commission, the AI Board or even the public via the single information platform.

The sandboxes created are intended to contribute to the following goals (Article 57(9)):

  1. increasing legal certainty,
  2. supporting the exchange of best practices through cooperation with authorities participating in the AI regulatory sandbox,
  3. strengthening innovation and competitiveness and facilitating the development of the AI ecosystem,
  4. contributing to evidence-based learning for regulatory actions,
  5. accelerating access to the EU market for innovative AI systems, in particular when provided by SMEs, including start-ups.

In situations where AI systems tested in the sandbox require the processing of personal data or fall under the competence of other supervisory authorities, those authorities must be associated with the operation of the AI regulatory sandbox and involved in the supervision of the aspects falling within their respective tasks and mandates. At the same time, competent national authorities retain the right to temporarily suspend or terminate testing in the event of a significant risk to health, safety or fundamental rights that cannot be effectively mitigated. They must inform the AI Office of their decision.

According to the Regulation, sandboxes should be designed to support cross-border cooperation and enable coordination between competent authorities within the AI Board. Information on the sandboxes established should be communicated to the AI Office and the AI Board, which maintain a public list of sandboxes and may provide support and guidance to Member States. Competent national authorities are required to submit annual reports on the activities of the sandboxes to the AI Office and the AI Board and publish them (or summaries thereof) online. These reports include information on the progress and results of the implementation of the sandboxes, including best practices, incidents, lessons learned and recommendations on the establishment of regulatory sandboxes and, where appropriate, recommendations on the application and possible review of the Regulation. In order to increase transparency and accessibility, the European Commission is also to create a single and dedicated digital interface containing all relevant information on AI regulatory sandboxes. It will enable stakeholders to interact with the relevant authorities and obtain guidance on the compliance of new AI-based products, services and business models.

Participation in the regulatory sandbox does not exempt providers from civil liability for damage caused to third parties as a result of experiments conducted in the sandbox. However, as long as participants act in good faith and comply with the sandbox plan and the conditions of participation, they are not subject to administrative fines for possible infringements of the Regulation.

In order to avoid fragmentation of sandbox systems in the EU, Article 58 of the Regulation provides for the adoption by the Commission of implementing acts laying down detailed rules for their operation. These acts are to set out common guidelines on, among others, the eligibility of participants, application procedures, rules for participation, supervision, exiting the sandbox and conditions for its termination. These rules are also to ensure that:

  • sandboxes will be open to any provider or prospective provider meeting clearly defined and fair eligibility criteria, with decisions on applications made within three months;
  • broad and equal access to participation will be ensured, including in the form of cooperation with other entities;
  • Member States will retain discretion and flexibility in creating and managing their sandboxes;
  • participation by SMEs, including start-ups, will be free of charge, without prejudice to exceptional costs that national competent authorities may recover in a fair and proportionate manner;
  • sandbox activities will support compliance with the requirements of the Regulation, including conformity assessment and the application of codes of conduct;
  • cooperation with other actors in the AI ecosystem will be facilitated – from research laboratories, through standardisation organisations, to centres of excellence;
  • the procedures will be simple, transparent and harmonised across the EU to facilitate access for SMEs and ensure mutual recognition of participation in sandboxes established in different Member States;
  • participation in the sandbox will be limited to a period appropriate to the complexity and scale of the project, with the possibility of extension;
  • sandboxes will support the development of tools to test, analyse and explain the properties of AI systems (such as accuracy, robustness and cybersecurity) and tools to mitigate risks to fundamental rights and society.

Where appropriate, prospective providers in AI regulatory sandboxes are directed to pre-deployment services and other value-added services – such as assistance with standardisation and certification documents, testing and experimentation facilities, European Digital Innovation Hubs and centres of excellence.

In addition, pursuant to Article 60 of the Regulation, the Commission shall specify, by means of implementing acts, the detailed elements of the plan for testing high-risk AI systems in real-world conditions outside AI regulatory sandboxes. Real-world testing of high-risk AI systems outside AI regulatory sandboxes may be carried out by providers or potential providers of high-risk AI systems listed in Annex III of the Regulation. Such systems include AI systems from the following areas:

  • Biometrics,
  • Critical infrastructure,
  • Education and vocational training,
  • Employment, employee management and access to self-employment,
  • Access to and enjoyment of essential private services and essential public services and benefits,
  • Law enforcement, to the extent that the use of the systems in question is permitted under relevant Union or national law,
  • Migration, asylum and border control management, to the extent that the use of these systems is permitted under relevant Union or national law,
  • Administration of justice and democratic processes.

These tests can be conducted at any time before the AI system is placed on the market or put into service.

Providers may only conduct real-world testing of high-risk AI systems if they meet all the requirements of Article 60(4) of the Regulation:

  1. The provider or prospective provider must draw up a real-world testing plan and submit it to the market surveillance authority in the Member State concerned;
  2. The testing must be approved by the market surveillance authority; if the authority does not respond within 30 days, the testing is deemed approved (unless national law excludes such tacit approval);
  3. The testing must be registered and assigned a Union-wide unique identification number, except for certain high-risk systems that are registered in a secure non-public section of the EU database;
  4. The provider must be established in the Union or appoint a legal representative established there;
  5. Data collected for the testing may be transferred to third countries only if appropriate legal safeguards are in place;
  6. The testing lasts no longer than 6 months, with the possibility of a single extension for a further 6 months upon prior notification and justification;
  7. Participants belonging to particularly vulnerable groups (e.g. children, persons with disabilities) must receive appropriate protection;
  8. Where testing is carried out in cooperation with a deployer, the deployer must be fully informed and conclude an agreement with the provider specifying the parties’ roles and responsibilities;
  9. Participants must give informed consent, except in law enforcement cases where other safeguards must protect their rights;
  10. The testing must be effectively overseen at all times by suitably qualified persons;
  11. The predictions, recommendations or decisions of the AI system must be capable of being effectively reversed and disregarded.

The consent of participants in real-world testing must be informed. Under Article 61, this means it must be freely given; it must also be dated and documented, and the test participant or their legal representative must receive a copy. In addition, participants must be duly informed, in a concise, clear, relevant and understandable manner, about:

  1. the nature and objectives of real-world testing and any inconveniences that may be associated with participating in such testing;
  2. the conditions under which real-world testing is to be conducted, including the expected duration of the participant or participants’ participation in the testing;
  3. their rights and safeguards regarding participation in the tests, in particular the right to refuse to participate in the tests and the right to withdraw from the tests under real-world conditions – at any time, without consequences and without having to provide any justification;
  4. rules for requesting to reverse or ignore predictions, recommendations or decisions made by an AI system;
  5. the Union-wide unique identification number for real-world tests issued in accordance with point (c) of Article 60(4) and the contact details of the supplier or its legal representative from whom further information can be obtained.

Real-world test participants, or, where applicable, their legally designated representative, may – without consequences and without having to provide any justification – withdraw from the testing at any time by revoking their informed consent. In addition, they may also request the immediate and permanent deletion of their personal data. Revoking informed consent does not affect activities already carried out. Providers or prospective providers must notify the national market surveillance authority in the Member State of any suspension or termination of the testing and of the final results. These rules aim to ensure safe and responsible testing of AI systems while protecting the rights of individuals and the public interest.

Further measures for providers and deployers, in particular SMEs, including start-ups, are listed in Article 62 of the Regulation. According to this provision, Member States are required to ensure that SMEs, including start-ups, that have a registered office or a branch in the Union have priority access to AI regulatory sandboxes, provided they comply with the eligibility conditions and selection criteria. It should be noted, however, that priority access does not exclude other SMEs from the AI regulatory sandboxes. Furthermore, Member States must organise dedicated information and training events on the application of the Regulation adapted to the needs of SMEs. They are also required to use existing dedicated channels and, where appropriate, establish new channels of communication with SMEs and local public authorities. These channels aim to provide advice and answer questions on the implementation of the Regulation.

It is also worth mentioning that the AI Office also has certain obligations. These include providing standardised templates for the areas covered by Regulation 2024/1689, as specified by the AI Board in its request. The AI Office also develops and operates a single information platform providing all operators throughout the Union with relevant information on the Regulation, and organises appropriate information campaigns to raise awareness of the obligations it imposes. Furthermore, the AI Office assesses and promotes convergence of best practices in public procurement procedures for AI systems.

In summary, supporting innovation, especially in small and medium-sized companies, is very important for the development of a modern economy. By creating regulatory sandboxes, the European Union shows that it cares about the safe and responsible introduction of new technologies, such as artificial intelligence. Thanks to sandboxes, companies can test their ideas under controlled conditions, with the support and supervision of the competent authorities. This is a good solution, especially for small companies and start-ups, which often have interesting ideas but fewer resources to implement them. Such activities help not only to develop new technologies, but also to build public trust and support cooperation between companies and authorities. If everything works well, Europe can become a strong and modern centre of innovation.
