
The Impact of the AI Act on the Life Science Sector – New Regulations and Their Consequences

Publication date: March 05, 2025

The AI Act is a regulation adopted by the European Union that governs the use of artificial intelligence across sectors of the economy, including health, finance and public administration. It is a groundbreaking legal act in this field. The AI Act has a particular impact on the life science sector, as companies in the medical industry increasingly use artificial intelligence in biomedical research, diagnostics and the development of new therapies. The regulation introduces new obligations that entities in the life science sector are required to implement by August 2026.

The regulation must be complied with by both EU and non-EU entities if they place AI systems on the EU market or use them within the EU (Article 2(1) AI Act). Importantly, AI systems used solely for research, innovation, non-professional activities, national security, defence and military purposes are not subject to the AI Act.

Regulatory context

Deadlines

  • August 1, 2024 – entry into force;
  • February 2, 2025 – the first provisions became applicable, including those concerning prohibited AI practices;
  • August 2, 2025 – the rules on general-purpose AI come into force;
  • August 2, 2026 – the most important changes take effect, regarding high-risk AI systems and regulatory compliance.

Related regulations

The AI Act is a comprehensive legal act that does not operate in isolation from other international and national regulations. It is part of a legal system that includes, among others:

  • MDR and IVDR – regulations on the certification of medical devices and in vitro diagnostic medical devices; the AI Act harmonizes requirements to avoid double certification of AI systems used in the life science sector;
  • GDPR – an act regulating the protection of personal data;
  • Digital Services Act and Data Governance Act – regulations concerning the processing and sharing of user data;
  • Data Act – an act defining the rules for the exchange of data between companies, public institutions and private research entities; it also concerns the European Health Data Space and the conditions under which medical entities may share data to “teach” artificial intelligence within a specific legal framework.

Any AI system that operates in the life science sector must also comply with the other regulations applicable to the field in which it operates. The AI Act thus fits into the European Union’s broader regulatory system, which includes data protection, civil liability, cybersecurity and sector-specific regulations.

Classification of systems and risk levels

The AI Act classifies AI systems according to their level of risk. The division depends on the impact on health, safety and fundamental rights. Depending on the assessment of the risk category, providers and users will have different obligations and restrictions.

Three key risk categories:

  • prohibited AI practices – practices banned due to an unacceptable level of interference with protected values;
  • high-risk AI systems – practices permitted when a number of conditions are met; the rules governing them form an essential part of the Act;
  • limited-risk AI systems – these are subject only to transparency obligations: informing the user that they are communicating with artificial intelligence and not with a human, designing models in a way that prevents the generation of illegal content, and publishing summaries of copyrighted data used to train the models.

Prohibited AI practices (Art. 5 AI Act)

The AI Act contains a list of practices prohibited for entities implementing artificial intelligence systems. This list is not exhaustive: other AI practices may also be excluded from use if they violate other EU or national regulations. The prohibitions listed in the Act include:

  • deliberate subliminal, manipulative or misleading techniques – influencing decisions outside the user’s awareness, e.g. through audio cues or deceptive interfaces;
  • exploiting a person’s vulnerabilities – taking advantage of characteristics such as age, disability, or a specific social or economic situation; this covers only the categories of people listed in the AI Act, e.g. advertising content exploiting their vulnerability due to age;
  • social scoring of citizens – the assessment or classification of individuals over a period of time based on their behaviour or personality traits, where the results may lead to unfair or unfavourable treatment (does not apply to AI systems used solely for research purposes);
  • criminal profiling – assessing the risk of committing a crime solely on the basis of personal characteristics;
  • mass collection of facial images (untargeted scraping) – creating or expanding databases of images obtained from the Internet and CCTV;
  • analysis of emotions in workplaces and educational institutions – the ban does not cover AI systems that use emotion inference for medical or safety purposes, e.g. systems monitoring pilot and driver fatigue, or systems supporting people with autism or neurological conditions;
  • biometric categorization – classification of people based on biometric data to infer (often sensitive) information about, for example, their race, views, religion or sexual orientation;
  • remote biometric identification – prohibited in real time in public spaces, with exceptions (searching for victims of kidnapping or human trafficking; preventing terrorism; identifying suspects).

High-Risk AI Systems

Artificial intelligence in the life science sector is not banned, but it is subject to strict requirements due to its impact on patient health and safety.

For an AI system related to the life science sector to be considered high-risk, it must be intended for use as a safety component of a product covered by EU harmonisation legislation such as, among others, the MDR and IVDR (all listed in the AI Act), or the AI system itself must be such a product.

Under the AI Act, any AI system that is a Class IIa medical device or higher, or that constitutes a safety component, will be considered “high risk”.

In addition, the following systems are considered high-risk:

  • Systems used by public authorities to assess the eligibility of individuals for essential public benefits and services, including healthcare;
  • Systems designed to triage patients in medical emergencies.
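
To illustrate how these classification criteria combine, here is a minimal sketch in Python. It is not the Act’s own test: the field names, the device classes and the simplified decision rule are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MedicalAISystem:
    name: str
    device_class: Optional[str]        # e.g. "I", "IIa", "IIb", "III" under the MDR
    is_safety_component: bool          # safety component of a harmonised product
    triages_emergency_patients: bool
    assesses_benefit_eligibility: bool

def is_high_risk(system: MedicalAISystem) -> bool:
    """Simplified decision rule combining the criteria described above."""
    # Class IIa medical devices or higher are treated as high-risk.
    if system.device_class in ("IIa", "IIb", "III"):
        return True
    # Safety components of products covered by EU harmonisation legislation.
    if system.is_safety_component:
        return True
    # Explicitly listed use cases: emergency triage, public benefit eligibility.
    return system.triages_emergency_patients or system.assesses_benefit_eligibility

# A (hypothetical) triage assistant would be classified as high-risk:
print(is_high_risk(MedicalAISystem("triage-assistant", None, False, True, False)))  # True
```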

Any high-risk AI system will be assessed before being placed on the EU market and throughout its life cycle. Users of these systems will have the right to lodge a complaint with designated national authorities.

Requirements for High-Risk AI Systems

AI systems must comply with the requirements set out in the regulation, taking into account their intended use and the current state of technical knowledge. If, in addition to the provisions of the AI Act, other EU regulations also apply to the system, then the system providers are responsible for full compliance.

The Artificial Intelligence Regulation requires providers of high-risk AI systems to implement a risk management system. This system should include the identification, analysis, assessment and mitigation of potential risks associated with the use of AI. For companies operating in the life science sector, this means in particular: identifying potential risks to patients’ health resulting from the use of AI, e.g. in diagnostics; implementing risk-minimization measures such as regular testing of AI systems and monitoring their operation in real clinical conditions; and taking into account the impact on vulnerable groups, including children under 18 years of age.
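
As a rough illustration of such a risk management cycle (identification, analysis, assessment, mitigation), consider the sketch below; the risk entries, the scoring scale and the threshold are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int      # 1 (negligible) .. 5 (critical) -- illustrative scale
    likelihood: int    # 1 (rare) .. 5 (frequent)
    mitigations: list = field(default_factory=list)

    def score(self) -> int:
        return self.severity * self.likelihood

# Identification and analysis: record risks to patient health, e.g. in diagnostics.
register = [
    Risk("False negative in image-based diagnosis", severity=5, likelihood=2),
    Risk("Degraded accuracy for under-represented patient groups", severity=4, likelihood=3),
]

# Assessment and mitigation: risks above a (hypothetical) threshold require measures.
for risk in register:
    if risk.score() >= 8:
        risk.mitigations += ["regular testing", "monitoring in real clinical conditions"]
    print(f"{risk.description}: score={risk.score()}, mitigations={risk.mitigations}")
```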

High-risk AI systems must meet data and data governance requirements. Data governance covers data collection, processing, labeling, bias analysis, and vulnerability detection. High-risk AI systems must be developed using adequate, high-quality training, validation, and test data sets; the data must be representative, complete, and as error-free as possible, taking into account the specific context of their use. Providers may exceptionally process special categories of personal data for the purpose of eliminating bias, provided that strict safeguards and privacy principles are met (Article 10(5) AI Act).
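
What automated checks on such a data set might look like is sketched below, assuming tabular patient data in pandas; the column names and checks are hypothetical and illustrate completeness, representativeness and error detection only at the simplest level.

```python
import pandas as pd

# Hypothetical patient training data; column names are invented for this example.
df = pd.DataFrame({
    "age": [34, 51, 67, None, 45],
    "sex": ["F", "M", "F", "M", "F"],
    "diagnosis": ["A", "B", "A", "A", "B"],
})

# Completeness: flag columns with missing values.
print("Share of missing values per column:\n", df.isna().mean())

# Representativeness (very rough proxy): group balance for a protected attribute.
print("Distribution by sex:\n", df["sex"].value_counts(normalize=True))

# Error detection: simple range check against a known-valid interval.
invalid_ages = df[(df["age"] < 0) | (df["age"] > 120)]
print("Rows with implausible ages:", len(invalid_ages))
```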

The AI Act imposes requirements on AI system providers regarding technical documentation. It specifies the obligation to prepare and update technical documentation for high-risk AI systems before they are placed on the market or put into service. Such documentation must include: the provider’s details; a description of the AI system’s purpose; the declared levels of accuracy and resilience of the system; information on the data used to train the system; how human influence over the AI’s operation is ensured; etc.
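
These contents can be thought of as a structured record; the following sketch models a few of the fields mentioned above (the field names and sample values are our own illustration, not the Act’s Annex wording).

```python
from dataclasses import dataclass

@dataclass
class TechnicalDocumentation:
    """Illustrative subset of the information a provider must maintain."""
    provider_details: str          # who places the system on the market
    intended_purpose: str          # what the system is for
    accuracy_and_resilience: str   # declared accuracy/resilience levels
    training_data_summary: str     # information on data used to train the system
    human_oversight_measures: str  # how human influence over the AI is ensured

doc = TechnicalDocumentation(
    provider_details="ExampleMed GmbH (hypothetical)",
    intended_purpose="Decision support for radiology triage",
    accuracy_and_resilience="Sensitivity >= 0.95 on the declared test protocol",
    training_data_summary="Anonymised CT scans, 2018-2023, three EU hospitals",
    human_oversight_measures="Radiologist review required before any diagnosis is issued",
)
```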

Importantly, the Commission may modify the documentation requirements in response to technological advances.

Article 12 deals with record-keeping: high-risk AI systems must have an automatic event logging function that allows their operation to be monitored and traced throughout the system lifecycle.
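
As a sketch of what such automatic event logging could look like in practice, the example below uses Python’s standard logging module; the event fields and the wrapper function are assumptions made for this illustration.

```python
import logging

# Configure a persistent, timestamped event log for the AI system's lifecycle.
logging.basicConfig(
    filename="ai_system_events.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def log_inference_event(input_id: str, output: str, confidence: float) -> None:
    """Record each inference so the system's operation can be traced and audited."""
    logging.info("inference input=%s output=%s confidence=%.2f", input_id, output, confidence)

log_inference_event("scan-0042", "suspected-lesion", 0.87)
```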

High-risk AI systems must provide transparency, allowing those using them to interpret and properly use the results. For the life sciences sector, this means providing clear instructions and information to healthcare professionals and patients about how the system being delivered works, its limitations, and potential risks.

The AI Act guarantees human oversight of high-risk AI systems to enable effective intervention in the event of irregularities. Such oversight helps prevent risks to human health, safety and fundamental rights. Diagnoses proposed by AI systems must remain subject to medical supervision to eliminate errors; appropriately trained personnel must be able to override the system’s decision; and final responsibility, for example for patient treatment, cannot rest with the AI.
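
A minimal human-in-the-loop sketch of this principle follows: the AI only proposes, and a trained clinician confirms or overrides before anything becomes a decision. The function names and the confirmation flow are hypothetical.

```python
def ai_propose_diagnosis(scan_id: str):
    """Stand-in for a model call; returns a proposed diagnosis and a confidence."""
    return "suspected-lesion", 0.87

def clinician_review(proposal: str, confidence: float) -> str:
    """The human decision point: trained personnel may accept or override the AI."""
    print(f"AI proposes: {proposal} (confidence {confidence:.0%})")
    override = input("Override? Enter corrected diagnosis or press Enter to accept: ").strip()
    return override or proposal

proposal, confidence = ai_propose_diagnosis("scan-0042")
final_diagnosis = clinician_review(proposal, confidence)  # responsibility stays human
print("Final (clinician-approved) diagnosis:", final_diagnosis)
```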

The regulation also includes requirements regarding the reliability and security of high-risk AI systems. Artificial intelligence algorithms used in the medical industry for diagnostics, medical image analysis or therapy personalization must be of the highest quality to reduce the risk of incorrect diagnoses and clinical decisions. Medical data constitute sensitive data (Article 24 of the Act of 29 August 1997 on the Protection of Personal Data), therefore AI systems used in the life science sector must meet high standards of protection against cyberattacks throughout their entire life cycle.

Accuracy and performance levels should be clearly defined in the instruction manual.

Obligations of providers

The AI Act imposes stringent security, oversight and regulatory compliance requirements on providers of high-risk AI systems, both before the system is introduced to the EU market and throughout its life cycle. Key obligations of entities related to the medical industry include, for example:

  • adapting the AI system to MDR and IVDR standards;
  • developing and maintaining complete documentation that allows for audit and verification;
  • providing users (doctors, pharmacists) with information that allows them to interpret the operation of the AI, including by organising training and system monitoring;
  • ensuring continuous human supervision of AI decision-making processes;
  • protecting databases against cyberattacks through data encryption and access control.

Obligations of users (deployers) of high-risk AI systems

In addition to a number of obligations for AI system providers, the AI Regulation also specifies the responsibilities of users, especially when it comes to high-risk systems. The basic requirements imposed on users are to use the AI system in accordance with its intended purpose and the provider’s instructions, and not to modify its operation in a way that increases risk.

Companies using a high-risk AI system are required to provide appropriate training to their staff on how to use it, interpret results, and the potential risks. This also involves entrusting the supervisory function over the system to people with the appropriate skills, training, and authorizations, because, as mentioned earlier, high-risk systems cannot operate fully autonomously.

Upon noticing any irregularities, users are obliged to report risks and incidents to the provider, the distributor and the supervisory authorities.

Penalties

The AI Act provides for sanctions in the form of fines for violating the provisions of the regulation. The detailed regulation of this issue has been left to the competence of the Member States, so procedures may differ depending on the location of both the provider and the user of the AI system. The general principle is that the penalties provided for must be effective, proportionate and dissuasive (Article 99(1) AI Act).

The most severe penalty is foreseen for violating the provisions on prohibited AI practices: up to 35 million euros or 7% of the total annual worldwide turnover of the previous financial year, whichever is higher. For violations of other obligations resulting from the regulation, the upper limit of the administrative fine is 15 million euros or 3% of the total annual worldwide turnover, whichever is higher.
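
The “whichever is higher” rule is straightforward arithmetic; the sketch below works through it for a hypothetical company (the turnover figure is invented).

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """AI Act fines take the higher of a fixed amount and a share of global turnover."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

# Hypothetical company with EUR 2 billion annual worldwide turnover:
turnover = 2_000_000_000
print(max_fine(turnover, 35_000_000, 0.07))  # prohibited practices: EUR 140,000,000
print(max_fine(turnover, 15_000_000, 0.03))  # other obligations:    EUR  60,000,000
```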

Summary

The AI Act changes the face of the life science sector by introducing rules that stabilize the use of artificial intelligence systems: it strengthens guarantees that such systems are safe to use, supports improvement in the medical industry, and ensures a high level of protection of patient rights and data. At the same time, through rigorous certification requirements, it may slow down the implementation of medical innovations driven by artificial intelligence.

Bibliography:

S. Lamb, L. Maisnier-Boche, The Impact of the New EU AI Act on the Medtech and Life Sciences Sector, 2024.

D. Flisak, Act on Artificial Intelligence, LEX/el. 2024.

Act of 29 August 1997 on the Protection of Personal Data (Journal of Laws of 2016, item 922, as amended).

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) (OJ EU L of 2024, item 1689).

https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
