<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>EMA and FDA - KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</title>
	<atom:link href="https://www.kg-legal.eu/info/tag/ema-and-fda/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kg-legal.eu/info/tag/ema-and-fda/</link>
	<description>KIELTYKA GLADKOWSKI LEGAL &#124; CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</description>
	<lastBuildDate>Wed, 25 Feb 2026 19:37:40 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>10 principles of good practice in the application of AI in the drug life cycle</title>
		<link>https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/10-principles-of-good-practice-in-the-application-of-ai-in-the-drug-life-cycle/</link>
					<comments>https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/10-principles-of-good-practice-in-the-application-of-ai-in-the-drug-life-cycle/#respond</comments>
		
		<dc:creator><![CDATA[jakub]]></dc:creator>
		<pubDate>Wed, 25 Feb 2026 19:37:40 +0000</pubDate>
				<category><![CDATA[PHARMACEUTICAL, HEALTHCARE & LIFE SCIENCES LAW]]></category>
		<category><![CDATA[AI in the pharmaceutical industry]]></category>
		<category><![CDATA[EMA and FDA]]></category>
		<guid isPermaLink="false">https://www.kg-legal.eu/?p=8655</guid>

					<description><![CDATA[<p>Publication date: February 25, 2026 Slide 1 &#8211; title: introduction We are facing changes: AI in the pharmaceutical industry is no longer a foreign concept. It means shorter trials, more accurate diagnoses, and the potential for drugs where we previously had given up hope. But let&#8217;s be honest, technology without a roadmap is risky. That&#8217;s [&#8230;]</p>
<p>The article <a href="https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/10-principles-of-good-practice-in-the-application-of-ai-in-the-drug-life-cycle/">10 principles of good practice in the application of AI in the drug life cycle</a> first appeared on <a href="https://www.kg-legal.eu">KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><strong><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-cyan-blue-color">Publication date: February 25, 2026</mark></strong></p>



<figure class="wp-block-video"><video controls src="https://www.kg-legal.eu/wp-content/uploads/2026/02/AI-in-Drug-Development_FDA-and-EMA_ABA-Call_25.02.2026.mp4"></video></figure>



<p><strong>Slide 1: Introduction</strong></p>



<p>We are facing changes: AI in the pharmaceutical industry is no longer a foreign concept. It means shorter trials, more accurate diagnoses, and the potential for drugs where we had previously given up hope. But let&#8217;s be honest: technology without a roadmap is risky. That&#8217;s why the EMA and FDA sat down together to create a ten-point guide to &#8220;good practice.&#8221; These aren&#8217;t rigid rules, but a foundation of trust. We can group these 10 principles into three logical pillars.</p>



<span id="more-8655"></span>



<p><strong>Slide 2: Three Pillars</strong></p>



<p>Before we dive into each of the ten principles in detail, let&#8217;s take a bird&#8217;s-eye view. To make these guidelines useful in everyday research and regulatory work, we&#8217;ve organized them into three logical pillars.</p>



<p>The first pillar is the foundation: organization and people. Here, we focus on who is behind the technology and how to ensure that AI remains under human control.</p>



<p>The second pillar is the heart of the technology. This is where we address technical quality, data engineering, and model integrity. This is the most technical part, where ethics meets mathematics.</p>



<p>And finally, Pillar Three: Accountability and Lifecycle. AI is not a product that we buy once and forget about. It is a living, evolving system that requires constant monitoring and clear communication with the patient.</p>



<p><strong>Slide 3: First Rule</strong></p>



<p>Getting down to specifics, the first principle is Human-Centered Design.</p>



<p>While this sounds like an ethical requirement, in a regulated environment it is primarily an operational one. The idea is for the drug development process to take precedence over technology, not the other way around. The theoretical foundation of this principle is not a set of new, autonomous norms but proven bioethical standards, which the European Commission&#8217;s Expert Group has framed as Trustworthy AI. An AI system in the drug lifecycle cannot be fully autonomous: its role is advisory and supportive, meaning that at every critical stage of the decision-making process, humans must have the ability to intervene and verify.</p>



<p>However, designing for humans is not just about ethics; it also involves technical resilience. The system must be designed to be resilient to errors, failures, and attempts at external data manipulation. Another aspect is the elimination of bias. Algorithms learning from historical data can unconsciously reproduce prejudices, for example by discriminating against specific population groups based on gender or ethnicity. The human-centric principle compels us to strive for the inclusiveness of these models.</p>



<p>All of this is reflected in the EU AI Act. Although this regulation doesn&#8217;t define &#8220;trustworthy AI&#8221; in a dictionary way, it does introduce strict control mechanisms: it prohibits manipulative practices and imposes strict risk management obligations on high-impact systems. To summarize this principle: in the human-machine relationship, we remain the guarantor of ethics and the ultimate point of reference. AI is meant to be a precise tool, but always a supervised one.</p>



<p><strong>Slide 4</strong></p>



<p>This approach directly reflects the structure of the EU AI Act, which categorizes systems into four levels. In drug research, we must be particularly vigilant about two of them. First, unacceptable risk: this includes manipulative systems that could exploit the vulnerabilities of patients or clinical trial participants, practices that are strictly prohibited in the EU. The high-risk category, however, raises the greatest interpretive challenges. Systems used in pharmaceuticals rarely fall into this category automatically; they most often do so under Article 6(1), if they are considered a &#8220;safety-related element&#8221; of a sector-regulated product. Simply put: if a failure of your algorithm could directly endanger a patient&#8217;s health, your system becomes a high-risk system, and this necessitates mandatory conformity assessment.</p>



<p>The remaining systems fall into the limited-risk category, where the main requirement is transparency (informing the user that they are interacting with AI), and the minimal-risk category, where we retain significant design freedom. A key implication of this principle is proportionate validation: the closer we get to decisions critical to human health, the more rigorous, formal, and intensive our oversight and testing processes must be. It is the context of use, not just the technology itself, that defines our regulatory obligations.</p>
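<p>The triage described above can be sketched as a simple decision function. This is an illustrative sketch only: the field names, the tier labels, and the order of checks are our simplifying assumptions for this post, not a legal classification tool.</p>

```python
# Illustrative sketch of the AI Act risk triage described above.
# Field names and the check order are simplifying assumptions,
# not legal advice.
from dataclasses import dataclass

@dataclass
class AISystemContext:
    is_manipulative: bool       # exploits vulnerabilities of patients/participants
    is_safety_component: bool   # failure could directly endanger health (Art. 6(1))
    interacts_with_users: bool  # users must be told they are interacting with AI

def risk_tier(ctx: AISystemContext) -> str:
    if ctx.is_manipulative:
        return "unacceptable (prohibited)"
    if ctx.is_safety_component:
        return "high risk (mandatory conformity assessment)"
    if ctx.interacts_with_users:
        return "limited risk (transparency obligations)"
    return "minimal risk"

# A dosing-support model whose failure could harm a patient:
print(risk_tier(AISystemContext(False, True, True)))
# -> high risk (mandatory conformity assessment)
```

<p>Note that the order of the checks encodes the precedence of the categories: a prohibited practice stays prohibited even if the system would otherwise qualify as merely transparent.</p>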



<p><strong>Slide 5: Adherence to Standards</strong></p>



<p>Concluding the first pillar of our considerations, we move on to Principle Number Three: Compliance with Standards. In the drug development process, AI does not operate in a vacuum. It must coexist harmoniously with existing and extremely rigorous pharmaceutical standards. On the legal side, the foundation is, of course, the aforementioned AI Act, but we cannot forget the GDPR in the context of patient data protection or the anti-discrimination directives. These define the limits of what we are allowed to design. It is crucial, however, to layer this with the good practice standards known collectively as GxP: AI technologies must fully comply with the principles of GMP (Good Manufacturing Practice) and GCP (Good Clinical Practice).</p>



<p>Regulators such as the EMA and FDA recognize that innovation often outpaces legislation. Therefore, regulatory sandboxes are a crucial tool mentioned in this principle. These are safe, controlled environments that allow us to test the limits of AI technologies under the watchful eye of supervisors before they are adopted for mass use. Technical standards, such as ISO standards for information security and medical algorithm validation, complete the picture. Only the combination of these four worlds (HLEG ethics, AI Act law, GxP practice, and ISO engineering) guarantees that an AI system is not only innovative but, above all, safe and reliable in the pharmaceutical context.</p>



<p><strong>Slide 6: Clear Context of Use</strong></p>



<p>We move on to Principle 4: Clear Context of Use. In pharmaceuticals, we don&#8217;t build AI systems for everything: each must have a precisely assigned function. This is where the ISO/IEC 42001 standard, the global standard for AI management, comes in. Why do we need it? Because it forces organizations to document exactly what a given system does and what decisions it supports. This isn&#8217;t just a formality; it proves to the regulator that we have control over the technology. Why is it crucial? Because according to Article 6 of the AI Act, it is this definition of purpose that determines whether a system is &#8216;high risk.&#8217; If this documentation helps us demonstrate that AI affects drug safety, we automatically enter the strictest legal rigors. Without a clear context, it&#8217;s impossible to reliably assess a system&#8217;s compliance with the law.</p>



<p><strong>Slide 7: Multidisciplinary Expertise</strong></p>



<p>The fifth principle speaks of multidisciplinary expertise. In pharma, building an AI model isn&#8217;t just a task for programmers. It&#8217;s a team effort involving engineers, biologists, pharmacists, lawyers, and ethicists, and this throughout the system&#8217;s lifecycle. Why is this so important? Because today&#8217;s models, especially general-purpose AI (GPAI) models, are incredibly complex. Their definition in the EMA/FDA guidelines directly references Article 3 of the AI Act: they are powerful systems that learn from massive datasets. Only a team of experts from various fields can guarantee that the input data is correct, that the results make medical sense, and that the model does not violate the law. The AI Act places special scrutiny on &#8220;high-impact&#8221; systems. In practice, this means that the model must be not only mathematically correct but, above all, clinically safe. Without collaboration between physicians and programmers, AI will remain a black box that no regulator will allow onto the market.</p>



<p><strong>Slide 8: Data and Documentation Management</strong></p>



<p>We move on to the sixth point: Data and Documentation Management. Throughout the drug lifecycle, data must be not only accurate but, above all, transparent and verifiable. EMA and FDA guidelines are categorical here: every analytical decision and every processing step must allow for full reconstruction of events. In practice, this means evolving the classic standard into the ALCOA++ format. Documentation cannot end with the algorithm&#8217;s output: according to GxP requirements, it must encompass the entire data engineering path, from the original source, through cleansing, to input into the AI model. Why is this so critical? FDA statistics are alarming: as many as 80% of warning letters regarding data integrity are due to gaps in this area. In the age of AI, inspectors like the EMA and the Polish GIF will no longer be satisfied with system logs alone. They expect advanced audit trails that are proactively reviewed to detect manipulation or errors. We must also remember to store data durably and keep it readily accessible, while maintaining the strictest GDPR requirements. Only such structured supervision guarantees that the results obtained using AI are reliable and, most importantly, will withstand audits.</p>



<p><strong>Slide 9: Design and Development of Practice Models</strong></p>
<p>Principle seven is Engineering Rigor. We apply the GAMP 5 standard, which requires rigorous validation: every algorithm function must be tested and documented before it is released for use. Because AI is constantly learning, we introduce MLOps supervision, an automatic monitor that verifies that the model&#8217;s quality doesn&#8217;t degrade after leaving the lab. Moving away from &#8220;black boxes&#8221; and toward explainability (XAI) is crucial. According to the ISO/IEC 23894 standard, the system must show the physician the logical rationale behind its decision, for example which medical parameters prevailed in the assessment. All of this is secured by a &#8220;safety net&#8221; in the form of human-centered design (ISO 9241). The system must be controllable: the physician has the right to reject the machine&#8217;s suggestion, and the algorithm is required to clearly communicate the limits of its confidence. This ensures full verifiability of the process.</p>
<p><strong>Slide 10: Pillar III</strong></p>
<p>We move on to the final pillar: Responsibility and Lifecycle. Principle eight brings the risk-based approach to the operational level. Here, we no longer ask &#8216;if&#8217; a system is risky, but &#8216;how rigorously&#8217; we must test it. Human-AI interaction is crucial. According to Article 14 of the AI Act, the system must be designed with human oversight to prevent so-called automation bias, that is, uncritical trust in the machine. Validation must go beyond ideal conditions, examining failure modes and resilience to data errors, as explicitly mandated by Article 15 of the AI Act. For this purpose, we use data that is both technically reliable and clinically relevant.</p>
<p>Principle nine is Continuous Surveillance. In pharma, we are moving away from a single-step validation model towards quality management throughout the AI lifecycle. We must constantly monitor the system, reacting to any changes in input data to avoid so-called &#8216;model drift.&#8217;</p>
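<p>The &#8216;model drift&#8217; mentioned above can be made concrete with a small monitoring sketch. The Population Stability Index (PSI) below is one common way to compare the distribution of a model input in production against its training-time baseline; the bin count and the 0.2 alert threshold are conventional rules of thumb, not an EMA/FDA requirement.</p>

```python
# Minimal sketch of lifecycle surveillance: detect input drift with the
# Population Stability Index (PSI). A PSI above ~0.2 is a common rule of
# thumb for a significant shift; the threshold is an assumption, not a
# regulatory value.
import math

def psi(baseline, current, bins=10):
    lo, hi = min(baseline), max(baseline)

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            if hi > lo:
                i = min(max(int((x - lo) / (hi - lo) * bins), 0), bins - 1)
            else:
                i = 0
            counts[i] += 1
        # Floor at a tiny probability so the log below is always defined.
        return [max(c / len(xs), 1e-6) for c in counts]

    b, c = hist(baseline), hist(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [0.1 * i for i in range(100)]         # training-time distribution
shifted  = [0.1 * i + 4.0 for i in range(100)]   # production data has drifted
print(psi(baseline, shifted) > 0.2)              # True -> raise a drift alert
```

<p>In a real quality system this check would run continuously on incoming data, and a triggered alert would feed the documented change-control process rather than silently retraining the model.</p>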
<p>The tenth and final principle is Clear Communication. It fulfills the transparency requirement of Article 13 of the AI Act, but it also requires the use of &#8216;plain language&#8217; (ISO 24495-1). The result generated by AI cannot be hermetic code. It must be understandable to both the doctor and the patient, presented in short, specific messages that explain what the result means here and now. This is the only way to build full trust in the technology.</p>
<p>The article <a href="https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/10-principles-of-good-practice-in-the-application-of-ai-in-the-drug-life-cycle/">10 principles of good practice in the application of AI in the drug life cycle</a> first appeared on <a href="https://www.kg-legal.eu">KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/10-principles-of-good-practice-in-the-application-of-ai-in-the-drug-life-cycle/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://www.kg-legal.eu/wp-content/uploads/2026/02/AI-in-Drug-Development_FDA-and-EMA_ABA-Call_25.02.2026.mp4" length="3658454" type="video/mp4" />

			</item>
		<item>
		<title>EMA and FDA set common principles for AI in medicine development – January 2026</title>
		<link>https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/</link>
					<comments>https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/#respond</comments>
		
		<dc:creator><![CDATA[jakub]]></dc:creator>
		<pubDate>Thu, 12 Feb 2026 15:15:56 +0000</pubDate>
				<category><![CDATA[IT, NEW TECHNOLOGIES, MEDIA AND COMMUNICATION TECHNOLOGY LAW]]></category>
		<category><![CDATA[EMA]]></category>
		<category><![CDATA[EMA and FDA]]></category>
		<category><![CDATA[FDA]]></category>
		<guid isPermaLink="false">https://www.kg-legal.eu/?p=8629</guid>

					<description><![CDATA[<p>Publication date: February 12, 2026 In recent years, the importance of artificial intelligence (AI) in drug development, evaluation, and monitoring has grown significantly. AI technologies have the potential to accelerate research, improve predictions of drug efficacy and safety, and reduce the need for animal testing. At the same time, their use presents new challenges. AI [&#8230;]</p>
<p>The article <a href="https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/">EMA and FDA set common principles for AI in medicine development – January 2026</a> first appeared on <a href="https://www.kg-legal.eu">KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-cyan-blue-color"><strong>Publication date: February 12, 2026</strong></mark></p>



<p>In recent years, the importance of artificial intelligence (AI) in drug development, evaluation, and monitoring has grown significantly. AI technologies have the potential to accelerate research, improve predictions of drug efficacy and safety, and reduce the need for animal testing. At the same time, their use presents new challenges. AI models can make errors, be susceptible to unforeseen risks, or use data in a non-transparent manner. To fully realize the benefits of AI while minimizing risks, it is essential to establish clear and common principles for the use of these technologies. In response to these challenges, <strong>the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have jointly developed ten principles of good practice for the use of AI in the drug lifecycle</strong>. This document is foundational: a framework rather than a binding legal regulation, providing general directions and guidelines that should guide drug manufacturers, applicants, and regulators. These principles indicate how AI should be designed and used to ensure it is ethical, safe, transparent, and based on reliable data. The ten principles also identify areas where international regulators, standards-setting organizations, and other collaborating entities can work together to promote good practice in drug development. These areas of collaboration include conducting scientific research, creating educational tools and resources for market participants, international harmonization, and developing consensus standards. To facilitate initial analysis, the principles can be grouped into three logical pillars. Principles 1-3 address organizational foundations and people, focusing on interdisciplinary team expertise and ensuring that AI remains under human control within specific, established governance processes. Principles 4-7 concern technical quality and model integrity: the &#8220;heart&#8221; of the technology. 
Principles 8-10 address accountability and lifecycle, defining standards for documentation, clear communication with users, and continuous monitoring of the model after its implementation. Below is a detailed summary of the 10 principles of good practice for AI in the drug lifecycle:</p>



<span id="more-8629"></span>



<p><strong>1. Human-Centered Design. </strong>The development and use of AI technologies in the drug development lifecycle should be consistent with ethical values and human-centered. The ethical principles cited in the EMA and FDA documents do not constitute a standalone normative framework; rather, they were drawn from earlier standards for the protection of fundamental rights and bioethics and then incorporated into the &#8220;Trustworthy AI&#8221; framework developed by the High-Level Expert Group on AI (HLEG). The Assessment List for Trustworthy AI defines seven fundamental requirements for trustworthy AI, which provide a practical tool for implementing ethical values in AI systems. These principles assume that an AI system should support human decisions, enable human intervention, and not make decisions autonomously, which is directly related to the premise of &#8220;human-centeredness.&#8221; Another requirement is the AI&#8217;s technical resilience to errors, failures, and attacks, ensuring the system&#8217;s security and predictability. From an ethical perspective, it is also crucial that all collected and processed data comply with applicable law, and that the system&#8217;s decision-making processes remain fully verifiable. AI implementation should consider the potential risks associated with its use and provide mechanisms for oversight, verification, and preventive measures to minimize undesirable consequences. The document also emphasizes that AI systems must not contribute to exacerbating existing prejudices or discrimination, but should instead promote equality, justice, and the well-being of people and the environment. A final important principle is accountability – individuals and organizations that design, implement, and use AI systems are responsible for their performance and consequences.</p>



<p>The AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council) contains many similar terms, such as &#8220;human-centric&#8221; and &#8220;trustworthy AI&#8221;, although the act itself does not provide a formal definition. This term appears repeatedly in the preamble and in Article 1, allowing for interpretation of its meaning in regulatory and practical contexts. The AI Act establishes a legally binding framework for AI systems in the EU, including prohibitions on certain practices, transparency obligations, and risk management requirements for high-risk systems. Therefore, conceptual overlap can be observed between the HLEG/EMA/FDA guidelines and the AI Act, although they serve different functions – the guidelines provide ethical and design direction, while the AI Act defines legal obligations.</p>



<p><strong>2. Risk-Based Approach</strong>. The development and use of AI technologies follow a risk-based approach, with proportionate validation, risk mitigation, and oversight based on context of use and model-specific risk. Beginning the analysis with the first premise, namely the &#8220;risk-based approach,&#8221; this model is known from the AI Act. It identifies four risk levels for AI systems: unacceptable risk, high risk, limited (transparency) risk, and minimal or no risk. Article 5 of the AI Act introduces a catalog of prohibited practices, including AI systems whose use is deemed unacceptable due to a threat to fundamental rights. In the context of the use of AI in drug research, prohibitions relating to the protection of individual autonomy and vulnerability should be specifically mentioned, particularly the prohibition on the use of manipulative systems and systems exploiting the specific vulnerabilities of clinical trial participants. Moving on to the high-risk category, AI systems used in the development of medicinal products are generally not classified as high-risk systems under Annex III of the AI Act, but may be deemed so under Article 6(1) if they constitute a &#8220;safety-related component&#8221; of a sectorally regulated product and are subject to mandatory conformity assessment before marketing or use. The term &#8220;safety-related component&#8221; refers to a component whose failure or malfunction could pose a threat to the health, safety, or security of individuals, or to property. Systems explicitly listed in Annex III are also considered high-risk AI systems, including remote biometric identification systems, crime risk assessment systems, and systems that make decisions that significantly impact an individual&#8217;s legal status. 
The legislator has provided for the possibility of exempting certain systems from this category if they do not pose a significant risk to health, safety, or fundamental rights and do not significantly influence the outcome of the decision-making process. This exemption requires a documented self-assessment by the provider and is subject to review by the competent national authorities. For limited-risk AI systems, the legislator primarily provides for transparency obligations aimed at preventing users from being misled. This includes, among other things, the obligation to disclose that the system is based on AI and to indicate its limitations. The category of minimal-risk systems includes all other AI systems that do not fall into any of the above-mentioned groups. For these systems, the AI Act does not impose specific regulatory requirements, leaving providers free to design and deploy them.</p>



<p>The reference to &#8220;proportionate validation&#8221; should be understood as a consequence of the AI Act&#8217;s adoption of a risk-based approach. This means that the scope, intensity, and formalization of validation processes for AI systems should be tailored to the level of risk the system poses to health, safety, or fundamental rights. In other words, the scope of validation, oversight, and requirements for an AI system depend on both the level of risk posed by the model and the context of its use. The greater the potential risk, the more stringent the requirements.</p>



<p><strong>3. Compliance with standards. </strong>AI technologies must comply with applicable legal, regulatory, technical, and ethical standards, including the principles of good practice in pharmacy and data protection. Legally, this includes, among others, Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (AI Act), which establishes obligations for AI system operators, prohibitions on certain practices, and requirements for high-risk systems, as well as the personal data protection provisions of the GDPR, anti-discrimination directives, and national regulations on clinical trials and the marketing of medicinal products. Regulatory and good practice standards include Good Manufacturing Practice (GMP), Good Clinical Practice (GCP), ICH guidelines, and the use of regulatory sandboxes, which enable testing of new technologies in a controlled environment while maintaining the safety of patients and researchers. Technically and ethically, the aforementioned HLEG/ALTAI guidelines for trustworthy AI must be taken into account. Additionally, compliance with ISO standards for information security and validation of medical algorithms ensures the technical consistency and reliability of AI systems in the pharmaceutical context.</p>



<p><strong>4. Clear context of use</strong>. Every AI system must have clearly defined objectives and a precisely defined scope of application, as reflected in the ISO/IEC 42001 standard. This standard requires organizations to document the context in which the system is used, specify the decision-making processes it supports, and identify the risks arising from its specific nature. This requirement is inextricably linked to the qualification of the risk level under the AI Act. According to Article 6 of this regulation, a precise definition of the system&#8217;s functions and its potential impact on health and property is necessary to determine whether AI constitutes a &#8220;safety-related element of a product&#8221;. High-risk systems must have a transparently defined purpose, which is a sine qua non for conducting a reliable compliance assessment and implementing adequate oversight.</p>
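<p>In practice, a documented context of use is simply a structured, versionable record kept alongside the system. The sketch below shows what such a record could look like; the field names are our illustrative assumptions in the spirit of ISO/IEC 42001, not the standard&#8217;s normative schema, and the example system is invented.</p>

```python
# Illustrative "context of use" record in the spirit of ISO/IEC 42001.
# Field names are assumptions for this sketch, not the standard's schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ContextOfUse:
    system_name: str
    intended_purpose: str        # what the system does
    supported_decisions: list    # decisions it supports (never replaces)
    safety_related: bool         # feeds the Article 6 AI Act qualification
    known_limitations: list = field(default_factory=list)

record = ContextOfUse(
    system_name="trial-dropout-predictor",   # hypothetical example system
    intended_purpose="Flag participants at elevated dropout risk for follow-up",
    supported_decisions=["scheduling of retention calls by site staff"],
    safety_related=False,
    known_limitations=["trained on EU sites only; not validated elsewhere"],
)
# Serialize to an auditable, versionable artifact:
print(json.dumps(asdict(record), indent=2))
```

<p>The point of such a record is that the <code>safety_related</code> qualification, and therefore the applicable regulatory regime, follows from a documented purpose rather than from an after-the-fact argument.</p>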



<p><strong>5. Multidisciplinary expertise</strong>. Establishing the operational framework for the system requires multidisciplinary expertise, which should be integrated into the project throughout the technology lifecycle. This principle reflects the complexity of contemporary solutions, particularly general-purpose AI (GPAI) models. It is worth noting that their characteristics in the pharmaceutical regulatory guidelines explicitly refer to the legal definition contained in Article 3(63) of the AI Act, which describes models characterized by large scale, the ability to learn from diverse data, and competence in performing a wide range of tasks. According to the EMA/FDA, collaboration between specialists in AI technology, biology, pharmacy, law, and ethics is not merely a formality but a necessary condition for ensuring model reliability. A multidisciplinary approach guarantees higher quality input data, correct interpretation of results in the specific clinical context, and full consideration of the regulatory environment. This concept aligns with the AI Act&#8217;s classification of &#8220;high-impact&#8221; models, where systems with a significant potential to generate systemic risks are subject to special scrutiny. In pharmaceutical practice, the involvement of experts from various fields allows us to create a model that not only complies with the law, but above all works effectively and safely in real medical applications.</p>



<p><strong>6. Data and Documentation Management</strong>. Another pillar of technology security is data management and documentation, which must be transparent and verifiable throughout the drug lifecycle. According to EMA and FDA guidelines, every stage of data processing and every analytical decision must be documented in a way that allows for full reconstruction of events. In pharmaceutical practice, this means maintaining the ALCOA++ standard, which has evolved from the original five principles to the current set of ten attributes, including completeness, consistency, and durability of the record. This documentation cannot be limited to final results; in accordance with GxP requirements, it must encompass the entire &#8220;data engineering&#8221; process, from the original source to the final input into the AI model. This is crucial in the context of regulatory audits, as an analysis of FDA activities indicates that as many as 80% of warning letters regarding data integrity issued in recent years resulted from gaps in this area. Applying the ALCOA++ standard in the age of artificial intelligence requires healthcare entities to implement advanced audit trails that record every modification. Inspectors such as the EMA and the national Chief Pharmaceutical Inspectorate (GIF) currently expect not only system logs but also proactive and systematic review of these logs to detect potential manipulation or human error. This process must also consider the protection of privacy and sensitive data, which links GxP requirements with GDPR obligations. In this context, it is particularly important that the data be &#8220;persistent&#8221; and &#8220;available.&#8221; Such supervision ensures the credibility and verifiability of data obtained using AI. 
It should be noted that &#8220;GxP&#8221; is an umbrella term for specific regulated sub-areas, such as GMP (Good Manufacturing Practice), GDocP (Good Documentation Practice), GDP (Good Distribution Practice), GEP (Good Engineering Practice), and GAMP (Good Automated Manufacturing Practice).</p>
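<p>The audit-trail expectation described above can be illustrated with a minimal tamper-evident log: each entry is chained to the previous one by a hash, so any retroactive modification is detectable on review. This is only a sketch of the chaining idea; a production GxP system would add electronic signatures, synchronized time sources, and access control, and the example entries are invented.</p>

```python
# Minimal sketch of a tamper-evident audit trail (ALCOA++ spirit):
# each entry stores the hash of the previous one, so later edits break
# the chain and are detected on review.
import hashlib, json, datetime

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, who, action, detail):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "who": who, "action": action, "detail": detail,
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "prev": prev,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("jnowak", "clean", "dropped 3 rows with impossible lab values")
trail.record("jnowak", "train", "model v1.2 on cleaned dataset")
print(trail.verify())                              # True
trail.entries[0]["detail"] = "dropped 300 rows"    # retroactive manipulation...
print(trail.verify())                              # ...is detected: False
```

<p>This is exactly the property inspectors look for: the log does not prevent changes, but it makes every change visible during a proactive review.</p>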



<p><strong>7. Model Design and Development. </strong>The seventh principle is a technical confirmation that the AI system was not created haphazardly, but built according to rigorous engineering standards. In pharmaceutical practice, this primarily means applying the GAMP 5 standard, which requires that each algorithm function be tested and verified before use (validation). Because AI systems can continue to learn after deployment, this principle also introduces ongoing operational oversight (MLOps, machine learning operations), which acts as a quality monitor. This protects the model from losing its effectiveness after it leaves the laboratory and reaches hospitals. A key element of safe design is the selection of data that is &#8220;fit for purpose.&#8221; This means that the model cannot learn from random information: the training data must be representative, reflecting the diversity of patients (e.g., in terms of age, gender, or ethnicity), which prevents the development of erroneous algorithmic biases. This gives the model generalizability, ensuring that it will perform safely on every new patient, not just on the narrow group of individuals on whom it was trained. Regulators (EMA/FDA) prioritize moving away from &#8220;black box&#8221; models toward explainability (Explainable AI &#8211; XAI), which finds technical support in the ISO/IEC 23894 standard. As part of risk management, this standard requires that the system be able to present a logical rationale for its decisions. This means that the algorithm must indicate which specific medical parameters prevailed in a given clinical assessment, allowing the physician to substantively verify the result. This vision is complemented by the ISO 9241 (Human-Centered Design) standard. In medical AI, HCD is not understood as interface aesthetics, but as a safety architecture that counteracts the phenomenon of thoughtless submission to machine suggestions. 
In accordance with the principles of this standard, such as error tolerance and controllability, the system design must minimize the effects of human error and guarantee the user the ability to override AI suggestions at any stage. The principle of self-descriptiveness, in turn, requires the system to clearly communicate its state and confidence limits, which directly addresses the technical robustness requirement stipulated in the EU AI Act. Ultimately, this ensures that the entire decision-making process of the algorithm is fully verifiable and trustworthy.</p>
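<p>What &#8220;indicating which parameters prevailed&#8221; can mean in code is sketched below, purely for illustration, using a toy additive risk model. The coefficients and parameter names are hypothetical and not clinically validated; the point is only that each input contributes a measurable share to the final score, and those shares can be returned to the physician alongside the prediction.</p>

```python
import math

# Hypothetical, illustrative coefficients -- NOT a validated clinical model.
WEIGHTS = {"age_decades": 0.40, "systolic_bp": 0.03, "hba1c": 0.55}
BIAS = -6.0

def predict_with_explanation(patient):
    """Return a risk score plus the per-parameter contributions that
    produced it, so a reviewer can see which inputs drove the result."""
    contributions = {name: WEIGHTS[name] * patient[name] for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-logit))
    # Rank parameters by the absolute size of their contribution.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return {"risk": risk, "drivers": ranked}
```

<p>Real XAI tooling (feature attributions for non-linear models, counterfactuals) is considerably more involved, but the regulatory expectation it serves is the same: the output must be verifiable, not an oracle.</p>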



<p><strong>8. Risk-based performance assessment. </strong>Risk-based performance assessments evaluate the entire system, including human-AI interactions, using data and metrics appropriate to the intended context of use, supported by predictive performance validation through appropriately designed testing and evaluation methods. Although the concept of a &#8220;risk-based approach&#8221; was discussed in detail in Principle 2, in the context of classifying systems under the AI Act, under Principle 8 it has a more operational dimension. In short, we no longer ask whether a system is risky, but rather to what extent we need to test it to meet safety requirements. A key element of this principle is defining and monitoring human-AI interaction. According to Article 14 of the AI Act, high-risk systems must be designed to enable effective human oversight. In pharmaceutical practice, this means creating a mechanism for continuous learning under human oversight, where the human is not just a passive recipient of the result, but an active operator filtering the algorithm&#8217;s suggestions. This model of collaboration allows for the verification of AI decisions and effectively counteracts the phenomenon of over-trust, in which medical personnel could uncritically accept erroneous system recommendations. A reliable performance assessment also requires the implementation of failure mode analysis. Instead of focusing solely on confirming the model&#8217;s predictive effectiveness, AI implementers must deliberately identify the system&#8217;s weaknesses and moments when the algorithm may miss safety signals (e.g., rare adverse drug reactions).</p>
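<p>The operator-in-the-loop mechanism described above can be sketched, under assumed names and thresholds of our own invention, as a triage gate: every suggestion is routed through a human decision, overrides are logged, and the acceptance rate is tracked as a simple over-trust signal.</p>

```python
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    """Minimal human-in-the-loop gate: AI suggestions are routed through
    an operator, and overrides are recorded so over-trust (uncritical
    acceptance) can be analysed later. Illustrative only."""
    threshold: float = 0.90
    log: list = field(default_factory=list)

    def triage(self, suggestion):
        # Even high-confidence results require confirmation; low-confidence
        # ones are escalated for full human review.
        route = "confirm" if suggestion["confidence"] >= self.threshold else "escalate"
        self.log.append({"id": suggestion["id"], "route": route})
        return route

    def override(self, suggestion_id, human_decision, reason):
        self.log.append({"id": suggestion_id, "route": "override",
                         "decision": human_decision, "reason": reason})

    def acceptance_rate(self):
        routed = [r for r in self.log if r["route"] in ("confirm", "escalate")]
        if not routed:
            return 0.0
        return sum(r["route"] == "confirm" for r in routed) / len(routed)
```

<p>A persistently near-100% acceptance rate would itself be a finding: it suggests the operator has stopped filtering, which is exactly the failure mode Article 14 oversight is meant to prevent.</p>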



<p>Referring to Article 15 of the AI Act is crucial here, as it imposes the obligation to ensure a high level of robustness and accuracy. In practice, this means that system validation cannot be limited to simulations under ideal conditions. It must include testing how the system responds to intentionally erroneous, incomplete, or unusual medical cases. In this context, the concept of data appropriate for use takes on a new definition. Looking at the SPIFD methodologies and FDA guidelines, data &#8220;appropriateness&#8221; should be understood as a selection process based on two specific parameters: reliability and relevance. Reliability does not refer to the substantive content itself, but rather to the technical reliability of the source; the researcher must prove that the data is consistent and complete. Relevance, in turn, requires the researcher to answer the question of whether the dataset (often derived from real-world data) actually represents the target population and contains the variables necessary to answer a specific clinical question.</p>
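<p>Testing against intentionally erroneous or incomplete cases can be as simple as a guardrail layer plus a small adversarial suite, sketched below with hypothetical field names: malformed records must be flagged before the model scores them, never silently accepted.</p>

```python
def validate_input(record, required=("age", "weight_kg", "dose_mg")):
    """Guardrail check run before the model sees a record: missing,
    non-numeric, or out-of-range values are flagged rather than scored."""
    issues = []
    for field_name in required:
        value = record.get(field_name)
        if value is None:
            issues.append(f"missing: {field_name}")
        elif not isinstance(value, (int, float)):
            issues.append(f"non-numeric: {field_name}")
    age = record.get("age")
    if isinstance(age, (int, float)) and not (0 <= age <= 120):
        issues.append("out-of-range: age")
    return issues

def robustness_suite(validator):
    """Deliberately malformed cases: validation must flag every one."""
    adversarial = [
        {"age": None, "weight_kg": 70, "dose_mg": 5},       # missing value
        {"age": "seventy", "weight_kg": 70, "dose_mg": 5},  # wrong type
        {"age": 250, "weight_kg": 70, "dose_mg": 5},        # impossible value
    ]
    return all(validator(case) for case in adversarial)
```

<p>This is the operational meaning of Article 15 in miniature: validation evidence must include how the system behaves on bad inputs, not only on clean ones.</p>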



<p><strong>9. Lifecycle Management. </strong>Risk-based quality management systems are implemented throughout the AI technology lifecycle, specifically to identify, assess, and respond to emerging issues. AI technologies are subject to planned monitoring and periodic reassessment to ensure their proper functioning, for example, in the context of changes in input data. Therefore, we are not talking about a single validation, but rather continuous oversight.</p>
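<p>One of the simplest forms such continuous oversight can take is monitoring whether live input data still resemble the data the model was validated on. The sketch below is illustrative only: it flags a feature for reassessment when its live mean shifts too far from the validation-time baseline.</p>

```python
import statistics

def mean_shift_z(baseline, live):
    """Z-statistic for the shift of the live mean away from the
    validation-time baseline mean (a very simple drift signal)."""
    mu = statistics.mean(baseline)
    stderr = statistics.stdev(baseline) / (len(live) ** 0.5)
    return abs(statistics.mean(live) - mu) / stderr

def drifted(baseline, live, z_limit=3.0):
    """True when live data have moved far enough from the baseline that
    the model should be scheduled for reassessment."""
    return mean_shift_z(baseline, live) > z_limit
```

<p>Production monitoring uses richer tests (distributional distances, per-subgroup checks), but the lifecycle principle is already visible here: validation is a recurring measurement, not a one-time certificate.</p>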



<p><strong>10. Clear and meaningful information. </strong>Results generated by AI should be presented in a simple and understandable way, so that users and patients can truly grasp their meaning, significance, and limitations. In this context, a reference to Article 13 of the AI Act, which explicitly imposes a transparency requirement, may be helpful. The regulation requires that specific parameters be disclosed, while the EMA/FDA principle emphasizes language and communication. How is &#8220;clear language&#8221; to be understood? This is not an empty phrase, but a specific requirement based on standards such as ISO 24495-1 (Plain Language). In pharmaceutical and clinical practice, it means moving away from hermetic vocabulary toward messages accessible to the average reader. Style matters, but so does the structure of the text itself: long, complex passages should be avoided in favour of short sentences, preferably in the active voice rather than the passive. Ultimately, clarity of communication comes down to explaining what a given result means in practice, here and now, in the specific case at hand.</p>
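<p>As a small illustration (the wording and thresholds below are our own, not prescribed by any standard), a system can render a raw probability as short, active-voice sentences stating what the result means, how confident the model is, and who decides:</p>

```python
def plain_language_summary(finding, probability, n_similar_cases):
    """Render a model output as short, active-voice sentences instead of
    raw statistics: meaning, confidence, basis, and limits."""
    confidence = ("is fairly confident" if probability >= 0.8
                  else "is not certain" if probability >= 0.5
                  else "considers this unlikely")
    return (
        f"The system flags a possible {finding}. "
        f"It {confidence} ({probability:.0%}). "
        f"This estimate rests on {n_similar_cases} similar past cases. "
        "A clinician must confirm the result before any decision."
    )
```

<p>The last sentence is not decoration: stating the system&#8217;s limits and the human decision point in the output itself is precisely what the principle of clear and meaningful information asks for.</p>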



<p>The above analysis aimed to dissect the general EMA/FDA principles and demonstrate the specific technical and legal requirements underlying each term. This is an attempt to clarify the general terminology and demonstrate that each of these 10 principles has a technical equivalent that must be met for an AI model to be considered safe and reliable in a regulated environment.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><strong>Principle</strong></td><td><strong>Explanation</strong></td></tr><tr><td><strong>1. Human-centered design</strong></td><td>Moving Trustworthy AI from ethics to practice; the system must support human decision-making (human-in-the-loop) and be resilient to attacks and errors.</td></tr><tr><td><strong>2. Risk-based approach</strong></td><td>Classification of the system under the AI Act; the scope of validation and oversight depends on whether the AI is a critical &#8220;safety element&#8221; of the medicinal product.</td></tr><tr><td><strong>3. Compliance with standards</strong></td><td>Combining the new legal framework with classic pharmaceutical practices: GxP, ICH and ISO standards to ensure full legality and quality.</td></tr><tr><td><strong>4. Clear context of use</strong></td><td>Obligation to document intended use (ISO/IEC 42001); precise definition of where the model&#8217;s competence ends and the risk of error begins.</td></tr><tr><td><strong>5. Multidisciplinary expertise</strong></td><td>Collaboration between IT, medicine and law as a filter for GPAI models; guaranteeing that the technical result will be correctly interpreted clinically.</td></tr><tr><td><strong>6. Data and documentation management</strong></td><td>Maintaining data integrity according to ALCOA++; a full audit trail allowing every decision and model modification to be replicated.</td></tr><tr><td><strong>7. Model design and development</strong></td><td>Moving from black boxes to explainability (XAI); using GAMP 5 engineering and MLOps post-implementation monitoring.</td></tr><tr><td><strong>8. Risk-based performance assessment</strong></td><td>Testing for resistance to data errors (Art. 15 AI Act) and selecting data for reliability and relevance to the patient population.</td></tr><tr><td><strong>9. Lifecycle management</strong></td><td>Replacing one-time validation with continuous supervision; systematically detecting model quality degradation (drift) in real time.</td></tr><tr><td><strong>10. Clear and relevant information</strong></td><td>Translating statistics into plain language (ISO 24495-1); using short, active sentences to facilitate quick medical decisions.</td></tr></tbody></table></figure>



<p>The full text of the FDA and EMA guidelines can be accessed here:</p>



<p><a href="http://www.ema.europa.eu/en/documents/other/guiding-principles-good-ai-practice-drug-development_en.pdf">www.ema.europa.eu/en/documents/other/guiding-principles-good-ai-practice-drug-development_en.pdf</a></p>



<h2 class="wp-block-heading">Sources:</h2>



<p>European Medicines Agency &amp; US Food and Drug Administration. (2026, January 14). Guiding principles of good AI practice in drug development (EMA/FDA joint principles). European Medicines Agency.</p>



<p>High-Level Expert Group on Artificial Intelligence. (2020). Assessment List for Trustworthy AI (ALTAI). European Commission.</p>



<p>Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act).</p>



<p>Jędrzejczyk, Maria; Szoszkiewicz, Lukasz; Wydra, Jędrzej (eds.), AI Act. Artificial Intelligence Act: Commentary.</p>



<p>Quanticate, The ALCOA++ Principles for Data Integrity in Clinical Trials, August 28, 2025.</p>



<p>Szczesna, Kasia, Explainable AI (XAI) – the key to understanding artificial intelligence, 05.12.2024.</p>



<p>IQVIA Blog: Understanding AI, Data and Human Interaction in Pharmaceutical Development, Mike King, Senior Director of Product &amp; Strategy, IQVIA, 06/02/2024.</p>



<p>National Library of Medicine &#8211; The Structured Process to Identify Fit-For-Purpose Data: A Data Feasibility Assessment Framework.</p>
<p>The article <a href="https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/">EMA and FDA set common principles for AI in medicine development – January 2026</a> originally appeared on <a href="https://www.kg-legal.eu">KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
