Publication date: February 25, 2026
Slide 1: Introduction
We are facing change: AI in the pharmaceutical industry is no longer a foreign concept. It means shorter trials, more accurate diagnoses, and the potential for drugs where we had previously given up hope. But let’s be honest: technology without a roadmap is risky. That is why the EMA and FDA sat down to create a ten-page guide to “good practice.” These are not rigid rules, but a foundation of trust. We can group these ten principles into three logical pillars.
Slide 2: Three Pillars
Before we dive into each of the ten principles in detail, let’s take a bird’s-eye view. To make these guidelines useful in everyday research and regulatory work, we’ve organized them into three logical pillars.
The first pillar is the foundation: organization and people. Here, we focus on who is behind the technology and how to ensure that AI remains under human control.
The second pillar is the heart of the technology. This is where we address technical quality, data engineering, and model integrity. This is the most technical part, where ethics meets mathematics.
And finally, Pillar Three: Accountability and Lifecycle. AI is not a product that we buy once and forget about. It is a living, evolving system that requires constant monitoring and clear communication with the patient.
Slide 3: The First Principle
Getting down to specifics, the first principle is Human-Centered Design.
While this sounds like an ethical requirement, in a regulated environment it is primarily an operational one. The idea is that the drug development process takes precedence over the technology, not the other way around. The theoretical foundation of this principle is not a set of new, autonomous norms but proven bioethical standards, which the European Commission’s High-Level Expert Group has framed as Trustworthy AI. An AI system in the drug lifecycle cannot be fully autonomous. Its role is advisory and supportive: at every critical stage of the decision-making process, humans must be able to intervene and verify.

Designing for humans, however, is not just about ethics; it also involves technical resilience. The system must be designed to withstand errors, failures, and attempts at external data manipulation. Another aspect is the elimination of bias. Algorithms learning from historical data can unwittingly reproduce prejudice, for example by discriminating against specific population groups on the basis of gender or ethnicity. The human-centric principle obliges us to make these models inclusive.

All of this is reflected in the EU AI Act. Although the regulation does not offer a dictionary definition of “trustworthy AI,” it introduces strict control mechanisms: it prohibits manipulative practices and imposes rigorous risk-management obligations on high-risk systems. To summarize this principle: in the human-machine relationship, we remain the guarantor of ethics and the ultimate point of reference. AI is meant to be a precise tool, but always a supervised one.
Slide 4: Risk Categories
This approach directly reflects the structure of the EU AI Act, which categorizes systems into four levels. In drug research, we must be particularly vigilant about two of them.

First, unacceptable risk. This covers manipulative systems that could exploit the vulnerabilities of patients or clinical-trial participants; such practices are strictly prohibited in the EU. The high-risk category, however, poses the greatest interpretive challenges. Notably, systems used in pharmaceuticals rarely fall into this category automatically. Most often they do so under Article 6(1), when they are considered a “safety component” of a sector-regulated product. Simply put: if a failure of your algorithm could directly endanger a patient’s health, your system becomes a high-risk system, which triggers mandatory conformity assessment.

The remaining systems are those with limited risk, where the main requirement is transparency (informing the user that they are interacting with AI), and those with minimal risk, where we retain considerable design freedom. A key implication of this principle is proportionate validation: the closer we get to decisions critical to human health, the more rigorous, formal, and intensive our oversight and testing processes must be. It is the context of use, not just the technology itself, that defines our regulatory obligations.
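The tiering logic above can be sketched as a simple decision function. This is purely illustrative: the function name, criteria, and tier labels are my own simplification, and a real assessment under Articles 5 and 6 involves far more criteria and legal judgment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "mandatory conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "broad design freedom"

def classify_ai_system(is_manipulative: bool,
                       is_safety_component: bool,
                       interacts_with_humans: bool) -> RiskTier:
    """Map an AI system's context of use to an AI Act risk tier.

    Simplified sketch only; the legal assessment is far richer.
    """
    if is_manipulative:
        return RiskTier.UNACCEPTABLE   # prohibited practices
    if is_safety_component:
        return RiskTier.HIGH           # Art. 6(1): safety component of a regulated product
    if interacts_with_humans:
        return RiskTier.LIMITED        # transparency duties
    return RiskTier.MINIMAL

# Example: a dosing algorithm whose failure could endanger a patient
tier = classify_ai_system(is_manipulative=False,
                          is_safety_component=True,
                          interacts_with_humans=True)  # -> RiskTier.HIGH
```

The point of the sketch is the ordering: the prohibition check comes first, and the safety-component test dominates everything below it, mirroring the idea that context of use, not technology, sets the obligations.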
Slide 5: Adherence to Standards
Concluding the first pillar of our considerations, we move on to Principle Number Three: Compliance with Standards. In the drug development process, AI does not operate in a vacuum. It must coexist harmoniously with existing and extremely rigorous pharmaceutical standards. On the legal side, the foundation is, of course, the aforementioned AI Act, but we cannot forget the GDPR in the context of patient data protection, or the anti-discrimination directives. These define the limits of what we are allowed to design. It is crucial, however, to layer this with Good Practice (GxP) requirements: AI technologies must fully comply with GMP (Good Manufacturing Practice) and GCP (Good Clinical Practice). Regulators such as the EMA and FDA recognize that innovation often outpaces legislation. Therefore, regulatory sandboxes are a crucial tool mentioned in this principle. These are safe, controlled environments that allow us to test the limits of AI technologies under the watchful eye of supervisors before they are adopted for mass use. Technical standards, such as ISO standards for information security and medical-algorithm validation, complete the picture. Only the combination of these four worlds, HLEG ethics, AI Act law, GxP practice, and ISO engineering, guarantees that an AI system is not only innovative but, above all, safe and reliable in the pharmaceutical context.
Slide 6: Clear Context of Use
We move on to Principle 4: Clear Context of Use. In pharmaceuticals, we do not build AI systems for everything; each must have a precisely assigned function. This is where the ISO/IEC 42001 standard, the global standard for AI management systems, comes in. Why do we need it? Because it forces organizations to document exactly what a given system does and which decisions it supports. This is not just a formality: it demonstrates to the regulator that we have control over the technology. And it is crucial because, under Article 6 of the AI Act, it is this definition of intended purpose that determines whether a system is “high risk.” If the ISO documentation shows that the AI affects drug safety, we automatically enter the strictest legal regime. Without a clear context of use, it is impossible to reliably assess a system’s compliance with the law.
Slide 7: Multidisciplinary Expertise
The fifth principle concerns multidisciplinary expertise. In the pharmaceutical world, building an AI model is not just a task for programmers. It is a team effort involving engineers, biologists, pharmacists, lawyers, and ethicists, and it continues throughout the system’s lifecycle. Why does this matter so much? Because today’s models, especially general-purpose AI (GPAI) models, are extraordinarily complex; the EMA/FDA guidelines define them by direct reference to Article 3 of the AI Act as powerful systems trained on massive datasets. Only a team of experts from different fields can guarantee that the input data is correct, that the results make medical sense, and that the model does not violate the law. The AI Act places special scrutiny on “high-impact” systems. In practice, this means the model must be not only mathematically correct but, above all, clinically safe. Without collaboration between physicians and programmers, AI remains a black box that no regulator will allow onto the market.
Slide 8: Data and Documentation Management
We move on to the sixth principle: Data and Documentation Management. Throughout the drug lifecycle, data must be not only accurate but, above all, transparent and verifiable. The EMA and FDA guidelines are categorical here: every analytical decision and every processing step must allow for a full reconstruction of events. In practice, this means evolving the classic standard toward the ALCOA++ format. Documentation cannot end with the algorithm’s output; under GxP requirements, it must cover the entire data-engineering path, from the original source, through cleansing, to input into the AI model. Why is this so critical? FDA statistics are alarming: as many as 80% of warning letters concerning data integrity stem from gaps in this area. In the age of AI, inspectors such as the EMA and Poland’s GIF (Chief Pharmaceutical Inspectorate) will no longer be satisfied with system logs alone. They expect advanced audit trails that are proactively reviewed to detect manipulation or errors. We must also store data durably and in a readily retrievable form, while meeting the strictest GDPR requirements. Only such structured oversight guarantees that results obtained with AI are reliable and, most importantly, will withstand audits.
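The idea of a proactively verifiable audit trail can be illustrated with a minimal hash-chained log: each entry hashes its predecessor, so any retrospective edit breaks the chain. All names and fields here are a hypothetical sketch, not a GxP-validated implementation; a real system would add signatures, access control, and qualified infrastructure.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(trail: list, actor: str, action: str, record: dict) -> dict:
    """Append a tamper-evident entry to an in-memory audit trail."""
    prev_hash = trail[-1]["entry_hash"] if trail else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ALCOA: contemporaneous
        "actor": actor,                                       # ALCOA: attributable
        "action": action,
        "record": record,                                     # ALCOA: original, accurate
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(entry)
    return entry

def verify_trail(trail: list) -> bool:
    """Recompute the hash chain to detect retrospective edits."""
    prev = "0" * 64
    for entry in trail:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

In use, every data-engineering step (source extraction, cleansing, model input) appends one entry, and the proactive review the guidelines call for reduces to periodically running `verify_trail`.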
Slide 9: Model Design and Development Practices
Principle seven is Engineering Rigor. We apply the GAMP 5 standard, which requires rigorous validation: every algorithm function must be tested and documented before release. Because AI keeps learning, we introduce MLOps supervision, automated monitoring that verifies the model’s quality does not degrade after leaving the lab. Moving away from “black boxes” and toward explainability (XAI) is crucial: in line with ISO 23894, the system must show the physician the logical rationale behind its decision, for example which medical parameters weighed most heavily in the assessment. The whole is secured by a “safety net” of human-centered design (ISO 9241). The system must remain controllable: the physician has the right to reject the machine’s suggestion, and the algorithm must clearly communicate the limits of its confidence. This ensures full verifiability of the process.
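The human-oversight “safety net” might look, in a deliberately simplified sketch, like a gating function: low-confidence outputs are escalated to a human, and the physician’s explicit decision always overrides the model. The data-class fields, function names, and the 0.8 threshold are all illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AiSuggestion:
    label: str                       # the model's proposed outcome
    confidence: float                # self-reported confidence in [0, 1]
    rationale: list = field(default_factory=list)  # top contributing factors (XAI output)

def review_suggestion(s: AiSuggestion,
                      physician_decision: Optional[str],
                      threshold: float = 0.8) -> str:
    """Human-oversight gate (illustrative threshold of 0.8).

    The physician's explicit decision always wins; below the
    confidence threshold the system must defer to a human.
    """
    if physician_decision is not None:
        return physician_decision        # human override takes precedence
    if s.confidence < threshold:
        return "ESCALATE_TO_HUMAN"       # model communicates its limits by deferring
    return s.label
```

The design choice worth noting is that the human branch is checked first: controllability means the override works regardless of how confident the model claims to be.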
Slide 10: Pillar III – Responsibility and Lifecycle
We move on to the final pillar: Responsibility and Lifecycle. Principle eight brings the risk-based approach down to the operational level. Here we no longer ask whether a system is risky, but how rigorously we need to test it. Human-AI interaction is crucial: under Article 14 of the AI Act, the system must be designed for human oversight to prevent so-called automation bias, that is, uncritical trust in the machine. Validation must go beyond ideal conditions and examine failure modes and resilience to data errors, as Article 15 of the AI Act explicitly requires. For this purpose, we use data that is both technically reliable and clinically relevant.
Principle nine is Continuous Surveillance. In pharma, we are moving away from a single-step validation model towards quality management throughout the AI lifecycle. We must constantly monitor the system, reacting to any changes in input data to avoid so-called ‘model drift.’
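One common, simple drift signal that such continuous surveillance can rely on is the Population Stability Index (PSI), which compares the distribution of live inputs against the training-time reference. The sketch below, including the rule-of-thumb alert level of 0.2, is an illustrative assumption rather than a prescribed regulatory method.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a reference (training-time)
    sample and live input data. Rule of thumb assumed here:
    PSI > 0.2 suggests significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def histogram(values: list) -> list:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1  # clip values outside the reference range
        # Small epsilon avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [0.1 * i for i in range(100)]            # training-time distribution
live_shifted = [0.1 * i + 5.0 for i in range(100)]   # shifted live inputs
drift_score = psi(reference, live_shifted)           # well above the 0.2 alert level
```

In an MLOps pipeline, a check like this would run on every input batch, and crossing the alert level would trigger investigation or retraining rather than silent continued use.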
The tenth and final principle is Clear Communication. It fulfills the transparency requirement of Article 13 of the AI Act, but it also calls for plain language (ISO 24495-1). A result generated by AI cannot be hermetic code; it must be understandable to both the doctor and the patient, presented in short, specific messages that explain what the result means here and now. This is the only way to build genuine trust in the technology.