<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>EMA - KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</title>
	<atom:link href="https://www.kg-legal.eu/info/tag/ema/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kg-legal.eu/info/tag/ema/</link>
	<description>KIELTYKA GLADKOWSKI LEGAL &#124; CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</description>
	<lastBuildDate>Wed, 11 Mar 2026 19:45:51 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>New transparency requirement for public financial support received for drug research and development</title>
		<link>https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/new-transparency-requirement-for-public-financial-support-received-for-drug-research-and-development/</link>
					<comments>https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/new-transparency-requirement-for-public-financial-support-received-for-drug-research-and-development/#respond</comments>
		
		<dc:creator><![CDATA[jakub]]></dc:creator>
		<pubDate>Wed, 11 Mar 2026 19:45:10 +0000</pubDate>
				<category><![CDATA[PHARMACEUTICAL, HEALTHCARE & LIFE SCIENCES LAW]]></category>
		<category><![CDATA[EMA]]></category>
		<category><![CDATA[R&D]]></category>
		<guid isPermaLink="false">https://www.kg-legal.eu/?p=8667</guid>

					<description><![CDATA[<p>Publication date: March 11, 2026 Medicines resulting from research and development (R&#38;D) activities include medicinal products developed through a multi-stage process of preclinical and clinical trials conducted to demonstrate their quality, safety, and efficacy. This process is lengthy, expensive, and fraught with a high risk of failure. It is financed by both private and public [&#8230;]</p>
<p>Artykuł <a href="https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/new-transparency-requirement-for-public-financial-support-received-for-drug-research-and-development/">New transparency requirement for public financial support received for drug research and development</a> pochodzi z serwisu <a href="https://www.kg-legal.eu">KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><strong><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-cyan-blue-color">Publication date: March 11, 2026</mark></strong></p>



<p>Medicines resulting from research and development (R&amp;D) activities include medicinal products developed through a multi-stage process of preclinical and clinical trials conducted to demonstrate their quality, safety, and efficacy. This process is lengthy, expensive, and fraught with a high risk of failure. It is financed by both private and public funds, including EU funds and national innovation support programs (according to a review by Claudia Wild, Ozren Sehic, Louise Schmidt, and Daniel Fabian, &#8220;Public contributions to R&amp;D of medical innovations: A framework for analysis,&#8221; which identified 26 publications finding that half of all approved medicines and &gt;90% of target medicines are linked to public sector institutions or their funding). From a pharmaceutical law perspective, R&amp;D medicines are subject to specific regulatory requirements at all stages of the product lifecycle, from clinical trials to marketing authorization and pharmacovigilance. In this context, the proposed changes to EU pharmaceutical regulations, concerning data standardization and information transparency, form part of a broader regulatory framework for the functioning of the innovative medicines market. The research and development process is as follows:</p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="169" src="https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz1-1024x169.png" alt="" class="wp-image-8668" srcset="https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz1-1024x169.png 1024w, https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz1-300x50.png 300w, https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz1-768x127.png 768w, https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz1-1536x254.png 1536w, https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz1-2048x338.png 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<span id="more-8667"></span>



<p>The role of the European Medicines Agency (EMA) is crucial to the transparency of the EU&#8217;s regulatory system for medicinal products. It is the central entity responsible for, among other things, the evaluation, monitoring, and safety supervision of medicines. The EMA collects extensive scientific, clinical, and pharmacovigilance data from across the European Union and then makes it available in a structured and comparable format. Publishing assessment reports, summaries of regulatory decisions, and information on adverse reactions increases the transparency of the decision-making process and allows for verification of its merits by the scientific community and the public. Such transparency fosters public trust in regulatory institutions, reduces the risk of arbitrary decisions, and strengthens the legitimacy of the healthcare system by demonstrating that decisions regarding the approval and supervision of medicines are based on specific data and established procedures.</p>



<p>Under current law, Article 57 of Regulation (EC) No 726/2004 of the European Parliament and of the Council of 31 March 2004 laying down Community procedures for the authorisation and supervision of medicinal products for human and veterinary use and establishing a European Medicines Agency is of particular importance. Article 57 sets out the tasks and competences of the European Medicines Agency (EMA): the Agency is responsible for the scientific assessment of the quality, safety, and efficacy of medicines for humans and animals; coordinates the procedures for authorizing medicines to be placed on the market in the EU; conducts pharmacovigilance; collects and makes available information on medicines (including through public databases); supports Member States, EU institutions, and marketing authorisation holders; and ensures cooperation and exchange of information at the EU and international levels.</p>



<p>The new transparency requirement regarding public financial support for research and development activities in the field of medicinal products aims to increase transparency in the drug development and market launch process and to enable the assessment of the impact of public funds on their development and commercialization. The essence of this obligation is to disclose information about direct and – in certain cases – indirect financial support from public or quasi-public funds, in particular EU funds, national research programs, grants, subsidies, reliefs, or other financing mechanisms that may have had a significant impact on the research, development, manufacturing, or marketing authorization of a medicinal product.</p>



<p>Currently, the EMA provides a wide range of information on medicinal products through public databases such as European Public Assessment Reports (EPARs) and clinical trial registries, which ensure transparency of the evaluation process and of the safety and efficacy of medicines. However, the data disclosed focus primarily on scientific and regulatory aspects&#8212;including clinical trial results, therapeutic indications, approval decisions, and risk information&#8212;without systematically including financial data related to research and development (R&amp;D). In this context, the EMA could serve as a potential information provider, integrating existing regulatory resources with data on the funding sources for pharmaceutical innovation. This would foster greater transparency and a better understanding of the contribution of public and private funds to drug development, and would strengthen stakeholder confidence in the European regulatory system.</p>



<p>In response to the current state of the pharmaceutical market, on 23 July 2025 an implementing regulation was published in the Official Journal of the EU, amending Implementing Regulation (EU) No 520/2012 on the performance of pharmacovigilance activities provided for in Regulation (EC) No 726/2004 of the European Parliament and of the Council and Directive 2001/83/EC of the European Parliament and of the Council. In parallel, Article 57 of the proposed directive aims to ensure transparency regarding public and quasi-public financial support for activities aimed at researching and developing medicines. This is to be achieved by disclosing direct financial support that may have influenced the development and commercialization of these products.</p>



<p>The disclosure obligation is addressed in particular to marketing authorization holders and other entities responsible for conducting research and product development, including clinical trial sponsors, entities with capital or organizational ties, and – to the extent they participate in the R&amp;D process – research and academic institutions working in collaboration with the pharmaceutical industry. This obligation covers both support received directly by the entity and funds transferred through consortia, public-private partnerships, or other collaborative structures.</p>



<p>Disclosure of information should occur at a specific stage in the development cycle of a medicinal product, in particular in connection with the marketing authorization procedure, its amendments, or updates to the regulatory documentation, and, where appropriate, during post-market surveillance. The form of data disclosure should ensure its accessibility, comparability, and comprehensibility for regulatory authorities and the public, while respecting commercial confidentiality and the protection of sensitive data. As a general rule, this information should be provided through central EU systems and registries, allowing for public consultation or, in the case of restricted data, access by the relevant supervisory authorities.</p>



<p>Transparency in drug R&amp;D funding is of real consequence for the pharmaceutical market; it can be crucial in the context of pricing and reimbursement negotiations. Disclosure of information on public financial support received for drug development, as provided for in the proposed EU legislation, can help assess the true costs of innovation and the contribution of public funds to its development. Increased transparency can foster a more balanced relationship between public funding and the commercialization of innovations, and can facilitate social access to the results of publicly funded research.</p>



<p>Amendments to Article 26 of the Implementing Regulation clarify the electronic formats and standards used to submit information on medicinal products authorized for marketing in the Union, in particular by referencing the XEVPRM Communication and IDMP standards, in line with data published by the European Medicines Agency pursuant to Article 57(2) of Regulation (EC) No 726/2004. This regulation strengthens the EMA&#8217;s role as the entity responsible for maintaining reference datasets on medicinal products, while the scope of information submitted remains formally linked to regulatory, procedural, and pharmacovigilance data. Article 57 of Regulation 726/2004 specifies the Agency&#8217;s tasks primarily in the area of assessing the quality, safety, and efficacy of medicinal products and of collecting information in this regard and making it publicly available, including by maintaining an EU-wide database of medicinal products accessible to the public.</p>



<p>The new wording of Article 26 does not introduce separate substantive competences for the EMA, but rather refers to the Agency&#8217;s existing obligations regarding the publication and standardization of data on medicinal products. In this sense, it is possible to use existing IT systems and databases referred to in Article 57 as a reference point for other information obligations under EU law, as long as they remain based on the same product identifiers and data structures. However, this regulation does not prejudge the scope or nature of information beyond strictly regulatory data, including information on research and development funding, which may be subject to separate reporting requirements.</p>
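<p>To make this more concrete, the idea of anchoring additional information obligations to the same product identifiers can be pictured as a simple structured record. The sketch below is purely hypothetical &#8211; the element names, the attribute, and the identifier value are invented for illustration and do not reflect the XEVPRM or IDMP schemas or any adopted format &#8211; and shows only the general notion that a funding-disclosure entry could reuse the product identifier that already keys the EMA&#8217;s existing regulatory datasets:</p>



<pre class="wp-block-preformatted">&lt;!-- Hypothetical sketch: illustrative element names only, not an official schema --&gt;
&lt;fundingDisclosure&gt;
  &lt;!-- same product identifier as used in the EMA's reference datasets --&gt;
  &lt;productIdentifier scheme="MPID"&gt;EU-000000-0000&lt;/productIdentifier&gt;
  &lt;supportEntry&gt;
    &lt;source&gt;national research grant programme&lt;/source&gt;
    &lt;type&gt;direct&lt;/type&gt;
    &lt;developmentStage&gt;preclinical&lt;/developmentStage&gt;
  &lt;/supportEntry&gt;
&lt;/fundingDisclosure&gt;</pre>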



<p>From a systemic perspective, linking the new technical requirements to data published by the EMA is consistent with the Agency&#8217;s role of ensuring the coherence and comparability of information on medicinal products at the Union level. Article 57 does not explicitly mandate the EMA to collect or share data on public financial support for research and development, but it does establish an institutional framework enabling the integration of various categories of information relating to the same medicinal product. The scope of this framework&#8217;s potential use for financial transparency purposes depends on detailed substantive provisions introduced in other legal acts and does not result directly from the technical changes under consideration in Article 26.</p>



<p>The changes introduced by Article 26 of the Implementing Regulation of 22 July 2025 amending Implementing Regulation (EU) No 520/2012 on pharmacovigilance activities provided for in Regulation (EC) No 726/2004 of the European Parliament and of the Council and in Directive 2001/83/EC of the European Parliament and of the Council generally include:</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="544" src="https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz2-1024x544.png" alt="" class="wp-image-8669" srcset="https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz2-1024x544.png 1024w, https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz2-300x159.png 300w, https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz2-768x408.png 768w, https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz2-1536x816.png 1536w, https://www.kg-legal.eu/wp-content/uploads/2026/03/Obraz2.png 1967w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Having analyzed the legal aspects of the changes and the consequences of the introduced regulation, it is worth considering the practical consequences they may have, among other things, for the day-to-day operations of the entities concerned.</p>



<p>One of the most obvious areas of impact of the new regulations is the activities of marketing authorization holders (MAHs). These are entities that apply for or obtain marketing authorization for a given medicinal product and are then responsible for monitoring its use after it is launched on the market.</p>



<p>Once the new regulations are implemented, MAHs will be required to develop and maintain dedicated websites for each product marketed in the EU. This requires implementing appropriate operational procedures and subjecting the resulting reports to external audit. Each website should include, among other things, a summary of the financial support granted for product-related research and development activities, even if these activities were conducted by external entities prior to the MAH&#8217;s involvement.</p>



<p>From a practical perspective, introducing such transparency could significantly impact the operations of pharmaceutical companies. Member States strive to increase the availability of medicinal products, and transparency in research funding and drug launches could strengthen the position of regulators and payers. Consequently, pharmaceutical companies may feel greater pressure during pricing and reimbursement negotiations, as information about financial support provided will be publicly available and could influence the assessment of drug launch costs.</p>



<p>In practice, this means not only additional administrative and auditing obligations for MAHs, but also careful planning of communication and negotiation strategies to ensure compliance with the regulations while maintaining competitiveness on the market.</p>



<p>As Member States strive to maintain and improve access to medicinal products and to introduce transparency regarding the public funds used in bringing medicinal products to market, the regulation may increase negotiating pressure on pharmaceutical companies: awareness and transparency of subsidies will inform price and reimbursement negotiations.</p>



<p>Full transparency may spark debate on the protection of trade secrets, know-how, and business secrets, regulated, among others, by Article 39 of the TRIPS Agreement. Significant concerns may arise regarding the risk of competitors replicating R&amp;D strategies, which raises the question of whether the proposed regulations contain sufficient clauses ensuring the protection of business secrets.</p>



<p>The proposed solution also appears significant from the perspective of the systemic application of EU law, in particular Article 107 TFEU on State aid. Such disclosure of financing could facilitate the identification of unlawful State aid, which could, among other things, streamline proceedings before the European Commission.</p>



<p>The introduction of the obligation to disclose information is part of strengthening social responsibility and consumer awareness regarding the implementation of ESG (Environmental, Social, Governance) standards.</p>



<p>Funding transparency can be classified as a governance element that reveals the relationship between the public sector and commercial entities. From a societal perspective, this transparency demonstrates that innovation is not solely the product of private pharmaceutical risk-taking but also the result of collaboration between public funds, academic infrastructure, and private capital.</p>



<p>In the social context, transparency of financing can influence the perception of the pharmaceutical industry by the public, patients, and public payers. Disclosure of subsidies can reinforce public expectations for appropriate pricing policies, while companies themselves may treat such transparency as an element of their reputation strategy and non-financial reporting.</p>



<p>At the international level, the issue of transparency in the financing of pharmaceutical innovation also appears in documents from the World Health Organization (WHO) and the OECD, which emphasize the importance of transparency in ensuring fair access to medicines and rationalizing pricing policies. Against this backdrop, the European Union&#8217;s approach can be viewed as more systemic and integrated with the regulatory lifecycle of medicinal products. At the same time, the question arises whether expanded disclosure obligations will lead to a situation of so-called &#8220;over-transparency,&#8221; where companies operating in the EU market will be subject to more stringent disclosure obligations than their competitors in other regions of the world, which may be significant for the global competitiveness of the European pharmaceutical sector.</p>



<p>A particularly concerning issue related to the proposed regulatory changes is the lack of clarity on enforcement strategies and the legal consequences attached to the new requirements. In practice, it may be necessary to determine which body &#8211; the EMA, the European Commission, or the relevant national authorities &#8211; will be responsible for verifying that the disclosed data match the actual state of R&amp;D funding. The absence of clear rules or sanctions for non-compliance may weaken the obligation&#8217;s practical significance. Regardless of formal legal sanctions, enforcement of the disclosure obligation may rely not only on legal instruments but also on market mechanisms and broadly understood social pressure.</p>



<p>The current legal regulations (de lege lata) provide the institutional and technical framework for collecting, standardizing, and publicly disclosing data on medicinal products, with the scope of this information remaining essentially limited to regulatory aspects related to the quality, safety, and efficacy of medicines. Article 57 of Regulation (EC) No 726/2004 provides a key legal basis for the EMA&#8217;s operation as a central information authority, but does not explicitly cover transparency obligations regarding research and development funding. From a de lege ferenda perspective, it can be argued that further development of transparency requirements, particularly regarding public financial support for R&amp;D, requires explicit support in substantive EU law, while leveraging the EMA&#8217;s existing data infrastructure and experience gained from implementing the tasks specified in Article 57. Therefore, the direction of potential legal changes may lie not in redefining the Agency&#8217;s role, but rather in gradually expanding the catalog of information related to medicinal products, while maintaining systemic coherence and a clear division of responsibilities between the EMA, the Commission, and the Member States.</p>



<p>Sources:<br />1. Regulation (EC) No 726/2004 of the European Parliament and of the Council of 31 March 2004 laying down Community procedures for the marketing authorisation and supervision of medicinal products for human and veterinary use and establishing a European Medicines Agency, OJ L 136, 30.04.2004, pp. 1–33.<br />2. Commission Implementing Regulation (EU) 2025/1466 of 22 July 2025 amending Implementing Regulation (EU) No 520/2012 on the performance of pharmacovigilance activities provided for in Regulation (EC) No 726/2004 and Directive 2001/83/EC, OJ EU L …, 23.07.2025.<br />3. Commission Implementing Regulation (EU) No 520/2012 of 19 June 2012 on the performance of pharmacovigilance activities provided for in Regulation (EC) No 726/2004 and Directive 2001/83/EC, OJ EU L 159, 20.06.2012, pp. 5–25.<br />4. European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down Union marketing authorisation procedures for medicinal products for human use and the functioning of the European Medicines Agency, COM(2023) 193 final, EUR-Lex.<br />5. Biznes.gov.pl, information on support for research and development activities and instruments for financing innovation, https://www.biznes.gov.pl/pl/portal/004268<br />6. European Medicines Agency, European Public Assessment Reports (EPAR), https://www.ema.europa.eu/en/medicines<br />7. Claudia Wild, Ozren Sehic, Louise Schmidt, Daniel Fabian, &#8220;Public contributions to R&amp;D of medical innovations: A framework for analysis,&#8221; Austrian Institute for Health Technology Assessment (AIHTA), Vienna, Austria.<br />8. Claudia Wild, Daniel Fabian, &#8220;Öffentliche Beiträge zur Arzneimittelentwicklung&#8221; (Public contributions to drug development).</p>


<p>Artykuł <a href="https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/new-transparency-requirement-for-public-financial-support-received-for-drug-research-and-development/">New transparency requirement for public financial support received for drug research and development</a> pochodzi z serwisu <a href="https://www.kg-legal.eu">KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kg-legal.eu/info/pharmaceutical-healthcare-life-sciences-law/new-transparency-requirement-for-public-financial-support-received-for-drug-research-and-development/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>EMA and FDA set common principles for AI in medicine development – January 2026</title>
		<link>https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/</link>
					<comments>https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/#respond</comments>
		
		<dc:creator><![CDATA[jakub]]></dc:creator>
		<pubDate>Thu, 12 Feb 2026 15:15:56 +0000</pubDate>
				<category><![CDATA[IT, NEW TECHNOLOGIES, MEDIA AND COMMUNICATION TECHNOLOGY LAW]]></category>
		<category><![CDATA[EMA]]></category>
		<category><![CDATA[EMA and FDA]]></category>
		<category><![CDATA[FDA]]></category>
		<guid isPermaLink="false">https://www.kg-legal.eu/?p=8629</guid>

					<description><![CDATA[<p>Publication date: February 12, 2026 In recent years, the importance of artificial intelligence (AI) in drug development, evaluation, and monitoring has grown significantly. AI technologies have the potential to accelerate research, improve predictions of drug efficacy and safety, and reduce the need for animal testing. At the same time, their use presents new challenges. AI [&#8230;]</p>
<p>Artykuł <a href="https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/">EMA and FDA set common principles for AI in medicine development – January 2026</a> pochodzi z serwisu <a href="https://www.kg-legal.eu">KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-cyan-blue-color"><strong>Publication date: February 12, 2026</strong></mark></p>



<p>In recent years, the importance of artificial intelligence (AI) in drug development, evaluation, and monitoring has grown significantly. AI technologies have the potential to accelerate research, improve predictions of drug efficacy and safety, and reduce the need for animal testing. At the same time, their use presents new challenges. AI models can make errors, be susceptible to unforeseen risks, or use data in a non-transparent manner. To fully realize the benefits of AI while minimizing risks, it is essential to establish clear and common principles for the use of these technologies. In response to these challenges, <strong>the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have jointly developed ten principles of good practice for the use of AI in the drug lifecycle</strong>. This document is foundational and provides a framework rather than binding legal regulation &#8211; it sets out general directions and guidelines that should guide drug manufacturers, applicants, and regulators. The principles indicate how AI should be designed and used to ensure it is ethical, safe, transparent, and based on reliable data. They also identify areas where international regulators, standards-setting organizations, and other collaborating entities can work together to promote good practice in drug development; these areas include conducting scientific research, creating educational tools and resources for market participants, international harmonization, and developing consensus standards.</p>



<p>To facilitate initial analysis, the principles can be grouped into three logical pillars. Principles 1-3 address organizational foundations and people, focusing on interdisciplinary team expertise and ensuring that AI remains under human control within specific, established governance processes. Principles 4-7 address technical quality and model integrity &#8211; the &#8220;heart&#8221; of the technology. Principles 8-10 address accountability and lifecycle, defining standards for documentation, clear communication with users, and continuous monitoring of the model after its implementation. Below is a detailed summary of the 10 principles of good practice for AI in the drug lifecycle:</p>



<span id="more-8629"></span>



<p><strong>1. Human-Centered Design. </strong>The development and use of AI technologies in the drug development lifecycle should be consistent with ethical values and human-centered. The ethical principles cited in the EMA and FDA documents do not constitute a standalone normative framework; rather, they were drawn from earlier standards for the protection of fundamental rights and bioethics and then incorporated into the &#8220;Trustworthy AI&#8221; framework developed by the High-Level Expert Group on AI (HLEG). The Assessment List for Trustworthy AI defines seven fundamental requirements for trustworthy AI, which provide a practical tool for implementing ethical values in AI systems. These principles assume that an AI system should support human decisions, enable human intervention, and not make decisions autonomously, which is directly related to the premise of &#8220;human-centeredness.&#8221; Another requirement is the AI&#8217;s technical resilience to errors, failures, and attacks, ensuring the system&#8217;s security and predictability. From an ethical perspective, it is also crucial that all collected and processed data comply with applicable law, and that the system&#8217;s decision-making processes remain fully verifiable. AI implementation should consider the potential risks associated with its use and provide mechanisms for oversight, verification, and preventive measures to minimize undesirable consequences. The document also emphasizes that AI systems must not contribute to exacerbating existing prejudices or discrimination, but should instead promote equality, justice, and the well-being of people and the environment. A final important principle is accountability – individuals and organizations that design, implement, and use AI systems are responsible for their performance and consequences.</p>



<p>The AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council) contains many similar terms, such as &#8220;human-centric&#8221; and &#8220;trustworthy AI&#8221;, although the act itself does not provide a formal definition. This term appears repeatedly in the preamble and in Article 1, allowing for interpretation of its meaning in regulatory and practical contexts. The AI Act establishes a legally binding framework for AI systems in the EU, including prohibitions on certain practices, transparency obligations, and risk management requirements for high-risk systems. Therefore, conceptual overlap can be observed between the HLEG/EMA/FDA guidelines and the AI Act, although they serve different functions – the guidelines provide ethical and design direction, while the AI Act defines legal obligations.</p>



<p><strong>2. Risk-Based Approach</strong>. The development and use of AI technologies follows a risk-based approach, with proportionate validation, risk mitigation, and oversight based on the context of use and model-specific risk. The &#8220;risk-based approach&#8221; is the model known from the AI Act, which distinguishes four risk levels for AI systems: unacceptable risk, high risk, limited (transparency) risk, and minimal or no risk. Article 5 of the AI Act introduces a catalog of prohibited practices, covering AI systems whose use is deemed unacceptable due to a threat to fundamental rights. In the context of the use of AI in drug research, the prohibitions relating to the protection of individual autonomy and vulnerability deserve particular mention, especially the prohibition on the use of manipulative systems and systems exploiting the specific vulnerabilities of clinical trial participants. Moving on to the high-risk category, AI systems used in the development of medicinal products are generally not classified as high-risk under Annex III of the AI Act, but may be deemed so under Article 6(1) if they constitute a &#8220;safety-related component&#8221; of a sectorally regulated product and are subject to mandatory conformity assessment before being placed on the market or put into use. A safety-related component is one whose failure or malfunction could endanger the health, safety, or security of individuals or property. Systems explicitly listed in Annex III are also considered high-risk, including remote biometric identification systems, crime risk assessment systems, and systems that make decisions significantly affecting an individual&#8217;s legal status.
The legislator has provided for the possibility of exempting certain systems from this category if they do not pose a significant risk to health, safety, or fundamental rights and do not significantly influence the outcome of the decision-making process. This exemption requires a documented self-assessment by the provider and is subject to review by the competent national authorities. For limited-risk AI systems, the legislator primarily provides for transparency obligations aimed at preventing users from being misled. These include, among other things, the obligation to disclose that the system is based on AI and to indicate its limitations. The category of minimal-risk systems covers all other AI systems that do not fall into any of the above-mentioned groups. For these systems, the AI Act imposes no specific regulatory requirements, leaving providers free to design and deploy them.</p>
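<p>To make the tiering described above more tangible, the decision logic can be sketched in code. The following is a minimal illustrative sketch only &#8211; the flag names, data structure, and decision order are simplifying assumptions made for illustration, not a legal classification tool under the AI Act.</p>

```python
# Illustrative sketch of the AI Act risk-tier triage discussed above.
# All field names are assumptions for illustration; real classification
# requires legal analysis of the concrete system and its context of use.
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    prohibited_practice: bool     # falls under the Article 5 catalog (e.g. manipulative systems)
    safety_component: bool        # Article 6(1): safety-related component of a regulated product
    annex_iii_listed: bool        # explicitly listed high-risk use case in Annex III
    documented_exemption: bool    # provider's documented self-assessment of no significant risk
    interacts_with_users: bool    # triggers transparency obligations


def classify_risk_tier(p: AISystemProfile) -> str:
    if p.prohibited_practice:
        return "unacceptable"
    if (p.safety_component or p.annex_iii_listed) and not p.documented_exemption:
        return "high"
    if p.interacts_with_users:
        return "limited (transparency)"
    return "minimal"
```

Under these assumptions, a drug-development model acting as a safety-related component without a documented exemption would land in the high-risk tier, while a purely internal analytics tool would fall to minimal risk.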



<p>The reference to &#8220;proportionate validation&#8221; should be understood as a consequence of the AI Act&#8217;s adoption of a risk-based approach. This means that the scope, intensity, and formalization of validation processes for AI systems should be tailored to the level of risk the system poses to health, safety, or fundamental rights. In other words, the scope of validation, oversight, and requirements for an AI system depend on both the level of risk posed by the model and the context of its use. The greater the potential risk, the more stringent the requirements.</p>



<p><strong>3. Compliance with standards. </strong>AI technologies must comply with applicable legal, regulatory, technical, and ethical standards, including the principles of good pharmaceutical practice and data protection. Legally, this includes, among others, Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (AI Act), which establishes obligations for AI system operators, prohibitions on certain practices, and requirements for high-risk systems, as well as the personal data protection provisions of the GDPR, anti-discrimination directives, and national regulations on clinical trials and the marketing of medicinal products. Regulatory and good-practice standards include Good Manufacturing Practice (GMP), Good Clinical Practice (GCP), the ICH guidelines, and the use of regulatory sandboxes, which enable testing of new technologies in a controlled environment while maintaining the safety of patients and researchers. Technically and ethically, the aforementioned HLEG/ALTAI guidelines for trustworthy AI must be taken into account. Additionally, compliance with ISO standards for information security and the validation of medical algorithms ensures the technical consistency and reliability of AI systems in the pharmaceutical context.</p>



<p><strong>4. Clear context of use</strong>. Every AI system must have clearly defined objectives and a precisely defined scope of application, as reflected in the ISO/IEC 42001 standard. This standard requires organizations to document the context in which the system is used, specify the decision-making processes it supports, and identify the risks arising from its specific nature. This requirement is inextricably linked to the qualification of the risk level under the AI Act. According to Article 6 of this regulation, a precise definition of the system&#8217;s functions and its potential impact on health and property is necessary to determine whether the AI constitutes a &#8220;safety-related component&#8221; of a product. High-risk systems must have a transparently defined purpose, which is a sine qua non for conducting a reliable compliance assessment and implementing adequate oversight.</p>



<p><strong>5. Multidisciplinary expertise</strong>. Establishing the operational framework for the system requires multidisciplinary expertise, which should be integrated into the project throughout the technology lifecycle. This principle reflects the complexity of contemporary solutions, particularly general-purpose AI (GPAI) models. It is worth noting that their characterization in the pharmaceutical regulatory guidelines explicitly refers to the legal definition contained in Article 3(63) of the AI Act, which describes models characterized by large scale, the ability to learn from diverse data, and competence in performing a wide range of tasks. According to the EMA/FDA, collaboration between specialists in AI technology, biology, pharmacy, law, and ethics is not merely a formality but a necessary condition for ensuring model reliability. A multidisciplinary approach guarantees higher-quality input data, correct interpretation of results in the specific clinical context, and full consideration of the regulatory environment. This concept aligns with the AI Act&#8217;s classification of &#8220;high-impact&#8221; models, where systems with a significant potential to generate systemic risks are subject to special scrutiny. In pharmaceutical practice, the involvement of experts from various fields makes it possible to create a model that not only complies with the law but, above all, works effectively and safely in real medical applications.</p>



<p><strong>6. Data and Documentation Management</strong>. Another pillar of technology security is data and documentation management, which must be transparent and verifiable throughout the drug lifecycle. According to EMA and FDA guidelines, every stage of data processing and every analytical decision must be documented in a way that allows for full reconstruction of events. In pharmaceutical practice, this means maintaining the ALCOA++ standard, which has evolved from the original five principles to the current set of ten attributes, including completeness, consistency, and durability of the record. This documentation cannot be limited to final results; in accordance with GxP requirements, it must encompass the entire &#8220;data engineering&#8221; process, from the original source to the final input into the AI model. This is crucial in the context of regulatory audits: an analysis of FDA activity indicates that as many as 80% of warning letters regarding data integrity issued in recent years resulted from gaps in this area. Applying the ALCOA++ standard in the age of artificial intelligence requires healthcare entities to implement advanced audit trails that record every modification. Inspection bodies such as the EMA and the Polish Chief Pharmaceutical Inspectorate (GIF) currently expect not only system logs but also proactive and systematic review of these logs to detect potential manipulation or human error. This process must also take into account the protection of privacy and sensitive data, which links GxP requirements with GDPR obligations. In this context, it is particularly important that the data be &#8220;enduring&#8221; and &#8220;available.&#8221; Such supervision ensures the credibility and verifiability of data obtained using AI.
It should be noted that &#8220;GxP&#8221; is an umbrella term for specific regulated sub-areas, such as: GMP = Good Manufacturing Practice; GDocP = Good Documentation (Record-Keeping) Practice; GDP = Good Distribution Practice; GEP = Good Engineering Practice; GAMP = Good Automated Manufacturing Practice.</p>
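<p>The kind of tamper-evident audit trail described above can be sketched in a few lines of code. This is an illustrative sketch only &#8211; the field names and hash-chaining design are assumptions made for illustration, not a GxP-validated implementation.</p>

```python
# Illustrative sketch of an append-only audit trail in the spirit of ALCOA++:
# each record carries the hash of its predecessor, so any later edit to an
# earlier record breaks the chain and is detectable on review.
import hashlib
import json
from datetime import datetime, timezone


class AuditTrail:
    def __init__(self):
        self.records = []

    def append(self, user: str, action: str, detail: str) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),  # contemporaneous
            "user": user,                                         # attributable
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,  # links each record to its predecessor
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.records.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every hash in order; any modification breaks verification.
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != r["hash"]:
                return False
            prev = r["hash"]
        return True
```

The systematic log review that inspectors expect then reduces, in this sketch, to calling <code>verify()</code> and investigating any chain break.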



<p><strong>7. Model Design and Development Practices. </strong>The seventh principle is a technical confirmation that the AI system was not created haphazardly, but was built according to rigorous engineering standards. In pharmaceutical practice, this primarily means applying the GAMP 5 standard, which requires that each algorithm function be tested and verified before use (validation). Because AI systems have the ability to continuously learn, this principle also introduces modern operational oversight (MLOps), which acts as a quality monitor. This protects the model from losing its effectiveness after it leaves the laboratory and reaches hospitals. A key element of safe design is the selection of data that is &#8220;fit for purpose.&#8221; This means that the model cannot learn from random information: the data must be representative, reflecting the diversity of patients (e.g., in terms of age, gender, or ethnicity), which prevents the development of erroneous algorithmic biases. This gives the model generalizability, ensuring that it will perform safely on every new patient, not just on the narrow group of individuals on whom it was trained. Regulators (EMA/FDA) prioritize moving away from &#8220;black box&#8221; models toward explainability (Explainable AI &#8211; XAI), which finds technical support in the ISO/IEC 23894 standard. As part of risk management, this standard requires that the system be able to present a logical rationale for its decisions. This means that the algorithm must indicate which specific medical parameters prevailed in a given clinical assessment, allowing the physician to substantively verify the result. This vision is complemented by the ISO 9241 (Human-Centered Design) standard. In medical AI, HCD is not understood as interface aesthetics, but as a safety architecture that counteracts thoughtless deference to machine suggestions. 
In accordance with the principles of this standard, such as error tolerance and controllability, the system design must minimize the effects of human error and guarantee the user the ability to override AI suggestions at any stage. The principle of self-descriptiveness, in turn, requires the system to clearly communicate its state and confidence limits, which directly addresses the technical robustness requirement stipulated in the EU AI Act. Ultimately, this ensures that the entire decision-making process of the algorithm is fully verifiable and trustworthy.</p>



<p><strong>8. Risk-based performance assessment. </strong>Risk-based performance assessments evaluate the entire system, including human-AI interactions, using data and metrics appropriate to the intended context of use, supported by predictive performance validation through appropriately designed testing and evaluation methods. Although the concept of a &#8220;risk-based approach&#8221; was discussed in detail in Principle 2, in the context of classifying systems under the AI Act, under Principle 8 it has a more operational dimension. In short, we no longer ask whether a system is risky, but rather to what extent we need to test it to meet safety requirements. A key element of this principle is defining and monitoring human-AI interaction. According to Article 14 of the AI Act, high-risk systems must be designed to enable effective human oversight. In pharmaceutical practice, this means creating a mechanism for continuous learning under human oversight, where the human is not just a passive recipient of the result, but an active operator filtering the algorithm&#8217;s suggestions. This model of collaboration allows for the verification of AI decisions and effectively counteracts the phenomenon of over-trust, in which medical personnel could uncritically accept erroneous system recommendations. A reliable performance assessment also requires the implementation of failure mode analysis. Instead of focusing solely on confirming the model&#8217;s predictive effectiveness, AI implementers must deliberately identify the system&#8217;s weaknesses and moments when the algorithm may miss safety signals (e.g., rare adverse drug reactions).</p>



<p>Referring to Article 15 of the AI Act is crucial here, as it imposes the obligation to ensure a high level of robustness and accuracy. In practice, this means that system validation cannot be limited to simulations under ideal conditions; it must include testing how the system responds to intentionally erroneous, incomplete, or unusual medical cases. In this context, the concept of fit-for-purpose data takes on a concrete meaning. Looking at the SPIFD methodology (Structured Process to Identify Fit-For-Purpose Data) and FDA guidelines, data &#8220;appropriateness&#8221; should be understood as a selection process based on two specific parameters: reliability and relevance. Reliability does not refer to the substantive content itself, but rather to the technical reliability of the source; the researcher must prove that the data is consistent and complete. Relevance, in turn, requires the researcher to answer the question of whether the dataset (often derived from real-world data) actually represents the target population and contains the variables necessary to answer a specific clinical question.</p>



<p><strong>9. Lifecycle Management. </strong>Risk-based quality management systems are implemented throughout the AI technology lifecycle, specifically to identify, assess, and respond to emerging issues. AI technologies are subject to planned monitoring and periodic reassessment to ensure their proper functioning, for example in the context of changes in input data. This is therefore not a one-time validation, but continuous oversight.</p>
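<p>One common technical form of such periodic reassessment is monitoring for input data drift. The following is an illustrative sketch only &#8211; the population stability index (PSI) and the alert threshold used here are conventional choices made for illustration, not a metric mandated by the EMA/FDA principles.</p>

```python
# Illustrative sketch: comparing the distribution of live input data against
# the training baseline with a simple population stability index (PSI).
# The 0.2 alert threshold is a commonly used rule of thumb, assumed here
# for illustration only.
import math


def psi(expected: list, observed: list, bins: int = 10) -> float:
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def fractions(data):
        counts = [0] * bins
        for x in data:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Replace empty bins with a small pseudo-count to avoid log(0).
        return [(c or 0.5) / len(data) for c in counts]

    e, o = fractions(expected), fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))


def drift_alert(expected, observed, threshold: float = 0.2) -> bool:
    """True when the live data has drifted beyond the assumed threshold."""
    return psi(expected, observed) > threshold
```

In a lifecycle-management process of the kind described above, such a check would run on a schedule, with an alert triggering human review and, if necessary, model re-validation.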



<p><strong>10. Clear and meaningful information. </strong>Results generated by AI should be presented in a simple and understandable way, so that users and patients can truly understand their meaning, significance, and limitations. In this context, a reference to Article 13 of the AI Act, which explicitly imposes a transparency requirement, may be helpful. The regulation requires specific parameters to be disclosed, while the EMA/FDA principle emphasizes language and communication. How is &#8220;clear language&#8221; understood? This is not an empty phrase, but a specific requirement based on standards such as ISO 24495-1 (Plain Language). In pharmaceutical and clinical practice, this means moving away from hermetic vocabulary toward messages accessible to the &#8220;average citizen.&#8221; Not only style matters, but also the structure of the text itself: long, complex passages should be avoided in favor of short sentences, preferably in the active voice rather than the passive. Ultimately, clarity of communication comes down to explaining what a given result means in practice, in the specific case at hand.</p>



<p>The above analysis aimed to dissect the general EMA/FDA principles and demonstrate the specific technical and legal requirements underlying each term. This is an attempt to clarify the general terminology and demonstrate that each of these 10 principles has a technical equivalent that must be met for an AI model to be considered safe and reliable in a regulated environment.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><strong>Principle</strong></td><td><strong>Explanation</strong></td></tr><tr><td><strong>1. Human-centered design</strong></td><td>Translating the ethics of Trustworthy AI into practice; the system must support human decision-making (human-in-the-loop) and be resilient to attacks and errors.</td></tr><tr><td><strong>2. Risk-based approach</strong></td><td>Classification of the system under the AI Act; the scope of validation and oversight depends on whether the AI is a critical &#8220;safety component&#8221; of the medicinal product.</td></tr><tr><td><strong>3. Compliance with standards</strong></td><td>Combining the new legal framework with classic pharmaceutical practices: GxP, ICH and ISO standards to ensure full legality and quality.</td></tr><tr><td><strong>4. Clear context of use</strong></td><td>Obligation to document intended use (ISO/IEC 42001); precise definition of where the model&#8217;s competence ends and the risk of error begins.</td></tr><tr><td><strong>5. Multidisciplinary expertise</strong></td><td>Collaboration between IT, medicine and law as a filter for GPAI models; guaranteeing that the technical result will be correctly interpreted clinically.</td></tr><tr><td><strong>6. Data and documentation management</strong></td><td>Maintaining data integrity according to ALCOA++; a full audit trail allowing every decision and model modification to be reconstructed.</td></tr><tr><td><strong>7. Model design and development practices</strong></td><td>Moving from black boxes to explainability (XAI); using GAMP 5 engineering and MLOps post-implementation monitoring.</td></tr><tr><td><strong>8. Risk-based performance assessment</strong></td><td>Testing for resistance to data errors (Art. 15 AI Act) and selecting data in terms of their reliability and relevance to the patient population.</td></tr><tr><td><strong>9. Lifecycle management</strong></td><td>Replacing one-time validation with continuous supervision; systematically detecting model quality degradation (drift) in real time.</td></tr><tr><td><strong>10. Clear and meaningful information</strong></td><td>Translating statistics into plain language (ISO 24495-1); using short active sentences to facilitate quick medical decisions.</td></tr></tbody></table></figure>



<p>Full text of FDA and EMA Guidelines can be accessed here:</p>



<p><a href="http://www.ema.europa.eu/en/documents/other/guiding-principles-good-ai-practice-drug-development_en.pdf">www.ema.europa.eu/en/documents/other/guiding-principles-good-ai-practice-drug-development_en.pdf</a></p>



<h2 class="wp-block-heading">Sources:</h2>



<p>European Medicines Agency &amp; US Food and Drug Administration. (2026, January 14). Guiding principles of good AI practice in drug development (EMA/FDA joint principles). European Medicines Agency.</p>



<p>High-Level Expert Group on Artificial Intelligence. (2020). Assessment List for Trustworthy AI (ALTAI). European Commission.</p>



<p>Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act).</p>



<p>Maria Jędrzejczyk, Lukasz Szoszkiewicz, Jędrzej Wydra (eds.), AI Act. Artificial Intelligence Act: Commentary.</p>



<p>Quanticate, The ALCOA++ Principles for Data Integrity in Clinical Trials, August 28, 2025.</p>



<p>Kasia Szczesna, Explainable AI (XAI) &#8211; the key to understanding artificial intelligence, 05.12.2024.</p>



<p>IQVIA, Blog: Understanding AI, Data and Human Interaction in Pharmaceutical Development, Mike King, Senior Director of Product &amp; Strategy, IQVIA, 06/02/2024.</p>



<p>National Library of Medicine &#8211; The Structured Process to Identify Fit-For-Purpose Data: A Data Feasibility Assessment Framework.</p>
<p>The article <a href="https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/">EMA and FDA set common principles for AI in medicine development – January 2026</a> originally appeared on <a href="https://www.kg-legal.eu">KIELTYKA GLADKOWSKI LEGAL | CROSS BORDER POLISH LAW FIRM RANKED IN THE LEGAL 500 EMEA SINCE 2019</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/ema-and-fda-set-common-principles-for-ai-in-medicine-development-january-2026/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
