KG LEGAL \ INFO

National Healthcare and the processing of personal data by means of AI

Publication date: November 12, 2025

Artificial intelligence (AI) is currently finding widespread use in healthcare. A prime example is an initiative of the Polish National Health Fund (NFZ), which uses AI to analyze patient data stored in the Fund’s databases. The data are analyzed with the support of machine-learning tools to inform strategic decisions regarding the health of Poles. Such an approach can simplify doctors’ work by searching for and analyzing the relevant information, thereby reducing their workload. However, it also raises a number of issues and legal requirements under the regulations on the protection and processing of personal data.

What is personal data?

The most common definition of personal data is contained in the EU Regulation 2016/679 (GDPR), according to which personal data is any information about an identified or identifiable natural person. This includes direct identification (e.g., name and surname) or certain factors allowing indirect identification (e.g., job description or nationality). This concept is expanded by the Polish Act on the Protection of Personal Data Processed in Connection with the Prevention and Combating of Crime of December 14, 2018, by applying it directly to health. Health data here means personal data relating to the physical or mental health of an individual, including data on the use of healthcare services that reveal information about their health.

Patients’ rights

A number of patient rights are listed in EU Regulation 2025/327 (the Regulation on the European Health Data Space). The fundamental one is the right of individuals to access their electronically collected data (especially “priority data,” e.g., electronic prescriptions or imaging test results), which should be granted immediately after the data are registered in the system. Individuals can also add their own information to the data already visible in the system and correct it. Furthermore, patients can grant access to their data or request its transfer to another provider. They may also restrict healthcare professionals’ access to their data (in which case the patient should be informed of the potential impact of such a restriction on the quality of care provided). Institutions collecting patient data should therefore be aware of these rights: if the rights are not respected, the patient may file a complaint (provided that the patient’s rights or interests have been adversely affected) and demand appropriate compensation.

The GDPR provides similar rights and, in addition, one crucial privilege: the right to object. Under the regulation, an individual may object at any time, for reasons relating to their particular situation, to the processing of their data, including processing connected with the performance of healthcare tasks. In such a case, the data may no longer be processed unless the controller demonstrates compelling legitimate grounds for further processing. A patient may also request the deletion of their personal data, for example where the data are no longer necessary for the purpose for which they were collected or where they were processed unlawfully. The regulation further provides for an administrative fine of up to EUR 20 million for a controller’s violation of these guaranteed rights, and Article 79 of the GDPR grants the right to an effective judicial remedy where the individual (patient) believes that the processing of their personal data violated the law.

Obligations of entities storing and processing data

Pursuant to Article 24 of the GDPR, the data controller is obliged to implement appropriate technical and organizational measures to ensure that data processing complies with the law and respects the rights and freedoms of others, and to review and update those measures as necessary. For AI-based processing of patient data, the related obligation of data protection by design and by default is also crucial: systems should be designed so that, by default, only the information necessary to protect the patient’s life and health is processed.

In the event of a personal data breach, the controller should notify the relevant supervisory authority within 72 hours of becoming aware of the breach; notification is not required where the breach is unlikely to result in a risk to the rights and freedoms of natural persons. If the risk is high, the controller should also notify the affected individuals. Furthermore, where processing, in particular using new technologies (including AI), is likely to result in a high risk to the rights and freedoms of natural persons, the controller must assess the impact of the planned processing on personal data protection before the processing begins. If this assessment indeed reveals a high risk and the controller has not implemented measures to mitigate it, the controller must consult the relevant supervisory authority (in Poland, the President of the Personal Data Protection Office [President of the UODO]), which then provides the controller with a written recommendation and may also temporarily restrict or prohibit the processing or issue a warning to the controller. The general principles also remain important: personal data must be processed lawfully, fairly and transparently, and limited to what is necessary for the purposes for which they are processed.

Regulation 2024/1689 (the “AI Act”) also introduces an interesting requirement here. Under Article 4, healthcare entities using AI systems to make strategic decisions about patients are responsible for ensuring an appropriate level of AI literacy among their staff, taking into account the context in which the systems are used and the persons on whom they are to be used.

Requirements for the AI systems themselves

The basic requirements that AI systems used for such data processing would have to meet are set out in the aforementioned AI Act, which classifies AI systems used in healthcare contexts of this kind as “high-risk systems”; the requirements of the act therefore apply to them. Primarily, appropriate documentation must be maintained for the system: technical documentation and documentation of the quality management system (covering, among other things, data acquisition, collection, analysis, and labeling), as well as an EU declaration of conformity confirming the system’s compliance with the requirements of the regulation. This documentation should be kept at the disposal of the competent national authorities for 10 years after the system is placed on the market or put into service. Such systems must also meet transparency requirements, meaning they should be designed to enable proper use and interpretation of their output (and should come with clear instructions for use). They must likewise provide for effective human oversight of the AI where necessary, especially if its operation were to get out of control and cause harm. Finally, high-risk AI systems are subject to formal requirements, such as undergoing a conformity assessment before being placed on the market and registration in a dedicated EU database of high-risk AI systems; they are also subject to the general CE-marking rules. Once a system is placed on the market, its provider is required to operate a post-market monitoring system to ensure its continued legal compliance.

The issue of non-personal data

It is also worth raising the issue of non-personal data (governed by Regulation 2023/2854, the Data Act), for example where a hospital, in order to decide on the appropriate medication for a patient, requests information about certain medications from a pharmacy. A public sector body may request such information only to the extent that the lack of the data would prevent it from performing its public-interest tasks, or where it has no other available means of obtaining the data. The request should specify, in particular, the purpose for which the information is needed. The data holder that receives the request may, however, refuse to comply if it does not have control over the requested information or if a similar request for the same purpose has already been submitted by another public sector body.

Once the requested information is in the requester’s possession, the requester must not use it in a manner inconsistent with the purpose for which it was provided, must take measures to protect its confidentiality and integrity, and must delete the data as soon as it is no longer needed for the specified purpose. The requester is also prohibited from using the information obtained to develop a competing product or from disclosing it to third parties for that purpose. At the same time, a public sector body may share the data received with individuals or organisations carrying out scientific research or analyses consistent with the purpose for which the data was requested, or with national statistical offices (e.g. the Central Statistical Office), provided that those organisations are not commercial in nature and are not related to entities that are.

The status in Poland

As mentioned above, Poland has established a special supervisory authority for personal data protection, the President of the Personal Data Protection Office (UODO), acting with the assistance of the Office for Personal Data Protection. Among other things, this authority is responsible for consultations on data processing that poses a significant risk of violating the rights of others. It also conducts proceedings in cases of violations of personal data protection regulations and establishes a plan for monitoring compliance with these regulations.

It is also worth remembering the provisions of the Polish Act on Patients’ Rights and the Patient Ombudsman. It stipulates that patients have the right to access medical records concerning their health and the services provided to them. The entity storing such documentation may disclose the data contained in it only to the patient or a person authorized by the patient (or, for example, to a university or research institute for scientific purposes, but without any data allowing identification of the individual). Furthermore, the entity providing services is obliged to retain medical records only for a specified period (generally 20 years), after which they should be destroyed in a way that prevents identification of the patient to whom they pertained. The Act on the Healthcare Information System additionally limits access to these records to physicians and other medical professionals.

Also important are the provisions of the Act on the computerization of the activities of entities carrying out public activities, under which an entity maintaining a public register (i.e. any type of records used to carry out public tasks based on the relevant provisions) should provide another public entity with access to the data in its possession to the extent necessary to carry out public tasks.

It is also important to remember the Polish Code of Medical Ethics, which, in Article 14, requires physicians to inform patients about the benefits and risks associated with proposed diagnostic procedures and, where appropriate, about the possibility of using other methods. Furthermore, according to Article 12, the use of AI in treatment may only occur after the following conditions are met: informing the patient that artificial intelligence will be used in the diagnosis or therapeutic process; obtaining the patient’s informed consent to the use of artificial intelligence in the diagnostic or therapeutic process; and using AI algorithms that are approved for medical use and have the appropriate certifications. However, the final decision always rests with the physician.

Draft government legislation is currently being prepared to adapt the national legal system to the requirements of the AI Act. The government’s proposals primarily envisage the establishment of an Artificial Intelligence Development and Security Commission, which will supervise the AI market within the scope specified in Article 2 of Regulation 2024/1689. The second main body will be the President of the Personal Data Protection Office (UODO), which will oversee high-risk AI systems, including those related to healthcare.

Summary

Processing personal data for healthcare purposes, additionally supported by artificial intelligence, is undoubtedly a convenient and practical solution, but it entails a number of legal obligations intended to ensure the security of the data used (e.g., using the acquired data only for a strictly defined purpose) and the safety of patients themselves (e.g., the obligation to inform the patient of the intention to use artificial intelligence in the treatment process), as well as purely formal requirements (e.g., registration of the artificial intelligence system in an EU database). For now, EU regulations remain considerably more detailed in this area than national ones.
