Publication date: April 15, 2025
Artificial intelligence and machine learning in medicine have gained importance in recent years. AI algorithms are used to support clinical decision-making and image analysis, offering doctors suggestions on the choice of therapy and drugs and on the assessment of potential side effects.
In medical imaging, machine learning is used to analyze computed tomography, X-ray and magnetic resonance images to identify pathological changes that radiologists might miss. Studies show that artificial intelligence algorithms achieve results comparable to, and in many cases better than, those of the best diagnosticians. One effect is a reduction in false positive diagnoses (results that wrongly indicate the presence of a disease), which in turn reduces unjustified anxiety among patients. Researchers in the United States have confirmed that decision-support tools can reduce the rate of false positive diagnoses and improve drug dosing. Experts predict significant savings in the healthcare sector, provided that AI-based solutions are implemented consistently. However, the implementation of AI in medicine raises numerous legal and ethical challenges that require special attention, especially in the context of medical data protection.
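To make the false-positive metric concrete, the rate can be computed directly from a confusion matrix. The sketch below uses purely illustrative numbers, not figures from any cited study:

```python
def false_positive_rate(fp: int, tn: int) -> float:
    """FPR = FP / (FP + TN): the share of actual negatives (healthy
    patients) that the system wrongly flags as positive (diseased)."""
    return fp / (fp + tn)

# Hypothetical screening of 1000 healthy patients:
baseline_fpr = false_positive_rate(fp=90, tn=910)  # radiologist alone
assisted_fpr = false_positive_rate(fp=50, tn=950)  # with decision support
```

Lowering the FPR from 9% to 5% in this toy scenario would mean 40 fewer healthy patients per thousand receiving an alarming (and incorrect) result.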
Electronic Medical Record (EMR)
In Poland, the Electronic Medical Record (EMR), an integrated system that collects patient health data, has been under construction for over a decade. In accordance with the adopted regulations, since July 1, 2021, every doctor and medical facility has been required to report medical events in this system. However, many healthcare units previously collected such information independently.
Types of medical data used to train algorithms:
1. Data from electronic health records.
2. Data obtained as a result of diagnostic tests, including medical images such as X-rays and magnetic resonance images (MRI).
3. Genotype data, including DNA sequencing.
4. Data collected by wearable devices, such as smartwatches and blood pressure monitors.
A patient’s treatment often ends while their test results remain in a hospital or clinic database. Even if the data on a specific case are no longer needed for that patient, comparing them with the results of other patients struggling with similar health problems can allow doctors to notice patterns and regularities in the development of the disease, which may improve treatment in the future.
Protection of privacy in view of medical data
Data must undoubtedly be legally protected from unauthorized persons. Responsibility for managing personal data protection rests with the Data Protection Officer (DPO). Their role is not limited to monitoring compliance with the GDPR; they should also be well versed in aspects such as:
– risk analysis (the DPO should conduct detailed risk analyses related to the use of AI in data processing; understanding potential threats allows appropriate safeguards to be introduced),
– transparency of algorithms (overseeing the clarity and transparency of AI algorithms that may influence decisions about individuals is crucial; the DPO should ensure that users are aware of how and why their data is being processed),
– the right to explanation (entities processing data must be ready to provide information on decisions made by AI systems; the DPO should ensure that data subjects can obtain explanations).
The Polish Patient Rights Ombudsman’s view on the use of AI in patient treatment
The Polish Patient Rights Ombudsman pointed out that legal regulations do not prohibit the use of artificial intelligence in the process of treating patients. He also emphasized that the doctor is responsible for the diagnosis, implemented treatment and monitoring the patient’s health.
His words are confirmed by Article 12 of the Code of Medical Ethics. The use of artificial intelligence algorithms in medicine is permissible under four conditions:
– informing the patient about the use of AI in the diagnosis or therapy process;
– obtaining the patient’s informed consent to the use of AI;
– using AI algorithms that have been approved for medical use and have the appropriate certificates;
– making the final diagnostic and therapeutic decision by the doctor.
It is currently assumed that artificial intelligence, being essentially a machine, i.e. a product of human creativity, cannot have legal capacity. Therefore, the actions of artificial intelligence cannot be attributed to it as a legal entity. Artificial intelligence has no legal identity, which prevents it from suing or being sued. Just as it cannot be recognized as the author of the works and solutions it has created, it also cannot be held responsible for its actions.
Patient rights are regulated by various regulations, including the Civil Code, the Code of Civil Procedure, the Penal Code and the Act on Patients’ Rights and the Patient Rights Ombudsman. The aforementioned General Data Protection Regulation (GDPR) becomes key in this topic. It addresses issues such as:
– regulations on the processing of personal data in a medical context,
– patient consent as the foundation for the legality of data processing. It must be conscious, voluntary and expressed in a clear manner,
– patient rights: the right to access data (therefore algorithms must be transparent), the right to rectify them, the right to delete data, i.e. to “be forgotten” and the right to transfer data.
When looking for answers to questions about the security of data used by algorithms, it is worth becoming familiar with the Act of 28 April 2011 on the health care information system. It covers issues such as the principles of transferring data to the information system (Art. 32), access to processed data (Art. 35), data security (Art. 37) and the keeping of registers, records and medical data sets (Art. 53).
At the European level, a regulation establishing a European Health Data Space (EHDS) has recently entered into force. This regulation may revolutionise the way in which European citizens and doctors use medical data. The EHDS will ensure that every patient in the European Union has easy, secure and immediate access to their medical records, both in their country of residence and in any other Member State. Patients will be able to decide for themselves who may access their electronic health records (EHR) and to what extent, and they will have free access to those records from anywhere in the EU, regardless of their current location. Doctors in any EU Member State will be able to quickly review a patient’s health history, which will significantly facilitate diagnosis and treatment, and all data will be protected in accordance with the highest security standards.
Data encryption
Under the GDPR, medical facilities must apply appropriate personal data protection measures. These may include controlling access to IT systems, auditing user activity and implementing emergency procedures to ensure the security of patient data. Also key are data anonymization (removing personal information so that individuals cannot be identified), encryption (transforming data into a form unreadable by unauthorized persons) and secure storage (applying appropriate physical and digital safeguards to personal data). Encryption is undoubtedly the best way to ensure a high level of security and prevent costly data breaches. It can be implemented in both hardware and software. The hardware approach, which does not require software support, is an especially effective means of protecting private healthcare information from unauthorized access. Encrypted USB drives are a good and easy-to-use solution that effectively protects personal data, especially when it must be transferred.
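Alongside encryption, pseudonymization is a common software-level safeguard: direct identifiers are replaced with keyed hashes so that records can still be linked for analysis without exposing who the patient is. The sketch below (all names and the sample record are hypothetical) uses HMAC-SHA256 from the Python standard library; note that under the GDPR, pseudonymized data still count as personal data, since the key holder can re-identify patients:

```python
import hashlib
import hmac

# Assumption: in a real deployment this key would be stored securely,
# separately from the dataset (e.g., in a key management service).
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always yields the same pseudonym, so records for one
    patient remain linkable; without the key, the original identifier
    cannot be recovered or guessed by brute-forcing plain hashes.
    """
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record: identifier is pseudonymized, clinical fields kept.
record = {"patient_id": "PESEL-90010112345", "diagnosis": "J45", "age": 34}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
```

A keyed hash is preferable to a plain hash here because national identifiers have low entropy: without the secret key, an attacker could enumerate all possible IDs and match them against plain SHA-256 digests.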
Commercialization of trained algorithms
Commercialization of trained algorithms for analyzing medical data can be done in different ways, depending on the characteristics of the algorithm, its potential applications, and the target market. Here are some examples that illustrate possible approaches to commercializing these technologies:
– software licensing
Trained algorithms can be made available to other companies or institutions under a license. A software vendor can license its product to medical institutions such as hospitals, clinics, laboratories and manufacturers of medical equipment. In this model, the company offering the algorithm can charge for access to the software, for example through an annual subscription license or per-user fees.
– medical applications
Algorithms can be integrated with dedicated medical applications that support the work of doctors, diagnosticians, and other professionals in the field of medicine. These can be applications designed for the analysis of medical images (e.g. X-rays, MRI), computer-aided diagnostics, or personalized monitoring of the condition of patients. Such applications can be sold directly to private clinics, hospitals, and doctors.
– clinical decision support systems
Algorithms can be implemented in clinical decision support systems (CDSS), which help doctors make decisions based on available medical data. These systems can be sold as comprehensive solutions for medical facilities, where the system analyzes patient data, suggests potential diagnoses or therapies and monitors treatment results.
– partnerships with pharmaceutical and biotechnology companies
Algorithms can be used to develop new drugs, therapies and medical products; they can speed up the analysis of clinical trial data, pointing to potential therapies or biomarkers.
– cloud-based medical data
Algorithms can be integrated with cloud platforms that offer real-time medical data analytics. Doctors, hospitals, and clinics can subscribe to these services, accessing the results of analyses in the cloud. These models can analyze data from various sources (e.g., test results, medical images, genetic data) and provide fast and efficient insight into the health of patients.
– training and consulting
Companies that have developed algorithms can offer training services for medical staff to help integrate these technologies into their daily practice. In addition, they can provide consulting services on implementing the algorithm in medical systems and analyzing the results obtained using these tools.
– collecting and selling anonymized data
Once algorithms have been trained on medical data, one way to commercialize them is to offer access to anonymized, aggregated datasets, which other companies can use for research or to develop new products. However, it is important to comply with data protection regulations, such as the GDPR, to ensure that patient data remain properly protected.
– cooperation with health insurers
Cooperation with health insurers opens up new possibilities for algorithms that can significantly support risk assessment, patient health management and the optimization of healthcare costs. Insurance companies can use such algorithms to predict treatment costs, improve the quality of care and tailor insurance offers to individual needs, which in consequence can create new sources of income for companies operating in the field of health technologies.
Algorithmic bias
It is noticeable that AI-based programs, such as digital assistants, bots and popular translation tools, can exhibit algorithmic bias. The causes of this problem are diverse: it can result from using the wrong data sets, from a lack of demographic diversity in the teams working on new technologies, or simply from “human error.”
The causes of algorithmic bias can be summarized in a few points:
– failing to provide algorithms with knowledge of socially unacceptable attitudes,
– training on non-diverse data sets that reflect the interests of only a narrow group (for example, an image recognition algorithm trained without images of people of color),
– feeding in historical data that can lead to erroneous conclusions.
All of these factors affect the quality of the data that a given algorithm receives, and therefore its subsequent performance.
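The effect of a non-diverse training set can be illustrated with a toy classifier: a decision threshold fitted only on data from one group transfers poorly to a group whose baseline differs. All numbers below are synthetic and purely illustrative:

```python
def fit_threshold(values, labels):
    """Pick a decision threshold halfway between the class means
    (assumes label 1 corresponds to higher biomarker values)."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(values, labels, thr):
    """Share of cases classified correctly at threshold thr."""
    preds = [1 if v >= thr else 0 for v in values]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Group A: the disease shifts a biomarker upward from a baseline near 1.0.
a_vals = [1.0, 1.2, 0.9, 2.0, 2.2, 2.1]
a_lab  = [0,   0,   0,   1,   1,   1]
# Group B, absent from training: same disease shift, but a higher baseline.
b_vals = [2.0, 2.2, 1.9, 3.0, 3.2, 3.1]
b_lab  = [0,   0,   0,   1,   1,   1]

thr = fit_threshold(a_vals, a_lab)  # fitted on group A only
acc_a = accuracy(a_vals, a_lab, thr)  # perfect on the training group
acc_b = accuracy(b_vals, b_lab, thr)  # every group-B patient flagged positive
```

Because group B’s healthy baseline already sits above the threshold learned from group A, every group-B patient is classified as diseased: a simple, reproducible instance of the training-data bias described above.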