Publication date: September 16, 2025
On September 11th, we had the pleasure of participating in an event organized by the LifeScience Cluster on the application of artificial intelligence in healthcare. The webinar began with an explanation of the concept of AI itself, which remains contested and surrounded by myths. A key element of AI is machine learning.
Machine learning involves making predictions based on previously observed data. Deep learning, an approach built on neural networks, developed within machine learning; the difference between traditional machine learning and deep learning lies primarily in the number of neural network layers, which allow models to recognize increasingly complex relationships. The concept of data science, which sits between artificial intelligence and data analysis, was also mentioned: it is the art of combining data with practical application, allowing many processes to be automated and decisions to be better informed.
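To make the distinction concrete, here is a minimal, illustrative sketch in Python, assuming scikit-learn is available; the dataset is synthetic and the layer sizes are arbitrary, chosen only to contrast a single linear model with a multi-layer one.

```python
# A minimal sketch contrasting "shallow" machine learning with a small
# multi-layer network, using scikit-learn. All data is synthetic and
# purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A toy dataset standing in for the "previous data" a model learns from.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Traditional machine learning: a single linear decision boundary.
shallow = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deep learning in miniature: stacked layers, each able to capture more
# complex relationships than a single linear model can.
deep = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=2000,
                     random_state=0).fit(X_train, y_train)

print("shallow accuracy:", shallow.score(X_test, y_test))
print("deep accuracy:   ", deep.score(X_test, y_test))
```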
https://lifescience.pl/wydarzenie-klastra/sztuczna-inteligencja-w-medycynie-praktyczne-zastosowania-i-przyszlosc-opieki-zdrowotnej/
The discussion focused on generative AI – a class of AI systems that can create new content, data, or outputs that were not explicitly programmed into them. These systems learn patterns from training data and use them to generate realistic, contextually relevant output, often mimicking human creativity in tasks such as creating images, generating text, or even composing music.
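As a concrete illustration of text generation, the sketch below uses the Hugging Face transformers library; this is our own assumption for illustration, as the webinar did not name specific tooling, and GPT-2 is simply a small, freely available model.

```python
# A minimal, illustrative text-generation sketch using the Hugging Face
# `transformers` library (assumed installed); GPT-2 is just a small,
# freely available example model, not one discussed at the webinar.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with new text it was never explicitly
# programmed to produce -- the defining trait of generative AI.
result = generator("Artificial intelligence in healthcare can",
                   max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```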
An interesting question was posed: “Will we ever run out of data?” While this may seem impossible, it is a real problem. Synthetic data could be a solution, but generating it is not only complicated but also often inefficient.
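To illustrate one naive form of synthetic data, and why it is often inefficient, the sketch below fits simple per-feature statistics to a “real” sample and draws new records from them; the numbers are invented and the approach deliberately simplistic.

```python
# A naive synthetic-data sketch: fit simple per-feature statistics to a
# "real" sample and draw new records from them. All values are invented;
# real generators (e.g., for clinical data) are far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)

# Pretend patient measurements (rows: patients, columns: features).
real = rng.normal(loc=[120.0, 80.0, 70.0], scale=[15.0, 10.0, 12.0],
                  size=(200, 3))

# Modeling each feature independently discards correlations between
# features -- one reason naive synthetic data is often inefficient.
mu, sigma = real.mean(axis=0), real.std(axis=0)
synthetic = rng.normal(loc=mu, scale=sigma, size=(1000, 3))

print("real means:     ", np.round(mu, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```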
Modern healthcare utilizes NLP – a field of artificial intelligence that enables computers to understand and analyze human language. NLP allows programs to interpret text, recognize intentions, and conduct conversations with users. It has the potential to create assistants not only for patients, who could consult them, especially for conditions with specific and recurring symptoms, but also for doctors, thus streamlining healthcare operations. This solution may soon become an answer to the problems of today’s overloaded system.
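As a down-to-earth illustration of one NLP building block mentioned above, intent recognition, here is a minimal sketch assuming scikit-learn; the intents and example phrases are invented, and a production medical assistant would of course require far more data and clinical safeguards.

```python
# A minimal intent-recognition sketch with scikit-learn. The intents and
# training phrases are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "I have a headache and a fever",
    "my throat hurts and I keep coughing",
    "I need to book an appointment",
    "can I see the doctor on Friday",
]
intents = ["report_symptoms", "report_symptoms",
           "schedule_visit", "schedule_visit"]

# TF-IDF features feeding a simple classifier: enough to map a user's
# message to an intent the assistant knows how to handle.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_phrases, intents)

print(model.predict(["can I book an appointment with the doctor"]))
```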
Artificial intelligence is also used to analyze test results and monitor patient health, thanks not only to growing computing power but also to modern devices. One example presented during the webinar involved devices used to treat neurodegenerative diseases. AI also participates in the development of new drugs: by analyzing vast numbers of scientific articles, it checks where specific chemical compounds occur in relation to their uses, then builds mind maps of how a given compound might affect patients, grounded in empirical research.
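The literature-mining idea can be sketched very simply: scan article abstracts for co-occurrences of compounds and conditions and collect them into a relationship map. The abstracts and term lists below are invented, and real systems use far richer entity recognition than exact word matching.

```python
# A toy sketch of literature mining: count abstracts in which a compound
# and a condition appear together. Abstracts and term lists are invented.
from collections import defaultdict
from itertools import product

compounds = {"metformin", "aspirin"}
conditions = {"diabetes", "inflammation", "thrombosis"}

abstracts = [
    "Metformin remains first-line therapy in type 2 diabetes.",
    "Low-dose aspirin reduces thrombosis risk and inflammation markers.",
]

# Count compound/condition co-occurrences -- the raw edges of a
# compound-to-use "mind map".
edges = defaultdict(int)
for text in abstracts:
    words = set(text.lower().replace(".", "").replace(",", "").split())
    for compound, condition in product(compounds & words, conditions & words):
        edges[(compound, condition)] += 1

for (compound, condition), count in sorted(edges.items()):
    print(f"{compound} -- {condition}: {count} article(s)")
```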
Undeniably, AI is making increasingly bold inroads into the world of medicine, offering new diagnostic, therapeutic, and organizational capabilities. Machine learning algorithms can analyze medical images with accuracy comparable to, and sometimes even greater than, experienced specialists. In an age of aging populations and overburdened healthcare systems, AI could become a tool that reduces the workload of physicians and accelerates the diagnosis process.
On the other hand, many questions arise regarding the legal implications of using such technologies. The Polish legal system, like the laws of many other countries, has not kept pace with the development of artificial intelligence. As a result, a dangerous regulatory gap has emerged, in which patient safety and liability for potential errors remain unclear.
Traditionally, liability for medical errors rests with the physician or healthcare facility. Physicians are held professionally, civilly, and in extreme cases, criminally liable. However, the situation becomes more complicated when an artificial intelligence system enters the treatment process. If an algorithm misinterprets an X-ray or CT scan, and the physician bases their decision on this analysis, the question arises: who is actually at fault? Is it the physician who trusted the technology, the software manufacturer who created the flawed system, or perhaps the institution itself that decided to implement it?
An even greater problem is that artificial intelligence is not a static tool. Unlike traditional medical devices, which retain their properties once approved for use, algorithms often learn continuously, modifying their performance based on incoming data. This creates enormous difficulties in the certification and oversight processes. How can a system be verified if it functions completely differently after a month of operation than it did when it was first approved for use?
The discussion about AI in medicine cannot ignore the issue of patient rights. Under data protection regulations in force in Poland and the European Union, health information is considered particularly sensitive. However, to be effective, AI requires vast data sets that allow for learning and algorithm improvement.
The biggest problem, however, lies in the liability of artificial intelligence itself. In the Polish legal system, AI does not and cannot currently have legal personality. It is treated merely as a tool, similar to a stethoscope or ultrasound machine. Meanwhile, its actions increasingly go beyond simple physician support and take the form of autonomous decisions. In practice, this creates a certain “liability vacuum”. Affected patients may have great difficulty determining who to direct their claims to – the doctor, the facility, or perhaps the system manufacturer.
On June 13, 2024, a Regulation of the European Parliament and of the Council known as the AI Act (Regulation (EU) 2024/1689) was adopted, aimed at regulating the use of artificial intelligence, including in the area of healthcare. The regulation introduces a number of key requirements for AI-based systems. One of these is the obligation to implement a quality management system to ensure that the technology operates in accordance with the highest standards of safety and reliability. The AI Act adopts a risk-based approach: the higher the risk, the more stringent the requirements. Another crucial element is the requirement for human oversight – AI systems cannot operate fully autonomously but must remain under the control of specialists who can evaluate and verify their decisions.
The AI Act also imposes transparency and disclosure requirements regarding the use of artificial intelligence. This requires the implementation of appropriate technical and organizational measures to ensure that the system operates as intended and that users understand when they are interacting with AI technology.
The regulation also mandates the maintenance of detailed technical documentation regarding AI systems and the reporting of any incidents related to their operation. This will enable ongoing monitoring of the safety and effectiveness of these solutions, which is particularly important in the medical sector, where errors can lead to serious health consequences.
However, the Artificial Intelligence Act does not focus on the individual rights of people affected by AI – such as patients. For example, the regulation does not grant patients a right to object to a healthcare professional using an AI system for diagnosis. That said, Article 85 gives any natural or legal person – including patients – the right to file a complaint with a market surveillance authority if they suspect a violation of the AI Act. Furthermore, Article 86 introduces a right to an explanation of individual decision-making: the right to obtain from the entity deploying the system a clear and substantive explanation of the role of the AI system in the decision-making procedure and of the main elements of the decision taken.
While the regulation does not solve all problems, it represents a significant step toward streamlining the use of AI and increasing patient safety. Will it prove sufficient to resolve the issue of AI liability in medicine? Only the future and practice will tell, once the act is fully implemented.