
GDPR and AI Act: New Challenges in Privacy Protection in the Era of Artificial Intelligence

Publication date: April 07, 2025

1. General introduction to the topic:

We are currently witnessing a true technological revolution related to artificial intelligence.

An AI system generates answers from the database on which it was “trained”, and sometimes also from the prompts that users enter into it; the larger the database of information encoded into it, the more accurate its answers, because the technology can draw conclusions from the data it holds.

However, this technology also creates new challenges that national and EU legislation must address. Common examples of attacks on personal data using AI are:

  • Deepfakes – AI-generated or AI-manipulated images, audio, or video that resemble existing people, objects, places, entities, or events and that the recipient could wrongly believe to be authentic or real;
  • Phishing – a type of cyberattack in which the attacker impersonates someone trustworthy and manipulates the victim’s emotions to obtain sensitive data;
  • Jailbreaks – specially crafted commands and inputs designed to trick a deep learning model into giving an answer that violates its operating principles; this could result in, for example, the leaking of private information, the generation of prohibited content, or the creation of malicious code.

2. GDPR – Key Aspects of Privacy Protection

What is GDPR?:

GDPR is a set of regulations governing the use of personal data. These regulations set rules and limits that (mainly) entrepreneurs must follow when processing data. Companies therefore cannot freely use other people’s data – they must comply with specific rules. The GDPR is meant to be a general guarantee that our personal data are protected against excessive use or use by unauthorized persons. The Regulation also establishes dedicated roles and bodies, such as data protection officers, who ensure proper compliance with the rules, and public supervisory authorities, such as the President of the Personal Data Protection Office.

GDPR Rules:

  • The principle of fairness, lawfulness and transparency – personal data must be processed in accordance with the law; the data subject must be able to access their data and must be informed about how and for what purpose they are processed. The controller should act fairly, and the interests of the data subject are paramount.
  • The principle of purpose limitation – data should be collected for specified, explicit and legitimate purposes and not processed further in a manner incompatible with those purposes; further processing for archiving in the public interest, scientific or historical research, or statistical purposes is not considered incompatible.
  • The principle of data minimization – data must be adequate, relevant and limited to what is necessary for the purposes for which they are processed.
  • The principle of accuracy – data must be accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate in light of the purposes of processing are erased or rectified without delay.
  • The principle of storage limitation – data must be kept in a form which permits identification of the data subject for no longer than is necessary for the purposes of processing; personal data may be stored longer provided that they are processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes.
  • The principle of integrity and confidentiality – data should be processed in a way that ensures appropriate security, including protection against unauthorized or unlawful processing and accidental loss, destruction or damage, using appropriate technical or organizational measures.
  • The principle of accountability – the controller is responsible for compliance with the above principles and must be able to demonstrate that compliance.
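As a rough illustration of how the minimization and integrity principles above might translate into code, the sketch below filters a record down to a hypothetical whitelist of required fields and replaces a direct identifier with a one-way hash. All field names here are invented for the example; what counts as "necessary" always depends on the stated purpose of processing.

```python
import hashlib

# Hypothetical whitelist: the fields the stated purpose actually requires.
# Everything else in the record is dropped (data minimization).
REQUIRED_FIELDS = {"customer_id", "email", "order_total"}

def minimize(record: dict) -> dict:
    """Keep only the fields needed for the stated purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def pseudonymize(record: dict, key: str = "customer_id") -> dict:
    """Replace a direct identifier with a one-way hash -- a technical
    measure supporting integrity and confidentiality, since the hash
    alone no longer identifies the person."""
    out = dict(record)
    out[key] = hashlib.sha256(str(out[key]).encode()).hexdigest()[:16]
    return out

raw = {
    "customer_id": 42,
    "email": "a@example.com",
    "order_total": 99.0,
    "browsing_history": ["..."],   # not needed for the purpose -> dropped
    "birthplace": "Krakow",        # not needed for the purpose -> dropped
}
clean = pseudonymize(minimize(raw))
```

This is only a sketch: real pseudonymization under the GDPR also requires keeping the re-identification key separately and securely.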

Every person has rights arising from the Regulation, including:

  • the right to have one’s data protected by the organizations responsible for processing them,
  • the right to withdraw consent to data processing at any time,
  • the right of access – the data subject has the right to obtain information about how their data are processed,
  • the right to rectification – the data subject may request that the controller immediately rectify personal data that are inaccurate,
  • the right to be forgotten, i.e. the right to request that the controller erase the data immediately after receiving the request,
  • the right to restriction of processing where the data subject contests the accuracy of the data, the processing is unlawful or no longer necessary, or the data subject has lodged an objection,
  • the right to data portability – the data subject has the right to receive his or her personal data in a structured, commonly used and machine-readable format and to transmit them to another controller,
  • the right to object to the processing of personal data, in which case the controller may not process the data unless it demonstrates compelling legitimate grounds for the processing.
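The data portability right above only requires a "structured, commonly used and machine-readable format"; JSON is one common choice. A minimal sketch of such an export, using an invented record layout (the `SubjectData` fields are assumptions, not a format the GDPR prescribes):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record layout for one data subject.
@dataclass
class SubjectData:
    name: str
    email: str
    preferences: dict

def export_for_portability(data: SubjectData) -> str:
    """Serialize a data subject's records to JSON so they can be handed
    to the person or transmitted to another controller."""
    return json.dumps(asdict(data), indent=2, ensure_ascii=False)

payload = export_for_portability(
    SubjectData(name="Jan Kowalski",
                email="jan@example.com",
                preferences={"newsletter": False}))
```

In practice the export would cover all data held about the person, not a single record, and would be delivered over an authenticated channel.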

3. AI Act – Artificial Intelligence Regulation

AI Act goals and scope

The AI Act pursues several goals:

  • protection of citizens’ fundamental rights and freedoms, such as the right to privacy, non-discrimination and equal treatment;
  • increasing the transparency of AI systems and giving users control over how their data is used;
  • minimizing the risks associated with the use of AI, such as algorithm errors, biases and abuse of power;
  • supporting innovation and the development of safe and trustworthy AI;
  • determining the liability of entities placing AI systems on the market.

The Regulation’s scope of application is set out in Article 2:

1. This Regulation shall apply to:

(a) providers placing AI systems or general purpose AI models on the market in the Union, regardless of whether those providers are established or located in the Union or in a third country;

(b) entities using AI systems that are established or located in the Union;

(c) providers of AI systems and entities using AI systems that are established or located in a third country where the results produced by the AI system are used in the Union;

(d) importers and distributors of AI systems;

(e) product manufacturers who, under their own name or trademark and together with their product, place on the market or put into service an AI system;

(f) authorised representatives of suppliers not established in the Union;

(g) persons affected by AI who are located in the Union.

A breakdown of AI risks, especially in the context of privacy, is crucial to understanding the impact of these technologies on our lives and personal data. AI systems, especially those using personal data, can pose many privacy risks. Below is a breakdown of those risks and a classification of high-risk algorithms.

4. AI and Privacy Risks

a. Collection and processing of personal data

AI algorithms often require large data sets that may include personal data (e.g. health, financial, biometric, location data). Processing this data can lead to privacy breaches, especially if appropriate data protection measures are not applied.

b. Profiling and monitoring

AI can be used to profile users, collect information about them, and predict behavior. This approach can violate privacy because individuals can be tracked without their consent or knowledge, and their data can be used in non-transparent and unethical ways.

c. Lack of transparency of algorithms

Many AI systems operate in ways that are difficult to understand and verify, which can lead to problems controlling what data is collected, how it is used, and who has access to it. This creates risks of data misuse.

d. Data security

Datasets used by AI systems can become targets for cyberattacks. Hackers may attempt to intercept or manipulate data, which can lead to a breach of the privacy of the data subjects.

e. Disinformation and manipulation

AI can be used to create false information (e.g. deepfakes) or manipulate content that can influence political, social, or economic decisions. Such actions can threaten not only privacy but also social stability.

5. Classification of high-risk algorithms

AI algorithms can be classified according to their potential impact on privacy, security, and other social aspects. The European Commission has proposed regulations that help classify such algorithms into risk levels.

a. High Risk Algorithms

AI algorithms that could have a serious impact on people’s privacy and lives are classified as “high risk.” Examples include:

  • Facial recognition and other biometric technologies – can be used to track people in public spaces, raising concerns about surveillance and loss of anonymity.
  • AI in medicine – the use of AI to analyze health data requires special privacy protections, as misdiagnosis or misuse of health data can have serious consequences for the individual.
  • AI in employment – algorithms that evaluate job candidates can lead to discrimination and unjustified decisions, especially if the data used to train them is incomplete or biased.
  • AI in law and justice – AI-based systems can decide on the severity of sentences, which in the event of errors can have serious legal and personal consequences.

b. Low risk algorithms

These are AI systems that do not have such a strong impact on people’s lives and privacy. They are usually used in less sensitive areas, such as:

  • Recommendation algorithms in social media – although they influence user behavior, they are not directly related to a serious privacy risk.
  • Simple automated customer service systems – e.g. chatbots, which do not process sensitive data and are relatively safe, provided that appropriate security measures are maintained.
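The tiered approach described above can be sketched as a simple lookup. This is purely illustrative: the use-case names below are invented, and the AI Act’s actual classification turns on detailed conditions (notably the Article 5 prohibitions and the Annex III high-risk list), not on keywords.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited practice (Article 5)"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

# Hypothetical mapping of example use cases to risk tiers.
USE_CASE_RISK = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "biometric_identification": RiskLevel.HIGH,
    "recruitment_screening": RiskLevel.HIGH,
    "medical_diagnosis_support": RiskLevel.HIGH,
    "chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Return the assumed risk tier; unknown use cases default to minimal,
    which a real compliance assessment must never do."""
    return USE_CASE_RISK.get(use_case, RiskLevel.MINIMAL)
```

The higher the tier, the heavier the obligations: prohibited practices may not be placed on the market at all, while high-risk systems carry conformity assessment, documentation, and monitoring duties.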

6. AI Privacy Policy

Due to growing privacy concerns, various regulations have been created to ensure the safe and lawful use of AI:

  • GDPR in the European Union – regulates the processing of personal data and imposes obligations on algorithm creators, such as ensuring transparency, limiting the purposes of processing, the right to be forgotten and ensuring an adequate level of data security.
  • Data minimization principle – AI should only process data that is absolutely necessary to achieve its purpose, which reduces the risk of privacy breaches.

Providers and users of AI systems are also subject to specific requirements, such as audits, transparency, and accountability.

7. Examples of AI Applications in the Context of Privacy

  • Personal data breach alerts – the controller must notify the data subject of a breach without undue delay, and an AI system can do this even faster, or report an attempted theft by logging the attacker’s IP address, the time of the cyberattack, and the actions the unauthorized person took to obtain the data.
  • AI systems can also encrypt data in real time, making it harder to steal.
  • Access Control – artificial intelligence can examine user movement and behavior to decide whether to allow or limit access to personal data.
  • AI can assist in the automatic deletion of personal data in line with the principle of data minimization and the obligation to delete data when it is no longer necessary for the purpose for which it was collected. This may include automatically detecting which data should be deleted in accordance with data retention policies and regulations (e.g. GDPR).

8. The Future of AI Regulation and Privacy Protection

One of the key issues is that artificial intelligence is constantly being improved by programmers. The very definition in Article 3 was drafted with adaptation in mind, so AI can be seen as a malleable matter that may enter another phase of development in time, at which point the definition will have to be revised again.

The entry into force of the AI Act will harmonize the rules for placing AI systems on the market. It also provides for very high penalties, e.g. up to EUR 35,000,000 (or 7% of total worldwide annual turnover, whichever is higher) for violating any of the prohibitions in Article 5. Such penalties are intended to effectively deter violations of the Regulation.

Another obligation will be CE marking (i.e. a declaration of conformity with EU requirements): the system must be classified and, based on the risk it may pose, assessed for safety by the competent authority. The system must also be transparent and comply with the GDPR, and its operation must be monitored by the provider. In addition, the provider is obliged to maintain technical documentation and risk-assessment records.

Sources:

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) (OJ EU L 144, 2024, item 1689).

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ EU L. of 2016, No. 119, p. 1, as amended).
