Publication date: April 07, 2025
1. General introduction to the topic:
We are currently witnessing a genuine technological revolution driven by artificial intelligence.
An AI system generates answers on the basis of the data on which it was “trained”, as well as the prompts that users enter into it; the larger the body of information it was trained on, the more accurate its answers tend to be, because the technology draws conclusions from the data it has.
However, this technology also creates new challenges from which national and EU legislation must protect us, a prominent example being personal data theft carried out with the help of AI.
2. GDPR – Key Aspects of Privacy Protection
What is GDPR?:
The GDPR is a set of regulations governing the use of personal data. It lays down rules and limits that (mainly) entrepreneurs must observe when processing data. Companies therefore cannot freely use other people’s data; they must comply with specific rules. The GDPR is meant to be a general guarantee that our personal data are protected against excessive use or use by unauthorized persons. The Regulation also establishes dedicated roles and bodies, such as data protection officers, who are to ensure proper compliance with the rules, and public administration bodies, such as the President of the Personal Data Protection Office.
GDPR Rules:
Every person has rights arising from the Regulation, including the right of access to their data, rectification, erasure (the “right to be forgotten”), restriction of processing, data portability, and objection to processing.
3. AI Act – Artificial Intelligence Regulation
AI Act Goals and Scope: the goals of the AI Act include ensuring the accountability and transparency of AI algorithms, in particular:
Protection of citizens’ fundamental rights and freedoms, such as the right to privacy, non-discrimination and equal treatment.
Increase transparency of AI systems and give users control over how their data is used.
Minimizing risks associated with the use of AI, such as algorithm errors, biases and abuse of power.
Supporting innovation and the development of safe and trustworthy AI.
Determining the liability of entities introducing the AI system to the market.
The Regulation’s scope of application is set out in its Article 2:
1. This Regulation shall apply to:
(a) providers placing AI systems or general purpose AI models on the market in the Union, regardless of whether those providers are established or located in the Union or in a third country;
(b) entities using AI systems that are established or located in the Union;
(c) providers of AI systems and entities using AI systems that are established or located in a third country where the results produced by the AI system are used in the Union;
(d) importers and distributors of AI systems;
(e) product manufacturers who, under their own name or trademark and together with their product, place on the market or put into service an AI system;
(f) authorised representatives of providers which are not established in the Union;
(g) persons affected by AI who are located in the Union.
Breaking down AI risks, especially in the context of privacy, is crucial to understanding the impact of these technologies on our lives and personal data. AI systems, especially those processing personal data, can pose many privacy risks. Below is a breakdown of those risks, followed by a classification of high-risk algorithms.
4. AI and Privacy Risks
a. Collection and processing of personal data
AI algorithms often require large data sets that may include personal data (e.g. health, financial, biometric, location data). Processing this data can lead to privacy breaches, especially if appropriate data protection measures are not applied.
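One technical safeguard the GDPR itself names for such data sets is pseudonymisation (Art. 4(5)): replacing direct identifiers so records cannot be attributed to a person without additional, separately kept information. A minimal sketch in Python; the secret key name and the record fields are illustrative assumptions, not prescribed by the Regulation:

```python
import hashlib
import hmac

# Hypothetical secret held by the data controller (illustrative assumption);
# under Art. 4(5) GDPR it must be kept separately from the pseudonymised data.
SECRET_KEY = b"controller-secret-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping cannot be reversed without the key, but the same
    identifier always maps to the same pseudonym, so records can
    still be linked for analysis.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example record with a direct identifier (made-up data).
record = {"name": "Jan Kowalski", "diagnosis": "J45"}
record["name"] = pseudonymise(record["name"])
```

Keyed hashing is only one option; the choice of technique (tokenisation, encryption, generalisation) depends on the risk assessment for the processing in question.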
b. Profiling and monitoring
AI can be used to profile users, collect information about them, and predict their behavior. This can violate privacy, because individuals may be tracked without their consent or knowledge, and their data may be used in non-transparent and unethical ways.
c. Lack of transparency of algorithms
Many AI systems operate in ways that are difficult to understand and verify, which can lead to problems controlling what data is collected, how it is used, and who has access to it. This creates risks of data misuse.
d. Data security
Datasets used by AI systems can become targets for cyberattacks. Hackers may attempt to intercept or manipulate data, which can lead to a breach of the privacy of the data subjects.
e. Disinformation and manipulation
AI can be used to create false information (e.g. deepfakes) or manipulate content that can influence political, social, or economic decisions. Such actions can threaten not only privacy but also social stability.
5. Classification of high-risk algorithms
AI algorithms can be classified according to their potential impact on privacy, security, and other social aspects. The European Commission has proposed regulations that help classify such algorithms into risk levels.
a. High Risk Algorithms
AI algorithms that could have a serious impact on people’s privacy and lives are classified as “high risk.” Under Annex III of the AI Act, examples include systems used for biometric identification, management of critical infrastructure, education and vocational training, employment and worker management, access to essential services, law enforcement, and the administration of justice.
b. Low risk algorithms
These are AI systems that do not have such a strong impact on people’s lives and privacy. They are usually used in less sensitive areas, such as spam filters, product recommendation systems, or AI in video games.
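The tiering above can be sketched as a simple lookup. The use cases and tier labels below are a simplified illustration of the AI Act’s risk pyramid, not the Regulation’s legal test, which follows Articles 5–6 and Annex III:

```python
# Illustrative mapping of use cases to AI Act risk tiers (simplified sketch;
# the real classification requires a legal assessment, not a lookup table).
RISK_TIERS = {
    "social_scoring": "prohibited",          # Article 5 prohibition
    "biometric_identification": "high",      # Annex III
    "credit_scoring": "high",                # Annex III
    "chatbot": "limited",                    # transparency obligations apply
    "spam_filter": "minimal",
    "video_game_ai": "minimal",
}

def classify(use_case: str) -> str:
    """Return the illustrative risk tier for a use case, defaulting to
    'unclassified' when the use case is not in the table."""
    return RISK_TIERS.get(use_case, "unclassified")

print(classify("credit_scoring"))  # high
print(classify("spam_filter"))     # minimal
```

In practice a single system can fall under several obligations at once (e.g. a high-risk system that also interacts with users and so carries transparency duties), which is why the Act attaches obligations to roles and uses rather than to a single label.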
6. AI Privacy Policy
Due to growing privacy concerns, various regulations have been created to ensure the safe and lawful use of AI:
Responsibilities of AI providers and users: the requirements imposed on AI creators and deployers, such as audits, transparency, and accountability.
7. Examples of AI Applications in the Context of Privacy
8. The Future of AI Regulation and Privacy Protection
One of the key issues is that artificial intelligence is constantly improving and being improved by programmers. The very definition in Article 3 was drafted with adaptability in mind, so AI can be said to be a malleable subject matter: in time it may enter another phase of development, and the definition will have to be revised again.
The entry into force of the AI Act will harmonise trade in AI systems. The Act also provides for very high penalties: violating any of the prohibitions in Article 5 carries a fine of up to EUR 35,000,000 or, for companies, up to 7% of total worldwide annual turnover, whichever is higher. Such penalties are intended to effectively deter violations of the Regulation.
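Article 99(3) of the AI Act caps fines for Article 5 violations at EUR 35 million or 7% of total worldwide annual turnover, whichever is higher; the "whichever is higher" rule can be illustrated with a short calculation (the turnover figures are made-up examples):

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for violating an Article 5 prohibition:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher (Art. 99(3) AI Act)."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# EUR 1 billion turnover: 7% (EUR 70 million) exceeds the flat cap.
print(max_article5_fine(1_000_000_000))  # 70000000.0
# EUR 100 million turnover: the flat EUR 35 million cap applies.
print(max_article5_fine(100_000_000))    # 35000000.0
```

These are statutory maximums; the actual fine in a given case is set by the supervisory authority taking the circumstances of the infringement into account.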
Another obligation will be CE marking (a declaration of conformity with EU standards): the system must be classified and, depending on the risk it may pose, assessed for safety by the competent authority. The system must also be transparent, comply with the GDPR, and have its operation monitored by the provider. In addition, the provider is obliged to maintain technical documentation and risk-assessment records.
Sources:
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) (OJ EU L 144, 2024, item 1689).
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ EU L. of 2016, No. 119, p. 1, as amended).