
Human-centric and trustworthy artificial intelligence (AI)

Publication date: May 23, 2025

Lead: The AI Act is the core legal instrument regulating the trade in and use of AI technologies. This key document, which belongs to the domain of substantive law, should be read primarily as a list of “prohibitions” limiting full economic freedom for a noble purpose: to maintain the right balance between market protection and Europe’s competitiveness vis-à-vis other global markets.

The legal concept of “human-centric and trustworthy artificial intelligence” is not given a legal definition in Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the AI Act). However, the phrase itself appears multiple times in the preamble to the Regulation and in Article 1, which makes it possible to analyse the meaning of this legal concept.

The phrase “human-centric” signals that AI should be centred on humans. AI affects human life in many sectors, such as healthcare, education, transportation, finance, industry, agriculture and entertainment, transforming the way we work, learn and communicate. In many of these areas, people are exposed to threats to privacy, disinformation, algorithmic bias, technology addiction and the risk of losing jobs to automation. As a result, EU lawmakers decided that AI should be human-centric, in order to address the concerns that the technology raises.

As a result of very rapid technological development, many people remain distrustful of AI. Among other things, a significant number of people do not understand how the technology works: it is complex and opaque to them. People also fear losing control over the operation of AI, as well as surveillance or misuse connected with its reliance on huge amounts of data (including personal data). For this reason, one of the main goals adopted by the European Parliament is to build “trustworthy AI”. The European AI Strategy states clearly that trust is a prerequisite for a human-centric approach to AI. The EU approach builds on Europe’s reputation for safe, high-quality products, in order to strengthen citizens’ trust in digital progress.

The legal concept of “human-centric and trustworthy AI” is therefore an objective that the European Union intends to achieve in order to ensure a high level of protection of health, safety and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union. In line with this objective, the AI Act precisely regulates the matters in which artificial intelligence affects humans. The Regulation establishes rules on the placing on the market, the putting into service and the use of AI systems in the Union, and also sets out prohibitions on certain practices. In order to ensure the highest possible protection for categories of particularly vulnerable persons, it introduces, among other things, requirements for high-risk AI systems and obligations incumbent on the operators of such systems.

The human-centric character of AI technology development means that AI should serve as a tool for people, with the aim of increasing human well-being. In this respect, the AI Act adopts a clearly defined approach based on an analysis of the risks that AI systems may cause. It introduces prohibitions on certain unacceptable practices in the field of AI, establishes requirements for high-risk AI systems and obligations for the relevant operators, and lays down transparency obligations for certain AI systems. The Regulation distinguishes four categories of AI systems based on the level of risk their operation poses. The first category consists of AI systems whose use is prohibited due to an unacceptable level of interference with protected values (Article 5). The next category is high-risk AI systems, which are permitted subject to compliance with the numerous conditions specified in Article 8 et seq. of the Regulation. The third category is AI systems that generate limited risk, and the last one covers systems that either generate no risk at all or keep it at a minimal level. This is not, however, a complete classification, as the Act also introduces “general-purpose AI models”, i.e. models capable of performing various tasks, either as stand-alone general-purpose systems or through their integration with other systems or applications. This distinction between systems was introduced in order to ensure the safety of users and respect for their rights.
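To make this tiering more tangible, the short Python sketch below models the four risk tiers and the consequences attached to each. It is purely illustrative: the RiskTier enum, the example systems and the one-line summaries of obligations are our own simplifications for this article, not terminology or classifications taken from the Regulation.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Illustrative model of the AI Act's risk tiers (unofficial labels)."""
    UNACCEPTABLE = auto()  # Article 5 - prohibited practices
    HIGH = auto()          # permitted subject to strict requirements
    LIMITED = auto()       # transparency obligations only
    MINIMAL = auto()       # no specific AI Act obligations

# Hypothetical examples of how concrete systems might be classified.
EXAMPLES = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of the legal consequence of each tier (simplified)."""
    return {
        RiskTier.UNACCEPTABLE: "placing on the market and use prohibited",
        RiskTier.HIGH: "risk management, conformity assessment, human oversight",
        RiskTier.LIMITED: "users must be informed they are interacting with AI",
        RiskTier.MINIMAL: "voluntary codes of conduct only",
    }[tier]

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {obligations(tier)}")
```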

Article 5 of the AI Act is important for explaining the concept of “human-centric AI.” It prohibits the use of certain AI practices that may threaten the rights and freedoms of natural persons. Among other things, it prohibits:

  • the use of subliminal techniques outside the individual’s awareness or deliberate manipulative or misleading techniques;
  • exploiting the weakness of a natural person or a specific group of people due to their age, disability or particular social or economic situation;
  • social scoring that leads to detrimental or unfavourable treatment which is unjustified or disproportionate;
  • carrying out risk assessments in relation to individuals in order to assess or predict the risk of an individual committing a criminal offence, solely based on profiling the individual or assessing their personality traits and characteristics;
  • creating or expanding a database for facial recognition through untargeted scraping of facial images from the Internet or CCTV recordings;
  • drawing conclusions about an individual’s emotions in the workplace or educational institutions;
  • biometric categorization systems that individually categorize natural persons based on their biometric data in order to deduce or infer sensitive attributes such as race, political opinions, religious beliefs or sexual orientation;
  • use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, subject to narrowly defined exceptions.

These prohibitions concretize the idea of “human-centric AI” by shielding fundamental rights from the most intrusive uses of the technology. In this way, Article 5 of the AI Act serves as the ethical pillar of the European approach to AI.
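For illustration only, the following sketch shows how a provider might run a rough internal pre-screening of intended functionalities against the Article 5 categories summarised above. The category labels and the screen function are our own hypothetical simplification; the actual legal assessment is far more nuanced and context-dependent.

```python
# Simplified labels for the Article 5 prohibition categories listed above.
PROHIBITED_PRACTICES = {
    "subliminal_or_manipulative_techniques",
    "exploitation_of_vulnerabilities",
    "unfair_social_scoring",
    "crime_prediction_based_solely_on_profiling",
    "untargeted_facial_image_scraping",
    "emotion_inference_at_work_or_school",
    "biometric_categorisation_of_sensitive_traits",
    "realtime_remote_biometric_id_for_law_enforcement",
}

def screen(intended_practices: set[str]) -> list[str]:
    """Return the intended practices that fall under an Article 5 category."""
    return sorted(intended_practices & PROHIBITED_PRACTICES)

flagged = screen({"customer_support_chatbot",
                  "emotion_inference_at_work_or_school"})
print(flagged)  # ['emotion_inference_at_work_or_school'] -> use prohibited
```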

Notably, Article 9 of the Act imposes the obligation to implement systematic risk management for AI systems classified as high-risk. It also states that the risk management system is to be understood as a continuous, iterative process, planned and run throughout the entire life cycle of a high-risk AI system and requiring regular systematic review and updating. Considering the full life cycle, including the period after deployment, is consistent with the principle of continuous concern for human well-being rather than one-off regulatory compliance. The Article establishes specific requirements intended to protect the individual, promote the ethical design of AI and ensure its responsible use. Thanks to this, the law does not merely declare the protection of humans, but implements it through technical and organizational obligations.
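The iterative character of that process can be pictured as a loop that is re-run whenever new information, including post-market monitoring data, arrives. The sketch below is a minimal illustration under our own assumptions: the RiskRegister structure and the cycle function are hypothetical placeholders for a provider’s internal procedures, not anything defined in Article 9 itself.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegister:
    """Hypothetical record of identified risks and the controls adopted."""
    open_risks: list[str] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)

def risk_management_cycle(register: RiskRegister,
                          observations: list[str]) -> RiskRegister:
    """One iteration: identify new risks, then evaluate and mitigate them."""
    for risk in observations:  # identification, incl. post-market data
        if risk not in register.open_risks and risk not in register.mitigations:
            register.open_risks.append(risk)
    for risk in list(register.open_risks):  # evaluation and mitigation
        register.mitigations[risk] = f"control adopted for: {risk}"
        register.open_risks.remove(risk)
    return register  # to be reviewed and updated regularly

register = RiskRegister()
register = risk_management_cycle(register, ["bias in training data"])
register = risk_management_cycle(register, ["model drift after deployment"])
print(register.mitigations)
```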

It is also worth paying attention to Article 22 of the General Data Protection Regulation (GDPR), which deals with automated decision-making in individual cases. The data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning that person or similarly significantly affects them. “Profiling” means any form of automated processing of personal data that consists in using personal data to evaluate certain personal aspects of a natural person, in particular to analyse or predict aspects concerning that person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements. This provision protects individuals against fully automated decisions and thus reinforces the human-centric orientation of EU law.
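As a rough illustration, the sketch below encodes the two cumulative triggers of Article 22: the decision must be based solely on automated processing, and it must produce legal or similarly significant effects. The Decision fields are our own modelling assumptions, and the check simplifies the actual legal test, which also provides for exceptions and safeguards.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical model of an automated decision about a person."""
    solely_automated: bool         # no meaningful human involvement
    legal_or_similar_effect: bool  # e.g. automatic refusal of an online credit application

def article_22_engaged(d: Decision) -> bool:
    """True when both Article 22(1) triggers are met (simplified)."""
    return d.solely_automated and d.legal_or_similar_effect

print(article_22_engaged(Decision(True, True)))   # True: safeguards required
print(article_22_engaged(Decision(False, True)))  # False: a human took the decision
```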

Recital 27 of the AI Act refers to the 2019 Ethics Guidelines for Trustworthy AI, developed by the independent High-Level Expert Group on AI established by the Commission in order to maximise the benefits of AI systems while preventing and minimising the risks associated with them. In these guidelines, the experts draw attention to the fact that trust concerns not only the technology itself, but also the properties of the socio-technical systems around it. According to the experts, creating trustworthy AI requires not only reliable systems, but also a holistic approach that takes into account the trustworthiness of all the people and processes involved in its operation at every stage. Trustworthy AI should therefore have three characteristics: it should be lawful, ethical and robust. Each of these three characteristics is necessary, but none is sufficient on its own to achieve trustworthy AI.

It is worth remembering, however, that different contexts create different challenges: the application of AI does not pose the same challenges in every sector. AI systems are highly context-dependent, which is why the implementation of the guidelines must be tailored to the needs of the specific AI application.

What are the characteristics of trustworthy AI? First, lawfulness: AI does not operate in a lawless world. There are many legally binding rules governing the development, deployment and use of AI systems. The main sources of law include:

  1. primary EU law (treaties and the EU Charter of Fundamental Rights);
  2. EU secondary law (e.g. General Data Protection Regulation, Anti-Discrimination Directives, Regulation on the Free Flow of Non-Personal Data);
  3. UN human rights conventions;
  4. Council of Europe conventions;
  5. statutory provisions of the EU Member States.

These ethical guidelines should not, however, be read or interpreted as legal advice or as a route to legal compliance; it remains the responsibility of operators providing AI systems to ensure that the technology is lawful in order to be trustworthy. As for ethics, the law does not always keep up with technological progress, which means that it may sometimes lag behind ethical standards or simply not be the right instrument to solve certain problems. Nevertheless, for AI systems to be trustworthy, they should also be ethical, that is, compliant with ethical standards. The third essential feature of trustworthy AI is robustness: individuals and society must be able to trust that AI systems will not cause any unintended harm. Ensuring the robustness of AI systems is necessary for both technological and societal reasons. The issues of ethical and robust AI are closely interconnected and complement each other.

The Trustworthy AI framework set out in these guidelines is divided into three levels of abstraction. These are:

  1. Fundamentals of Trustworthy AI;
  2. Creating Trustworthy AI;
  3. Trustworthy AI Assessment.

Fundamental rights, which constitute moral and legal entitlements, must be at the heart of Trustworthy AI. Respect for these rights, within the framework of democracy and the rule of law, provides the most promising foundation for defining the abstract ethical principles and values that AI systems should adhere to. The EU Charter lists dignity, freedom, equality and solidarity among the fundamental rights. The central basis uniting these rights is respect for human dignity, reflecting a human-centric approach in which humans enjoy a unique and inalienable moral status of primacy in the civil, political, economic and social dimensions.

Building trustworthy AI relies on specific requirements that apply to the various stakeholders involved in the life cycle of AI systems: developers, deployers and end users, as well as society at large. Furthermore, in these guidelines the High-Level Expert Group on AI developed seven non-binding ethical principles for AI that are intended to help ensure that AI is trustworthy and adheres to ethical standards. These seven requirements, which the guidelines operationalise in a practical assessment list, are:

  1. Human agency and oversight – helps ensure that an AI system does not undermine human autonomy or cause other negative effects;
  2. Technical robustness and safety – AI systems must be resistant both to overt attacks and to more subtle attempts to manipulate data or algorithms;
  3. Privacy and data governance – individuals should retain full control over their own data;
  4. Transparency – AI systems must be traceable;
  5. Diversity, non-discrimination and fairness – AI systems should take into account the full range of human abilities, skills and requirements and ensure accessibility through a universal design approach, striving for equal access for people with disabilities;
  6. Societal and environmental well-being – the sustainability and ecological responsibility of AI systems should be supported;
  7. Accountability – external control must be ensured for uses that affect fundamental rights, including safety-critical uses.

The list of these requirements is, however, not exhaustive.

Finally, the Trustworthy AI assessment rests on an assessment checklist for putting Trustworthy AI into practice. This checklist applies in particular to AI systems that interact directly with users and is addressed primarily to developers and deployers of AI systems (whether those systems are developed in-house or acquired from third parties).
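Purely as an illustration of the checklist idea, the sketch below treats each of the seven requirements as a single pass/fail flag and reports the remaining gaps. This is a deliberate oversimplification under our own assumptions; the expert group’s actual assessment list is far more granular and question-based.

```python
SEVEN_REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def assess(satisfied: set[str]) -> list[str]:
    """Return the requirements still unmet; an empty list means all pass."""
    return [r for r in SEVEN_REQUIREMENTS if r not in satisfied]

print(assess({"transparency", "accountability"}))  # five requirements still open
```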

The concept of “human-centric and trustworthy AI” is derived from the provisions of the AI Act and from related legal acts and ethical documents. The focus of this approach is on humans – their rights, dignity and safety – which translates into the need to design and use AI systems in a way that enhances the well-being of individuals and society.

Trust in AI, as a fundamental pillar of its social acceptance, is built by ensuring compliance with the law, ethical principles and technological soundness. Provisions such as Article 5 of the AI Act and Article 22 of the GDPR, together with the 2019 Ethics Guidelines for Trustworthy AI, provide concrete safeguards and frameworks aimed at protecting users and minimising risks. The risk-based approach and the categorisation of AI systems by threat level also play a key role, allowing legal requirements to be adapted to the specifics of a given solution.

In summary, human-centric and trustworthy AI is not just an ideological postulate, but a specific regulatory and ethical goal, the implementation of which requires close cooperation between legislators, technology creators, users and society as a whole. Only in this way is it possible to develop AI that not only increases efficiency and innovation, but above all serves people, their rights and values.
