Consequences of AI Washing in the light of consumer law and the provisions of the Artificial Intelligence Act (AI Act) – Polish and EU perspective

Publication date: January 06, 2026

AI washing as a market phenomenon

AI washing is a phenomenon of growing scale and market significance, directly analogous in name and general scope to the well-recognized greenwashing and the less-developed ethics washing. Although no single, universally accepted definition exists, the various attempts at defining it converge on describing the practice as a marketing tactic: companies attribute advanced capabilities stemming from the implementation of artificial intelligence (AI) to their products, services, or internal processes, even though the actual level of its use is marginal or disproportionate to the claims made.

These behaviors are often the result of intense competitive pressure from other market players, consumer expectations driven by technological trends, and internal corporate pressure to generate rapid profits and project the image of an innovation leader. AI has become synonymous with progress and, often, with profit. The phenomenon is also fueled by the long-standing lack of a generally accepted definition of AI, although one has now been introduced by Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (the Artificial Intelligence Act, hereinafter the AI Act), which will be discussed later in this article. It is nevertheless worth establishing at this stage how the concept of AI will be understood for the purposes of the discussion that follows.

An AI system is defined in Article 3(1) of the AI Act as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments. Seven elements of this definition are essential; they are described in more detail in the guidelines issued by the European Commission (Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (Artificial Intelligence Act)). The definition is deliberately broad and inclusive, encompassing the various variants of machine learning.

Given this breadth, the definition must be read together with Recital 12 of the AI Act, which specifies the conditions under which certain systems fall outside it. Systems based solely on rules defined by natural persons will not be considered AI, nor will systems incapable of functioning with any degree of independence from human involvement. An important element of the definition of an AI system is its capacity for self-learning, which allows its behavior to change during use. The Commission has identified further exceptions in its guidelines. The definition nevertheless remains very broad, and it is important to note that many systems will still not qualify as AI. According to the Commission, it is impossible to draw up an exhaustive list of AI systems, so whether a given system constitutes an AI system must be assessed on a case-by-case basis.

The introduction of a statutory definition of AI makes AI washing sanctionable, as there is finally a single benchmark for what may be called an AI system. The effects of AI washing are multifaceted, ranging from eroding consumer trust in a given company and in the technology sector as a whole, to distorting fair competition and discouraging investors. Some authors use the new term “AI Booing” to describe public outrage over perceived AI failures, which arises when the hype surrounding AI fails to match the actual user experience. AI washing can also have legal consequences for companies, consumers, and regulators. This analysis aims to examine these aspects thoroughly. First, it considers the liability a manufacturer or distributor may incur towards consumers for AI washing practices, by analogy with liability for greenwashing.

Next, it analyzes the operator’s liability arising directly from the provisions of the AI Act, which constitutes a new regulatory framework for artificial intelligence in the European Union.

Liability towards the consumer for AI washing

Under EU regulations, and in particular under the AI Act, several categories of entities are potentially exposed to liability for damage or omissions resulting from the operation of AI systems, notably providers, importers, and distributors, all of which fall within the collective term “operator” within the meaning of Article 3(8) of the AI Act. It should be emphasized, however, that the AI Act itself does not establish detailed rules on liability for damages; rather, it provides a framework for navigating the issues surrounding AI and all parties involved with these systems. The absence of liability rules for AI errors stems from the fact that the AI legislative package originally envisaged the adoption of the Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (Artificial Intelligence Liability Directive), COM(2022) 496 final, 2022/0303 (referred to in this article as the AILD). Recent decisions indicate, however, that due to the lack of consensus on its content, work on the directive will not be continued for the time being.

Despite its failure to pass, the AILD proposal must be mentioned in any discussion of liability for AI, as its assumptions will influence future legislative attempts and the interpretation of existing national laws. The proposed directive governed tortious (non-contractual) liability for damage caused by an AI system as defined in the AI Act. It did not independently define key concepts such as “damage” or “fault”, seeking instead to create a structure complementary to national fault-based liability systems. This was the main difference between the AILD and Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (OJ L 210, 7.8.1985, p. 29, as amended), which established a strict liability regime. The AILD’s main proposal was to make it easier for claimants to prove causation: a causal link would be presumed once certain conditions set out in the directive were met, though the burden of proof itself was not to be reversed. The directive was also meant to make it easier for victims to obtain information about the system and about what caused the harm. It was intended to ensure that those harmed by AI would be protected in the same way as those harmed by other technologies, and to prevent fragmentation of the AI liability regime within the European Union.

Since the directive will not enter into force, damage caused by AI will for the time being have to be dealt with under existing law. Polish law provides for contractual and tort liability. The former is better prepared for the growing presence of AI, as its scope is shaped by the parties’ contract, and it is the regime under which almost all AI-related disputes are currently resolved. Tort liability is governed by the Act of 23 April 1964 – Civil Code (Journal of Laws of 2023, item 1610, as amended), which, unsurprisingly, mentions neither AI nor even computer programs; the latter are typically regulated through licensing agreements or EU consumer protection directives. The problematic nature of AI, as a system exhibiting a degree of decision-making autonomy and a capacity for self-learning, means that tort law cannot rest on the simple assumption that a human factor always controls its functioning to a degree analogous to traditional computer programs. For this reason, academics have suggested applying the Civil Code provisions on liability for animals (Article 431 of the Civil Code), for enterprises set in motion by natural forces (Article 435 of the Civil Code), or for mechanical means of transport propelled by natural forces (Article 436 of the Civil Code). At this stage, however, these remain primarily theoretical considerations: in practice, AI will often be treated simply as a defective product, with liability established under warranty rules or under the Civil Code provisions on liability for damage caused by a dangerous product. This approach, while adequate for simple programs, may prove insufficient for highly autonomous systems, where demonstrating a manufacturing defect will be a technical and evidentiary challenge.

Despite the complexity of tort liability, the regime protecting consumers against unfair market practices is the one most relevant to AI washing. Here the most helpful analogy is greenwashing, which is typically sanctioned as an unfair market practice (Article 3 of the Act of 23 August 2007 on Counteracting Unfair Market Practices (Journal of Laws of 2023, item 844, as amended)). Both greenwashing and AI washing constitute misleading actions within the meaning of Article 5 of that Act; Article 7(12) of the same Act may also apply, as may, more generally, the provisions on presenting information that is misleading or otherwise inadequate to the nature of the product. In the context of AI washing, if the information provided about the degree, scope, and quality of AI implementation in a product does not reflect reality, it creates a false or exaggerated perception of the product in the minds of consumers. Information about the level of AI use is now a key factor influencing the buyer’s decision to purchase, and if marketing claims later turn out not to reflect the actual state of affairs, the advertiser may face severe legal consequences. Similar protective rules on misleading consumers are contained in the Act of 16 April 1993 on Combating Unfair Competition (Journal of Laws of 2022, item 1233, as amended). It follows that AI washing can easily fall within the category of prohibited practices and, in line with current interpretative trends, will be treated analogously to the already sanctioned greenwashing.

Consequences of AI washing under the AI Act

The AI Act was adopted over a year ago, but its entry into application is staggered: many provisions do not yet apply, and until the Court of Justice of the European Union (CJEU) begins ruling on them, some will remain unclear for several years. Nevertheless, as the world’s first comprehensive attempt at regulating AI, it will undoubtedly inspire other jurisdictions, which makes it worth examining.

The central feature introduced by the AI Act is the division of AI systems into categories based on risk level: prohibited systems (unacceptable risk), and systems posing high, medium (limited), and low (minimal) risk. The regulation was drafted at a time when large language models (LLMs) were not yet widespread, so it was not originally designed for systems capable of performing multiple functions. Provisions on general-purpose AI models (GPAI) were therefore added only later, creating a certain inconsistency in how systems are classified.

Article 5 of the AI Act regulates prohibited AI practices. While important, it is quite difficult to apply given the regulation’s original approach of classifying systems by their purpose, and it is not relevant to the present discussion. Chapter III of the AI Act is dedicated to high-risk systems, defined in Article 6 on the basis of a system’s function; Article 6 is supplemented by Annex III, producing a broad category that many systems can fall into. Crucial to this analysis is Article 3(63) of the AI Act, which defines a general-purpose AI model; such models fall not under the regime of Chapter III but under the provisions of Chapter V. It might therefore seem that by creating or using a general-purpose model, operators can avoid the more demanding obligations imposed by the chapter on high-risk systems. It is not yet clear whether these classification methods overlap or constitute two separate ways of categorizing AI, and this will likely be settled only by the CJEU.

The obligations the AI Act imposes on operators of high-risk systems are diverse, yet very general. They largely come down to maintaining appropriate documentation, properly managing risk, ensuring the security of the system itself and of the information it stores, and ensuring “human oversight” of the system. Although the list is long, it currently leaves considerable room for maneuver in how these requirements are understood. Low-risk systems carry no such obligations, while medium-risk systems must primarily inform users that they are dealing with an AI system. General-purpose models must have appropriate documentation and a list of the data used to train them. Furthermore, the treatment of general-purpose models depends on whether the model is made available for a fee: for fee-based models, the regime is supplemented with additional obligations, such as maintaining up-to-date complaint documentation, system testing, and risk management. Classifying a system into the appropriate category is therefore crucial to the proper functioning of the regulation.

AI washing distorts this classification, blurring the line between what a system actually does and how it is described. Classification is in practice performed by the operator itself, and situations are conceivable in which a manufacturer correctly classifies a system for the purposes of the regulation yet advertises the product as one in which the AI system plays a more important role. The AI Act does not specify who performs the classification, apparently assuming the task would be unproblematic. As AI washing becomes increasingly common, however, the perspective from which a system’s risk is assessed becomes crucial. This seems particularly important when read alongside the AILD, which ultimately will not enter into force; since both instruments were drafted in parallel, reading them together gives a true picture of how the EU hoped to regulate AI comprehensively. The AILD was intended to create a mechanism allowing users and courts to access the technical documentation of high-risk systems. If the classification is incorrect, or simply inconsistent with what was presented to consumers, such a procedure would not be possible.

AI washing will therefore cause even greater confusion in the already complex system created by the AI Act. Further ambiguity arises from the lack of any indication of whose perspective determines a system’s classification. An operator might classify a system as lower risk and then advertise it as one falling under the Chapter III regime (a high-risk system). This discrepancy is not regulated, nor is there any clearly described procedure for the authority responsible for enforcing the regulation in such situations. Especially in the first years of the regulation’s operation, additional clarifications can therefore be expected to help navigate such cases.

High-risk systems are additionally subject to CE marking (Article 48 of the AI Act) and must be registered in the EU database before being placed on the market or put into service (Article 49 of the AI Act). This imposes an additional obligation on operators of high-risk systems that does not apply to systems with lower classifications, and it remains unclear whether general-purpose models are subject to these rules. The consequences of AI washing outlined above are therefore equally relevant to the decision on whether a system must be registered.

Article 50 of the AI Act also introduces a transparency obligation for certain systems. The systems subject to this obligation do not correspond to the classification presented earlier; instead, the obligation is based on a system’s intended purpose. It applies chiefly in two situations: when the system is intended to interact directly with individuals, and when it is intended to generate images, sound, or video. From the perspective of AI washing, the former is the more important. When a system is intended to interact with a human, it must be clearly communicated that the individual is dealing with such a system. If an operator exaggerates the role of AI in its product, it will therefore not be transparent and clear to the individual whether they are interacting with such a system or not. This provision might seem primarily designed to ensure that people know when they are interacting with an AI system, but such awareness is equally important in the other direction: knowing that the interaction is not with an AI system but with another type of algorithm or with a human. AI washing will thus lead not only to distortions in a system’s correct classification but also to non-compliance with Article 50 of the AI Act.

Summary

AI washing will only become more widespread in the coming years, which means more and more lawsuits challenging it can be expected. The above analysis suggests that, from a consumer perspective, such behavior will typically be subject to the same regulations as greenwashing. It will be interesting to see how situations in which AI predictions themselves lead to greenwashing (so-called algorithmic greenwashing) are resolved, although such behavior will likely fall more squarely under the ESG reporting regime.

For AI itself, the EU AI Act is the key point of reference. It will play a decisive role in the coming years within and beyond the European Union, providing inspiration for how to regulate AI systems, or precisely what to avoid when doing so. AI washing disrupts the regulation’s central element, the risk-based classification of systems, and makes it essential to determine from whose perspective that classification should be made. Where a system is to interact directly with humans, it also creates a problem with the transparency obligation. Finally, it should not be forgotten that AI systems must also comply with other EU legislation protecting fundamental rights, such as the GDPR.

Sources

Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

Commission Guidelines on the definition of an Artificial Intelligence System established by Regulation (EU) 2024/1689 (Artificial Intelligence Act)

Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM(2022) 496 final, 2022/0303 (COD)

Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29, as amended.

Act of 23 April 1964 – Civil Code (Journal of Laws of 2023, item 1610, as amended)

Act of 23 August 2007 on Counteracting Unfair Market Practices (Journal of Laws of 2023, item 844, as amended)

Act of 16 April 1993 on Combating Unfair Competition (Journal of Laws of 2022, item 1233, as amended)

European Commission, “White Paper on Artificial Intelligence – A European approach to excellence and trust” (19 February 2020) COM(2020) 65 final

Tambiama Madiega, “Artificial intelligence liability directive” (EU Legislation in Progress briefing, European Parliament, February 2023)

Selcen Ozturkcan and Ayse Aslı Bozdag, “Responsible AI in Marketing: AI Booing and AI Washing Cycle of AI Mistrust” (2025) 67(6) International Journal of Market Research 696–722

European Commission, “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics” (19 February 2020) COM(2020) 64 final

European Commission, “Liability Rules for Artificial Intelligence” <https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en> accessed 1 December 2025

Memoona Tawfiq, “Algorithmic Greenwashing? Why AI-Backed Sustainability Claims Need Regulation” (CLG, 23 April 2025) <http://clgglobal.com/algorithmic-greenwashing-why-ai-backed-sustainability-claims-need-regulation/> accessed 3 December 2025

EU Artificial Intelligence Act, “High-level summary of the AI Act” (27 February 2024) <https://artificialintelligenceact.eu/high-level-summary/> accessed 8 December 2025
