Publication date: May 13, 2026
Can an AI influencer be held legally accountable? Not directly — but the businesses and creators behind them certainly can.
As AI-generated personas become a powerful tool in marketing, they also raise important legal questions around transparency, advertising disclosures, GDPR compliance, copyright, and the new obligations introduced by the EU AI Act. In my latest article, I explore who bears responsibility for AI influencers, what regulatory requirements apply, and which legal risks companies should address before launching virtual brand ambassadors.
If your business is using AI to engage consumers, this is a topic you cannot afford to ignore.
The topic of AI in the legal community remains fraught with controversy, loopholes, and regulatory gaps. The dynamic progress in this area is linked to the emergence of ever-new phenomena with which the law cannot always keep pace, one of which is the so-called AI influencer.
AI influencers are, in other words, virtual personas created by algorithms using artificial intelligence. Their growing popularity appears closely linked to economic and social factors reflecting a shift away from traditional forms of marketing toward content generated by artificial intelligence.
This type of solution, however, does not fall outside the scope of applicable law. AI influencers, like other creators, are subject to requirements regarding liability, transparency, personal data protection, and compliance with EU and national regulations. To what extent, who is responsible for them, and which regulatory obligations apply are the questions this article examines.
AI Influencer – a natural person, a legal person, or maybe something completely different?
Under current regulations, an AI influencer lacks legal personality or the capacity to bear liability. They should be treated as a tool or technological product used by an entrepreneur. Consequently, all responsibility for content generated using them rests with the entity that decides on its publication, i.e., most often the entrepreneur or, alternatively, the entity implementing the system. The prevailing view is that even in situations of a high degree of AI system autonomy, liability is not transferred to the system, as it is not a legal entity.
Can an entrepreneur avoid liability by relying on the autonomy of an AI system?
In the context of civil law, it should be assumed that general principles of tort liability apply. According to Article 415 of the Civil Code, liability rests with the person who caused the damage through their own fault, while Article 355 § 2 of the Civil Code imposes on the entrepreneur an obligation to exercise due diligence, taking into account the professional nature of the activity. In practice, this means that the entrepreneur cannot rely on the autonomy of the AI system as a circumstance excluding liability if they failed to ensure adequate control over the generated content.
AI influencers and product promotion
The qualification of content published by an AI influencer as commercial information is also crucial. Any message intended to promote a product or service is subject to the rigors of, among others, the Consumer Rights Act, particularly regarding disclosure obligations in distance contracts.
Marking advertising content – not only the responsibility of AI influencers
In light of the regulations on combating unfair commercial practices, it should be assumed that the lack of appropriate labeling of advertising content poses a significant legal risk. The Act implementing Directive 2005/29/EC introduces a ban on misleading practices and a so-called “blacklist” of practices prohibited under all circumstances, including surreptitious advertising. Pursuant to Article 7, Section 11 of this Act, it is prohibited to use editorial content to promote a product without clearly disclosing the commercial nature of the message.
It should be noted that in its decision-making practice, the President of the Office of Competition and Consumer Protection consistently considers the lack of labeling of advertising collaborations to be a violation of the collective interests of consumers. The decisions issued emphasize that labeling must be unambiguous, legible, and understandable to the average consumer, and that unclear labeling or concealment of the commercial nature of content may result in significant financial penalties. In light of these decisions, it should be assumed that similar standards will apply to content generated by AI influencers.
Human, robot, or artificial creation – the risk of misleading consumers
The presentation of an AI influencer as a real person is also particularly significant. It should be assumed that failing to disclose the artificial nature of such a persona may be considered misleading, particularly regarding the authenticity of consumer experiences. Such a practice may be classified as a violation of both unfair market practices regulations and transparency principles under EU law.
In this context, the application of the AI Act is crucial. It should be noted that this act classifies AI systems according to risk level: prohibited systems, high-risk systems, systems subject to transparency obligations, and systems with minimal risk. It should be assumed that AI influencers will, in principle, be classified as systems subject to transparency obligations, unless their functionality includes elements that could significantly influence consumer decisions in a manipulative manner, which could lead to a more restrictive classification.
Provider and deployer – what does this mean for the entrepreneur?
The AI Act clearly distinguishes the roles of the AI system provider and the deployer. The provider is responsible for designing the system and placing it on the market and must meet a number of requirements, including documentation, risk management, and conformity. The deployer, in turn, i.e., the entrepreneur using the AI influencer, is responsible for its use, particularly for compliance with transparency obligations and for the legal consequences of published content.
Article 50 of the AI Act and the information obligation
In light of Article 50 of the AI Act, end users must be informed that they are interacting with content generated by artificial intelligence. Furthermore, manipulative practices, including the use of subliminal techniques or exploitation of particularly vulnerable groups, are prohibited. Violation of the regulation’s provisions may result in the imposition of severe administrative penalties, reaching up to tens of millions of euros or a specified percentage of a company’s annual turnover.
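The disclosure obligation described above can be operationalized at the point of publication. The sketch below is a purely hypothetical illustration, assuming an invented `Post` structure and invented disclosure labels; neither the label wording nor the check is prescribed by the AI Act itself.

```python
# Hypothetical publish-time check: block AI-generated posts that carry no
# recognizable AI-disclosure label. Labels and structure are assumptions.
from dataclasses import dataclass

AI_DISCLOSURE_LABELS = {"#AIgenerated", "[AI-generated content]"}


@dataclass
class Post:
    text: str
    ai_generated: bool


def can_publish(post: Post) -> bool:
    """Human-authored posts pass; AI-generated posts need a disclosure label."""
    if not post.ai_generated:
        return True
    return any(label in post.text for label in AI_DISCLOSURE_LABELS)


# An undisclosed AI-generated post is blocked; a labeled one passes.
assert not can_publish(Post("Great product!", ai_generated=True))
assert can_publish(Post("Great product! #AIgenerated", ai_generated=True))
```

In practice the exact form of the disclosure would have to satisfy the clarity standards discussed above; the point of the sketch is only that the check happens before publication, not after a complaint.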
Personal data protection
In the area of personal data protection, the GDPR applies. In particular, attention should be paid to the obligation to conduct a data protection impact assessment (DPIA) pursuant to Article 35 of the GDPR where processing – particularly involving the use of AI – is likely to result in a high risk to the rights and freedoms of natural persons. This applies in particular to behavioral profiling and automated decision-making.
The principle of privacy by design & privacy by default in practice
The principles of privacy by design and privacy by default require the controller to consider data protection at the system design stage and to use default settings that minimize the scope of data processing. In practice, this means that AI influencer mechanisms must be appropriately designed to limit interference with user privacy.
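Translated into engineering terms, privacy by default means that every optional form of processing starts switched off. The following minimal sketch assumes invented setting names; it only illustrates the default-off principle.

```python
# Hypothetical privacy-by-default configuration: optional processing is
# disabled unless the user actively opts in. Field names are illustrative.
from dataclasses import dataclass


@dataclass
class PrivacySettings:
    behavioural_profiling: bool = False  # off by default
    location_tracking: bool = False      # off by default
    personalised_ads: bool = False       # off by default


settings = PrivacySettings()             # a fresh user gets minimal processing
assert not any(vars(settings).values())
```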
Tailoring content to users and GDPR
Content customization, i.e., profiling within the meaning of Article 4(4) of the GDPR, is a particularly important element of AI influencer marketing activities. It should be assumed that the use of data regarding user preferences, behavior, or location to personalize messages requires meeting certain legal requirements, often including obtaining explicit consent. In the context of Article 22 of the GDPR, consideration should also be given to whether automated decision-making is taking place that produces legal effects or similarly significantly affects a natural person.
CJEU case law and the use of cookies
In this regard, the judgment of the Court of Justice of the European Union in Case C-673/17 (Planet49) is particularly significant. It should be assumed that consent to the use of cookies must be expressed actively and knowingly, and that pre-selected consent checkboxes are inadmissible. This ruling is fundamental to marketing practice, including activities carried out using AI influencers, as it confirms the need to obtain real, prior user consent for tracking and profiling activities.
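The Planet49 standard can be expressed as a simple validity test on a consent record: consent counts only if it was granted by an active user action and the option was not pre-selected. The `ConsentRecord` fields below are illustrative assumptions, not terms from the judgment.

```python
# Hypothetical consent validation reflecting the Planet49 standard:
# pre-ticked boxes and passive "consent" are treated as invalid.
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    granted: bool
    pre_ticked: bool   # was the checkbox pre-selected for the user?
    user_action: bool  # did the user actively tick the box?


def is_valid_consent(c: ConsentRecord) -> bool:
    return c.granted and c.user_action and not c.pre_ticked


# A pre-ticked box never yields valid consent, even if left "granted".
assert not is_valid_consent(ConsentRecord(granted=True, pre_ticked=True, user_action=False))
assert is_valid_consent(ConsentRecord(granted=True, pre_ticked=False, user_action=True))
```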
From an ethical perspective, considered alongside legal norms, it is necessary to point out the growing importance of so-called dark patterns and consumer manipulation. In light of applicable regulations, it should be assumed that designing interfaces or messages in a way that induces users to make decisions they otherwise would not have made may be classified as an unfair market practice. In the case of AI influencers, this risk is particularly significant due to the ability to precisely tailor the message to the recipient’s psychological profile.
The problem of AI and copyright
In the field of intellectual property law, it should be assumed that protection is granted only to manifestations of human creative activity. Content generated solely by AI generally does not constitute a work under copyright law, unless a human exercised sufficiently significant control over the creative process for it to constitute an individual creative contribution. In practice, this leads to a search for alternative forms of protection, such as trade secrets.
At the same time, the risk of violating third-party rights, including copyright and image rights, must be considered. Generating content that resembles existing works or real people may result in liability for damages under the terms of civil law and the Copyright and Related Rights Act.
NIS2 and risk management obligations
Additionally, the obligations arising from the NIS2 Directive should be noted, which focus on risk management, supply chain security, and ensuring system continuity. In the context of AI influencers, this means implementing appropriate security procedures, monitoring vulnerabilities, and developing incident response mechanisms. This directive places the burden of responsibility on the entrepreneur generating content via an AI influencer.
The use of AI influencers in business requires compliance with a range of legal regulations, including consumer law, personal data protection, artificial intelligence regulations, and civil law. Existing mechanisms apply to AI influencer activity, but in an era of constantly evolving technologies and the ever-increasing importance of social media marketing, it can be expected that this activity will also require new, dedicated solutions.
Sources:
Regulation (EU) 2024/1689 of the European Parliament and of the Council (AI Act)
Regulation (EU) 2016/679 of the European Parliament and of the Council (GDPR)
Directive 2005/29/EC on unfair commercial practices
NIS2 Directive (Directive (EU) 2022/2555)
Act of 23 August 2007 on Counteracting Unfair Market Practices
Act of 16 April 1993 on Combating Unfair Competition
Act of 30 May 2014 on Consumer Rights
Act of 4 February 1994 on Copyright and Related Rights
Civil Code
Planet49 judgment (C-673/17) of the Court of Justice of the European Union – regarding consent to cookies and direct marketing
Guidelines for labeling advertising content in social media issued by the Office of Competition and Consumer Protection
Decisions of the President of the Office of Competition and Consumer Protection regarding influencer marketing and surreptitious advertising (the body’s judicial practice, including cases concerning the lack of marking of commercial cooperation)
European Data Protection Board (EDPB) – Guidelines on Profiling and Automated Decision-Making
#AI #ArtificialIntelligence #AIAct #GDPR #InfluencerMarketing #LegalTech #Compliance #ConsumerProtection #Copyright #DataProtection #DigitalMarketing #TechnologyLaw #RiskManagement #NIS2 #EUlaw