Author

Robert-Guillaume Chin A Sen

Who owns your face, your voice, and your words in the age of synthetic personas, aka AI clones?

The rise of AI clones (realistic-looking and sounding AI representations of real people) marks a shift in how identity is legally understood and exploited. While the law has long been structured around physical manifestations of images, such as photos and videos (1), we are now confronted with interactive AI clones that are almost indistinguishable from the real person.

This raises several questions. For example: what rights do we (still) have as human beings if AI clones have been created of us? How do existing legal areas relate to an AI clone? Is an AI clone considered personal data? What is the relationship with portrait rights? Can an AI clone be a copyrighted work?
In this blog post, we'll discuss several key considerations regarding the use of AI clones, from portrait rights to liability, the employer-employee relationship, and ownership and copyright.

Point of attention 1: Portrait rights and personal data

Portrait rights (Articles 19-21 of the Copyright Act) function as an open standard. The legislature deliberately refrained from providing a restrictive definition of what constitutes a portrait; its interpretation is left to case law and societal developments. This makes portrait rights ideally suited to absorb new manifestations such as AI cloning.

In Dutch law, portrait rights are not about technical aspects, but about recognizability. Case law has long since expanded the definition of portrait far beyond the classic photo or video. Lookalikes, imitations, and caricatures fall under portrait rights as soon as it is clear to the relevant audience who is meant. A realistic AI clone that deliberately imitates the face and voice of a real person easily fits within that framework. The fact that the clone is synthetically generated has little legal impact. After all, portrait rights do not protect the digital work, but the person behind the image.

Moreover, from a European perspective, a person's image is also considered part of the right to privacy and data protection (Article 8 ECHR and Articles 7 and 8 of the EU Charter). This gives people a right to control their own image and privacy.

An AI clone almost always constitutes a dual legal construct: both a portrait and personal data. As soon as an AI clone can be traced back to an individually identifiable person, the General Data Protection Regulation (GDPR) may also apply. Article 4(1) of the GDPR defines personal data as:

“any information relating to an identified or identifiable natural person (“data subject”); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

But even here, there are borderline cases, such as "cartoon-like" avatars or simply a mimicked voice without visuals. Regarding the latter, the Midden-Nederland District Court already ruled in 2020 that a voice is not subject to portrait rights, but is considered special personal data (namely, biometric data) within the meaning of Article 9 of the GDPR (2). Since processing such data is in principle prohibited, a legal exception must apply. In the context of AI cloning, this will in practice often be consent, or the processing may fall under the freedom of artistic expression.

Point of attention 2: Ownership and copyright

Who, from a copyright perspective, is the "creator" of your AI clone? Is it the organization that funded or initiated the AI clone? Or is it the person on whom the AI clone is based? Or is it something else entirely?

Copyright requires a human's own intellectual creation. AI systems are not (yet) recognized as authors in that sense (3). The question is: which human actors made the creative choices that shaped the concrete result of the AI clone in question?

In practice, the copyright on the appearance of an AI clone generally belongs (as usual) to the person who made the creative choices that resulted in their own intellectual creation. An AI clone rarely involves a single "work": it involves many creative choices from various perspectives. Think of scripts, voice, appearance, and animations, as well as personality and tone of voice. It's therefore a combination of choices made by the designers, developers, and creatives involved in developing the AI clone. If this work is performed under an employment contract, the Copyright Act generally designates the employer as the copyright holder, unless otherwise agreed.

The person portrayed is therefore not automatically the creator, unless they demonstrably made substantial creative choices in the design. However, that person retains their portrait and personality rights. At the same time, an employer can obtain usage rights, especially if the AI clone was explicitly developed for the job. For example, through the employment contract and additional IP/portrait provisions, it can be agreed that the employer may use the AI clone for certain purposes and periods.

Point of attention 3: The employer-employee relationship

Employment relationships in particular demonstrate how strictly the law limits the tradability of identity. When an AI clone is developed in the context of a role (for example, a CEO, journalist, presenter, or editor-in-chief), the employer typically holds a strong position vis-à-vis the employee.

Copyright on works created during employment generally belongs to the employer, and usage rights can also be broadly contractually stipulated. But this flexibility is not unlimited. Portrait rights, personality rights, and privacy remain fundamental employee rights. Even with prior consent, an employee can object to new or modified use if a legitimate interest is harmed, for example, in a context where the portrayed person's reputation is undermined. Where can this conflict arise? For example, when the employer wants to use an AI clone of an employee to voice statements or views that the employee does not support.

In such cases, explicit agreements are necessary, balancing the interests of the employer and the employee. The parties must also stipulate what will happen to the use of the AI clone after the employment relationship ends. How long may the AI clone continue to be used? Is additional compensation due, and if so, how much? And how will the parties handle the situation if the employee subsequently joins a competitor?

Point of attention 4: Liability

Who is responsible for what an AI clone "says"? In practice, the primary legal responsibility will usually lie with the party deploying the AI clone, not the person on whom the clone is based.

The company that develops, trains, operates, publishes, and subsequently profits from the AI system's reach is, under traditional media and liability law, also the party primarily liable for unlawful statements. This is especially true if the AI clone is (partially) self-learning and you, as the subject, have no direct say over the specific wording: in that case, it's difficult to maintain that the subject personally "vouches" for every sentence.

In employment relationships, employers are additionally liable for damage caused by employees while performing their duties (Article 6:170 of the Dutch Civil Code). This doesn't mean that personal attribution to an employee is never an issue, but it does mean that the bar is high. Legally, the primary consideration is: did you give the impression that this AI clone is speaking on your behalf, and did you play a substantive role in what was said? Consider situations in which you actively help train the system, draft or approve texts, present the AI clone as "my voice," and, despite obvious errors, do not distance yourself from the system.

In such a case, a judge could theoretically conclude that you are partly responsible for certain statements. However, in most normal situations (where the AI clone is a tool owned by the employer or client, with training data, governance, and editing set up by them), personal liability will be limited. It's therefore important to stipulate explicitly, both contractually and organizationally, who is responsible for training, prompts, monitoring, and publication, and to include an indemnity for any damages incurred by the person portrayed; this helps to clearly assign responsibility to the right party.

Point of attention 5: The risk classes under the AI Act

The AI Act doesn't explicitly mention AI clones, but they are indeed materially covered if they generate synthetic audio or video of a real person. Organizations that use AI clones are explicitly expected to arrange governance, oversight, and accountability.

The exact obligations depend on the risk class of the AI clone and the organization's role under the AI Act. The AI Act includes a risk classification for AI systems. AI clones used for customer service in a webshop will fall under the classification of low-risk AI systems, while AI systems supporting medical diagnoses will be classified as high-risk.

Organizations that develop and market the underlying generative model or the complete "clone-as-a-service" product will be considered providers of the AI system or AI model. Conversely, organizations that purchase an existing cloning tool, configure it for a specific person, and then deploy it on their own channels will be considered deployers of the AI system. Often, a combination occurs: the party that developed the AI clone and marketed it under its own name is the provider, the media company is the deployer, and sometimes a large media company also builds models itself, acting as both.

The risk class of the AI ​​system and the role of the organizations therefore determine the specific obligations. Low-risk AI clones are particularly subject to a transparency obligation: the public must be able to know that they are dealing with AI-generated or manipulated content. This obligation rests with both the provider and the deployer. In addition, broader requirements apply regarding documentation and information provision, risk management, logging, human oversight mechanisms, and safeguards against misleading or harmful use. For organizations, this means that an AI clone cannot be a purely technical project but must be implemented in a legally and organizationally regulated manner.

Providers of high-risk AI clones are subject to additional obligations, such as monitoring and risk management, quality management, and the preparation of an EU declaration of conformity. In addition, those responsible for using high-risk AI clones must, among other things, arrange for human oversight, ensure sufficiently representative input data (where possible), and implement appropriate technical and organizational measures to ensure that the AI clone is used in accordance with the accompanying instructions for use.

Point of attention 6: Your own toolboxes against abuse

The rise of AI cloning is also accompanied by abuse. The technology has already led to serious infringements (think, for example, of deepfakes in pornographic material). If someone copies, manipulates, or uses an AI clone without permission, there are essentially three legal tools available today, with a fourth on the horizon:

  1. First of all, people can object to publication on the basis of portrait and privacy rights, and demand removal and compensation, especially if there is damage to reputation or an infringement of their personal privacy.
  2. In addition, people (if copyright or related rights apply to the original AI clone) can also take action against unauthorized copying or reuse under IP law.
  3. The general tort route also remains open for misleading or harmful deepfakes, for example, if the AI clone is used for fraud, hate campaigns, or misleading advertising. In more serious cases (sexual deepfakes, threats, fraud), criminal prosecution is also possible and a report can be filed.
  4. Finally, in the context of deepfakes, it is worth mentioning that there is a bill that aims to create a neighbouring right to protect individuals (the 'Neighbouring Right Act Deepfakes of Persons' (4)). Under the proposal, every artist and every natural person, as well as their relatives, would have the right to prohibit or authorize the production, use, and distribution of deepfakes of their voice or appearance. The consultation on this bill ran until December 31, 2025.

On the other hand, the AI Act also requires organizations using AI to develop policies regarding deepfakes, labeling, and the use of digital personas. Professional organizations can therefore no longer afford to operate without an internal policy and incident protocol for AI clones and deepfakes.

Conclusion

AI clones don't revolve around a single legal area, but rather around a combination of issues such as portrait rights, privacy, copyright, employment law, and liability. This is precisely why things often go wrong in practice: organizations manage the technology, but not all legal aspects. Questions about ownership, usage rights, liability, or discontinuing an AI clone are rarely resolved with a standard clause. They require an understanding of mutual interests, clear communication, and tailored legal and strategic advice, backed by a thorough understanding of all relevant legal areas.

_

(1) For example, Supreme Court 22 April 2022 (Verstappen/Picnic), District Court of Amsterdam 2 February 2005 (Kijkshop/Balkenende) and District Court of Amsterdam 9 August 2017 (Edgar Davids/Riot Games).
(2) District Court of Central Netherlands, 9 January 2020, ECLI:NL:RBMNE:2020:24.
(3) See, for example, the judgment of the Prague Regional Court, Czech Republic of 11 October 2023 (No. 10 C 13/2023-16).
(4) Can be found at: https://www.internetconsultatie.nl/zeggenschapoverdeepfakes/b1.