
Using registered trade marks as a means to combat deepfakes

Gareth Price

Alain Godement

Deepfakes are created by artificial intelligence (“AI”) algorithms that are fed hundreds or thousands of photos and videos of a person. By analysing this data, the AI learns to replicate their exact likeness, expressions, and even their voice with startling accuracy. This digital model is then superimposed onto a source video, effectively creating a "digital puppet" that can be made to say or do anything the creator desires.
The technology is no longer exclusive to visual effects studios; it's increasingly affordable, accessible, and requires less source material to create a convincing fake.
Commercially, these fakes are a powerful tool for scammers and unauthorised advertisers. A deepfake video of a well-known actor, singer, sports star, or politician giving a "personal" endorsement for a fraudulent investment scheme or a low-quality product directly leverages their credibility to steal from their fans.
Beyond outright scams, rogue companies use deepfakes in their advertising to attract customers without paying licensing fees, while digital content creators monetise sensationalist videos featuring celebrity likenesses. In every case, someone other than the person whose image, likeness, or voice has been used is generating the revenue. That person may be left with nothing but reputational damage and a loss of control.
There is no single, specific “image right” or “right to personality” in the United Kingdom. Instead, the protection available consists of a patchwork of different laws that, when used together and strategically, create a framework for controlling how a person’s image is used.
This protects the goodwill associated with a person’s image. To succeed in a passing off action, the person must be able to establish the classic trinity: goodwill in their image, a misrepresentation by the defendant that is likely to deceive the public, and damage (or the likelihood of damage) to that goodwill.
On the face of it, passing off is a powerful tool to protect image rights. In practice, however, passing off is difficult to prove, and the financial loss caused by an infringement can be hard to quantify.
For an emerging talent, it may be very difficult to establish goodwill in the eyes of a court. Also, bringing a passing off action before a court requires a significant investment in legal fees, with no guarantee of success. Generally, a passing off action will not be feasible for small-scale infringements.
An image of an identifiable person is personal data. This means that commercial use of the image is “processing” personal data and must comply with data protection law.
For almost all commercial uses, this will require the individual to give consent. If consent has not been given, the use of the image is likely unlawful. The individual has the right to object to the processing of the personal data by demanding its removal and reporting the unauthorised user to the Information Commissioner’s Office.
The Online Safety Act 2023 is a significant step forward, particularly regarding intimate deepfakes and fraud. However, its primary focus is platform compliance and safety duties rather than providing a private right of action for IP infringement.
Copyright protects the original work (e.g., the photograph or video used to train the AI), not the person’s face or voice itself. If an AI platform is trained on scraped data where copyright ownership is unclear, this route often fails.
The recent High Court judgment in Getty Images (US) Inc v Stability AI has further exposed the fragility and limitations of copyright as a defence against AI use. The Court dismissed Getty Images’ secondary copyright infringement claims, finding that the trained model did not itself store or reproduce the copyright works on which it was trained.
In this uncertain landscape, registered trade marks offer a clearer, more robust enforcement mechanism. By registering a name, signature, or even a faithful representation of a face or motion as a trade mark, celebrities convert their personal features into a registered right which can be enforced against third parties using deepfakes.
Deepfakes inherently attack the "origin" function of a brand: they lie about who is speaking or endorsing the message conveyed to the public. A trade mark registration allows the owner to assert that the unauthorised use of their likeness in the course of trade causes confusion as to the origin of the goods or services.
Unlike passing off, there is no need to prove the existence of goodwill and that there has been a misrepresentation. If the deepfake uses the registered trade mark (the face/voice), or one deemed to be sufficiently similar, in relation to goods or services the same as or similar to those for which the trade mark is registered, there may be an actionable case for trade mark infringement.
Platforms like Meta, X (formerly Twitter), and TikTok have automated IP takedown mechanisms. A trade mark registration certificate can often act as a "golden ticket" in these systems, allowing for rapid removal of content without the need for a complex legal explanation of passing off.
Chelsea footballer Cole Palmer has taken a proactive approach by filing trade marks not just for his name, but for his signature "shivering" goal celebration (a motion mark) and for his facial likeness. This creates a legal perimeter around his brand, allowing his team to strike down unauthorised merchandise or digital avatars that attempt to profit from his image.
Actor Matthew McConaughey has filed trade mark applications in the US for his voice and specific catchphrases (such as "Alright, alright, alright"). By registering these as sensory marks, he aims to prevent AI voice clones from being used in commercials without his consent, effectively treating his vocal identity as a protected brand asset.
Targeted by relentless deepfake crypto scams, Jeremy Clarkson has registered an image of his face as a UK trade mark. He has described the move as a direct response to the inadequacy of other laws to stop scammers from using his image to defraud fans.
With a registered trade mark, it can be easier to file takedown notices against scam ads based on claims of trade mark infringement. Many automated online enforcement tools provided by platforms do not recognise passing off as a valid ground for complaint.
Trade mark infringement generally requires the infringer to use the mark in the course of trade, so not all deepfakes can be prevented with a registered trade mark. A deepfake created for satire, art, or political commentary (i.e. non-commercial use) may not infringe. Trade marks cannot easily be used to silence non-commercial speech.
To be registered, a trade mark must be distinctive. In Jan Smit v EUIPO, the court noted that a face is not inherently distinctive because it is merely a representation of the person, not a badge of commercial origin. Unless the image is stylised to a degree which renders it distinctive (like the KFC Colonel) or has acquired a "secondary meaning" (consumers see the face and immediately think of a specific product brand, not just the person), registration can be refused.
In the UK, however, the position is that an image of a face can be registered as a trade mark, offering celebrities a useful route for enforcement against deepfakes.
A registered trade mark protects the specific image as registered. A registration of a celebrity's face from 2020 may not be effective against a deepfake of them in 2040. To remain protected, celebrities would need to re-file face marks periodically as they age or if their facial features change significantly.
Trade marks operate on a "use it or lose it" basis. To remain valid, a trade mark must be put to genuine use, and not merely token use. Furthermore, there has to be a bona fide intention to use the mark. If a celebrity registers their face for "cosmetics" to stop deepfake makeup ads but never actually sells cosmetics, the mark could be revoked on the basis of non-use after five years. This would likely prevent celebrities from simply hoarding rights across all categories purely for defensive purposes.
As AI generation tools become commoditised, the legal system is playing catch-up. While we wait for a potential statutory "personality right," registered trade marks offer a robust, practical, and immediate defence for public figures.
Despite high hurdles regarding distinctiveness in some jurisdictions, like the EU, and genuine use, the ability to execute rapid takedowns makes registered trade marks of faces and likenesses essential components of any modern reputation management strategy.
Deepfakes are no longer a theoretical risk. If your name, image or brand could be vulnerable to misuse, our team can help you put the right protections in place. Contact Murgitroyd to discuss your trade mark strategy.

Murgitroyd is a leading intellectual property firm supporting innovative businesses across a wide range of sectors. From patents and trade marks to designs, copyright, and IP strategy, their expertise extends beyond legal protection to helping organisations maximise the value of their ideas. Working across industries such as life sciences, engineering, technology, and creative sectors, Murgitroyd combines technical insight with commercial understanding to deliver tailored, forward-thinking solutions.