Deepfakes, Voice Cloning, and AI Impersonation: The Global Rules Are Already Here, and They Don’t Agree
A cloned executive voice. A fake endorsement. A synthetic campaign ad. A deepfake intimate image. Each of these can now trigger criminal liability, consumer-protection claims, platform-removal obligations, or identity-rights lawsuits—depending on where your business operates and which country’s law applies first.
Governments around the world are regulating AI-generated likeness, voice, and synthetic media, but they are not doing it the same way. Some are using transparency rules. Some are using criminal law. Some are moving through election law. Others are building out platform-removal duties or experimenting with personality-rights protections.
The result is a growing global patchwork. For businesses operating across borders, it is already a live compliance problem.
The world is not converging on one rule
The European Union is emphasizing transparency and disclosure under the AI Act. The United States just passed a federal law targeting non-consensual intimate imagery. China regulates synthetic-media providers directly, from labeling requirements to identity-manipulation rules. South Korea, Australia, France, Singapore, and Brazil have all taken more targeted approaches—often focused on sexual deepfakes or election integrity rather than broad identity ownership.
These are not variations on the same theme. They are different legal theories, different compliance obligations, and different risk profiles. A business cannot solve this by memorizing one country’s rule and assuming the rest of the world will follow.
Europe is layering obligations, not writing one clean rulebook
At the EU level, the AI Act’s Article 50 imposes transparency obligations for AI-generated or manipulated content, including deepfakes. That is a disclosure model—it does not create a general ownership right in one’s image or voice.
At the same time, the EU’s 2024 directive on combating violence against women and domestic violence adds pressure for stronger member-state responses to image-based abuse. That is a victim-protection and criminal-law-adjacent model.
France illustrates the narrower, harm-based approach. Its 2024 SREN law amended the French Criminal Code to explicitly prohibit certain non-consensual deepfakes. France did not try to reinvent identity as copyright. It targeted a specific abuse.
Denmark is the most conceptually ambitious example—it has proposed a copyright-law amendment aimed at stronger control over unauthorized digital reproductions of likeness and voice. But it is better understood as a proposal worth watching than as a tested legal regime.
For businesses, Europe is not one problem. It is layered obligations: disclosure, victim protection, targeted criminalization, and potentially broader personality-rights rules. That is harder to manage than a single rulebook.
The United States is fragmented but no longer standing still
The United States still does not have one clean federal framework for AI likeness misuse. For many uses of image, voice, and identity, you still have to navigate state-law publicity rights, privacy claims, unfair competition theories, tort law, platform policies, and sector-specific rules.
In May 2025, the TAKE IT DOWN Act was signed into law. It addresses non-consensual intimate imagery, including AI-generated deepfakes, and requires covered platforms to remove reported content within 48 hours of a valid request and make reasonable efforts against reuploads. That is not a general federal deepfake law, but it is a meaningful national intervention in one of the most harmful categories of AI abuse.
U.S. risk now sits on multiple levels at once. Watch state-level identity rights, federal rules on intimate-image abuse, platform obligations, and expanding scrutiny around deceptive synthetic media.
The United Kingdom is moving through criminal law, not property rights
The United Kingdom has been moving through criminal-law and online-safety channels rather than through copyright or broad image-rights theory.
In January 2025, the UK government announced it would make creating sexually explicit deepfake images a criminal offense. The sharing offense created through the Online Safety Act framework was already in force. This reflects a recurring pattern in common-law jurisdictions: rather than inventing a sweeping new property right in identity, lawmakers are expanding the law of abuse, sexual exploitation, and platform safety.
For businesses, liability often turns less on abstract authorship questions and more on whether the content is harmful, sexual, deceptive, or abusive—and how quickly your company responds when notified.
China is regulating the technology itself
China’s deep synthesis rules regulate providers and users of deep synthesis internet information services. They impose requirements around labeling, identity-related manipulation, and provider responsibilities. They prohibit certain harmful or misleading uses of synthetic content and require technical and governance controls.
The focus shifts upstream. In China, the question is not just whether a victim can sue later. It is whether the service was built, deployed, labeled, and controlled in a compliant way from the start. For companies offering AI tools or synthetic-media features in or into China, that is a much more operational compliance problem.
South Korea and Australia show how fast criminal law can move
South Korea has taken one of the toughest public stances on sexually exploitative deepfakes. Legislation passed in 2024 made viewing or possessing certain non-consensual sexual deepfakes illegal, alongside a broader crackdown on deepfake pornography.
Australia moved through targeted criminalization as well. Its Criminal Code Amendment (Deepfake Sexual Material) Act 2024 strengthened offenses targeting the creation and non-consensual sharing of sexually explicit material online, including material created or altered using AI.
These examples show how quickly governments move when deepfake abuse becomes politically urgent. What your company treats as a content-moderation issue today can become a criminal-law issue tomorrow.
Some countries are regulating deepfakes mainly through election law
Not every government is focused first on sexual abuse or celebrity impersonation. Some are treating deepfakes as an election-integrity problem.
Singapore amended its election laws in 2024 to prohibit the publication of online election advertising containing certain digitally generated or manipulated content about candidates, with the amendments commencing in January 2025.
Brazil’s Superior Electoral Court adopted Resolution No. 23.732/2024, updating electoral advertising rules to address AI and deepfakes in the election context.
For platforms, ad-tech companies, political consultants, media distributors, and AI tool providers, this is a separate compliance track. Synthetic content that is lawful in a commercial setting can be illegal in an electoral one.
Canada shows that “still developing” is not the same as “low risk”
Canada is a useful reminder that some jurisdictions are still unsettled. The federal government introduced Bill C-63, the Online Harms Act, but the bill died on the Order Paper when Parliament was prorogued in January 2025.
A country without a finalized regime is not necessarily low risk. Those are often exactly the jurisdictions where new obligations appear quickly—after a major incident, a political shift, or a revived legislative push.
Stay on top of this globally, and do it continuously
If your business builds AI tools, uses synthetic media in advertising, hosts user-generated content, licenses talent, manages public-facing communications, or operates internationally across entertainment, gaming, social media, or e-commerce, you need a live view of how these rules are changing.
Not once a year. Not when a scandal hits. Not when outside counsel flags it after the fact. Continuously.
The reason is simple. One synthetic output can trigger multiple legal theories in multiple countries at the same time. A cloned voice can create fraud exposure. A fake endorsement can create consumer-protection and identity-rights claims. A synthetic campaign clip can trigger election-law rules. A deepfake intimate image can create criminal exposure and takedown obligations. A platform that labels content sufficiently in one market may still fail in another that cares more about consent, removal speed, or provider controls.
This is no longer a niche policy issue. It is ordinary cross-border compliance work.
What to do now
Map your exposure. Identify where synthetic likeness, voice, and avatar features appear in your business—marketing, customer support, creator tools, platform uploads, internal uses. If you do not know, your compliance function does not know either.
Track country by country. One global AI policy memo is not enough. The world is not moving in one direction, and what is compliant in one market may be criminal in another.
Audit your disclosure, consent, moderation, and takedown systems. In some countries labeling will matter most. In others, the key question will be consent, criminal exposure, or speed of response.
Break down your internal silos. Deepfake regulation cuts across product, marketing, legal, trust-and-safety, and public-policy teams. If those functions are not talking to each other, your exposure is larger than you think.
Assume more countries will move, not fewer. The trend line is clear even if the doctrines differ.
The bottom line
A lot of early commentary treated Denmark as the story. It is not.
The real story is that countries around the world are regulating deepfakes, voice cloning, and AI impersonation through different combinations of copyright, transparency rules, criminal law, election law, platform obligations, and consent-based restrictions. That patchwork is getting denser, not simpler.
The smarter move is to treat AI impersonation law the way serious companies eventually learned to treat privacy law: not as a legal backwater to be managed once a year, but as a fast-moving, cross-border compliance issue that touches products, platforms, marketing, operations, and reputation simultaneously.
The rules are already here. They just don’t agree with each other yet.