The New York Law That Makes Advertisers Reveal Every AI-Generated Actor

What the New Law Actually Says

New York has taken a bold step in the world of advertising. The state has passed a law that requires companies to clearly tell the public when an actor or performer in their advertisement is not a real human being but instead a digital creation made using artificial intelligence. In simple terms, if a brand uses an AI-generated person to sell a product, it must say so openly.

This law focuses on what experts call synthetic media, which means any video, image, or audio content where a person has been digitally created or significantly altered using AI tools. The rule applies to a wide range of advertising formats, including television commercials, online video ads, social media promotions, and digital billboards.

The disclosure requirement is not just a small footnote in fine print. The law demands that the AI disclosure be clear, visible, and easy for the average person to understand. Advertisers cannot bury this information at the bottom of a screen or speak it too quickly to be noticed.

Why New York Decided to Act

The entertainment and advertising industries have been using AI technology to create lifelike digital humans for several years now. Some of these virtual performers look so realistic that most viewers cannot tell the difference between a human actor and a computer-generated one. This growing gap between what is real and what is artificial pushed lawmakers to take action.

There are two main concerns that drove this legislation forward. The first is consumer transparency. People have a right to know what they are actually watching. When someone sees a person enthusiastically recommending a product, they naturally assume that person is a real human being with real experiences. If that person is entirely AI-generated, the emotional and persuasive impact of the advertisement changes significantly.

The second concern involves protecting human performers. Actors, models, and other on-screen talent earn their living by appearing in advertisements. When companies replace them with AI-generated alternatives without clear disclosure, it threatens real jobs and the livelihoods of real people. The law sends a message that synthetic media cannot simply push human performers out of the picture without accountability.

How This Affects Advertisers and Brands

For companies that run advertising campaigns in New York or target New York audiences, this law creates new responsibilities. Here is a breakdown of what they now need to consider:

  • Identifying AI use early: Brands must know from the production stage whether any performers in their ads are AI-generated or have been digitally altered beyond normal editing.
  • Adding clear labels: Any advertisement featuring synthetic media must include a disclosure statement that is easy to read or hear.
  • Reviewing existing campaigns: Companies may need to audit ads that are already running to confirm they comply with the new standards.
  • Working with legal teams: Advertising and marketing departments will likely need to partner more closely with legal experts to make sure every campaign meets the requirements.

Failure to comply with the law can result in financial penalties. This gives brands a strong reason to take the disclosure rules seriously rather than treating them as optional suggestions.

What Counts as an AI-Generated Actor

One of the more interesting parts of this law is how it defines what qualifies as an AI-generated actor. It is not just about fully digital characters created from scratch. The rules also cover situations where a real person’s image, voice, or likeness has been significantly altered using AI to look or sound like someone they are not.

For example, if a company takes footage of one actor and uses AI to replace their face with another person’s face, that would fall under the law’s definition of synthetic media. Similarly, if a brand creates a voiceover using AI to mimic a celebrity’s voice without that person’s involvement, the advertisement must disclose that the voice is AI-generated.

This broad definition is intentional. Lawmakers wanted to close loopholes that might otherwise allow companies to use AI deceptively while technically claiming they were not using a fully synthetic performer.

The Bigger Picture for Advertising Transparency

New York’s law is part of a much larger conversation happening across the country and around the world about how society should handle AI-generated content. As AI tools become more powerful and more affordable, the line between real and artificial is going to keep getting blurrier.

Several other states and countries are also working on their own rules around synthetic media and AI disclosure. New York’s approach could serve as a model for future legislation elsewhere. By setting clear standards now, the state is helping to shape what responsible use of AI in advertising might look like going forward.

Consumer advocates have largely praised the law as a necessary step toward greater honesty in advertising. Many argue that transparency should be the foundation of any healthy relationship between brands and the people they are trying to reach.

How Audiences Are Responding

Public reaction to the law has been mostly positive. Many people feel uncomfortable with the idea that they might be watching an AI-generated spokesperson without knowing it. Trust is a critical factor in advertising, and hidden use of synthetic media can damage the credibility of a brand if it comes to light later.

Research on consumer attitudes suggests that most people are open to companies using AI tools in creative ways, as long as those companies are honest about it. The issue is not necessarily the technology itself but the lack of transparency around how it is being used. Knowing that an advertisement features a digital performer rather than a human one does not automatically make it less effective, but hiding that information can feel like a form of deception.

What This Means for the Future of Advertising

The advertising industry is going through a major transformation. AI tools now allow brands to create content faster and at lower cost than ever before. Virtual influencers and AI-generated spokespeople are becoming more common, and that trend is likely to continue regardless of legislation.

Laws like New York's are not stopping that transformation; they are making sure it happens with a baseline level of honesty. Companies that choose to use AI-generated actors in their campaigns are still free to do so. They simply have to be upfront about it.

In many ways, this could actually benefit forward-thinking brands. Being transparent about AI use can position a company as trustworthy and innovative rather than sneaky or deceptive. Audiences who appreciate honesty may respond more positively to a brand that openly says it used AI technology than to one that gets caught hiding it.

Key Takeaways

New York’s advertising disclosure law is a significant development for anyone involved in marketing, media, or technology. Here are the most important points to remember:

  • Advertisers must clearly disclose when any performer in an ad is AI-generated or significantly altered using AI.
  • The disclosure must be easy to understand and cannot be hidden in fine print.
  • The law covers a wide range of synthetic media, including digitally created faces, altered appearances, and AI-generated voices.
  • Companies that do not comply face financial penalties.
  • The law is designed to protect both consumers and human performers from the unchecked spread of synthetic media in advertising.

As AI continues to reshape how content is created and distributed, laws like this one will likely become more common. The conversation about synthetic media, transparency, and consumer rights is only getting started, and New York has put itself at the center of that discussion.
