The Chair of the Federal Communications Commission (FCC), Jessica Rosenworcel, has proposed to her colleagues that the FCC investigate “whether the agency should require disclosure when there is AI-generated content in political ads on radio and TV.” The Notice of Proposed Rulemaking, which is not yet public and still must be approved by a majority of the Commission, would seek to “increase transparency” by:

  • Seeking comment on whether to require an on-air disclosure and written disclosure in broadcasters’ political files when there is AI-generated content in political ads,
  • Proposing to apply the disclosure rules to both candidate and issue advertisements,
  • Requesting comment on a specific definition of AI-generated content, and
  • Proposing to apply the disclosure requirements to broadcasters and entities that engage in origination programming, including cable operators, satellite TV and radio providers and section 325(c) permittees.

The FCC’s announcement of the proposal, which does not propose any prohibition of AI-generated content but is already opposed by Republican Commissioner Carr, stated that “the use of AI is expected to play a substantial role in the creation of political ads in 2024 and beyond, but the use of AI-generated content in political ads also creates a potential for providing deceptive information to voters, in particular, the potential use of ‘deep fakes’ – altered images, videos, or audio recordings that depict people doing or saying things that they did not actually do or say, or events that did not actually occur.”

The concern is not hypothetical: the FCC recently proposed a $6 million fine against the alleged perpetrator of “illegal robocalls [which] carried a deepfake generative artificial intelligence (AI) voice message that imitated U.S. President Joseph R. Biden, Jr.’s voice and encouraged potential voters not to vote in the upcoming Primary Election (Deepfake Message).” The fine was based on violations of the Truth in Caller ID Act of 2009.

The FCC has had a particular focus on the use of AI-generated voices in automated calling campaigns, political or otherwise. In February of this year, the Commission ruled that voice cloning using AI constitutes an artificial or prerecorded voice, requiring compliance with the relevant provisions of the Telephone Consumer Protection Act of 1991.

We expect the FCC to continue to be vigilant regarding the evolving use of AI in the telecommunications environment. We will be tracking the FCC’s actions on the most recent proposal regarding political ads and will provide an update if it is formally approved.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.

Paul Besozzi

I have practiced in the telecommunications regulatory field, including before the FCC and state regulatory agencies, for some 35 years. This has included advising clients on all manner of compliance, rulemaking, enforcement and legislative issues relating to the Telephone Consumer Protection Act and Junk Fax Act, particularly before the FCC which develops the regulations implementing those statutes. My efforts include reviewing clients’ technology and TCPA compliance plans to determine whether they meet FCC requirements and advising on strategies for raising issues with the FCC.