One of the hallmarks of 2024 was the increased attention paid by regulators to emerging technologies such as Artificial Intelligence (“AI”). In November 2023, the Biden Administration announced new U.S. initiatives to “advance the safe and responsible use of Artificial Intelligence.” The Federal Communications Commission (“FCC” or “Commission”) has spearheaded the Administration’s interest in regulating potentially abusive uses of AI, focusing on AI-generated calls, including those using voice-cloning technology, and how they can be used in robocall scams targeting U.S. consumers. For example, on November 16, 2023, the FCC released a Notice of Inquiry requesting information on the use of AI in unwanted and unlawful telephone calls and text messages under the Telephone Consumer Protection Act (“TCPA”). On the enforcement front, in early February 2024, before the New Hampshire primary, the FCC issued a press release and a cease-and-desist letter to an entity that had initiated robocalls to New Hampshire voters using an AI-generated voice of President Biden.
Days later, the FCC unanimously adopted a Declaratory Ruling clarifying that calls made with AI-generated voices, such as voice cloning, are regulated as “artificial” voice calls under the TCPA. The significance of this ruling is that it brings AI technologies that generate human voices within the TCPA’s regulated category of “artificial or prerecorded voice” calls. As a result, the regulatory requirements applicable to any outbound telephone call to a consumer using an artificial or prerecorded voice now apply to such AI technologies, including obtaining prior express consent from the called party, providing identification disclosures of the calling party, and, for telemarketing calls, presenting required opt-out options.
This Ruling reflects increased regulatory scrutiny of AI by federal government agencies and State Attorneys General. In addition to FCC regulation of AI in telephone calls and text messages under the TCPA, the Federal Trade Commission has finalized a rule banning government and business impersonation fraud and has proposed new protections to restrict AI impersonation of individuals. State Attorneys General also gain a tool to prosecute voice-cloning scams, since many state laws prohibiting illegal robocalls rely on robocall definitions under the TCPA.
In July 2024, the FCC ramped up its regulation of AI-generated calls and texts by adopting a Notice of Proposed Rulemaking (“NPRM”) proposing new rules for AI-generated calls and text messages. Comments were filed in October 2024. The proposed rules would:
- Establish a new definition of an “AI-Generated Call” covering outbound calls only, not the use of AI technologies to answer inbound calls, such as virtual customer service agents;
- Establish new consent and identification disclosure requirements for artificial or prerecorded voice messages that use AI-generated voices and for autodialed text messages that incorporate AI-generated content. This would require a “clear and conspicuous” disclosure that the consumer’s consent to receive artificial and prerecorded calls includes consent to receive AI-generated calls, and a similar disclosure that consumer consent to receive autodialed text messages may include consent to receive AI-generated content. It would also require an identification at the beginning of each such call, similar to that already required for prerecorded or artificial calls, that a message using an AI-generated voice is using “AI-generated technology.”
- The NPRM proposed to exempt, as serving the public interest, certain non-telemarketing AI-generated calls made by individuals with speech or hearing disabilities.
- The NPRM sought comment on the development and potential oversight of AI call detection and blocking technologies, and on the privacy implications that the use of such technologies might raise.
- The NPRM also sought comment on how the National Institute of Standards and Technology’s (“NIST”) AI Risk Management Framework (“RMF”) could inform the FCC’s understanding of the risks relating to the use of AI technologies to “combat unwanted and fraudulent calls.”
Industry commenters have urged caution and an incremental approach to regulating AI-assisted communications. They note that the Commission’s February 2024 Declaratory Ruling already established that an AI-generated voice is an artificial or prerecorded voice under the TCPA, requiring prior consumer consent, so consumers are already protected from calls made using “voice cloning,” “deepfakes,” or other AI-generated calling technologies that fall under the TCPA. They express concern that the FCC’s proposal to require separate consent for “AI-generated calls” could cause significant confusion for consumers and callers, create uncertainty for lawful callers, and chill innovation and beneficial uses of AI, to the “harm [of] callers and consumers.” One possible approach would be a narrower rule requiring a caller that uses AI to clone a human voice, without the consent of the person whose voice is cloned, to disclose the use of an AI-generated voice at the beginning of the call to alert consumers.
The use of AI in the text message context does not pose the same risks of consumer harm that undisclosed voice cloning may. For example, using AI to generate the content of a text message (an advancement on existing predictive software), with no falsification of identity, is unlikely to “deceive the text recipient.” Should consent be required to use AI to read and prepare a response to a text?
Under existing FCC rules, consent is not required to send a confirming text when a consumer opts out of receiving text messages. Commenters have thus raised concerns about conforming any new AI rules with existing rules without confusing consumers or stifling innovation.
Another proposal is to create an “established business relationship” exception to any additional AI-specific disclosures (such an exemption is already recognized for telemarketing calls made to numbers on the National Do Not Call Registry), on the theory that an AI-generated call or text to an existing customer does not raise the same concerns of consumer deception by use of AI to commit fraud or scams.
Commenters also recommend that the FCC coordinate with industry solutions such as branded calling (for example, leveraging the STIR/SHAKEN call authentication already provided by originating and transporting carriers) and Rich Call Data (“RCD”) services such as “Branded Caller ID,” which displays caller name, logo, and call reason information to consumers, empowering them to make “more informed decisions” about which calls to answer, including detecting and avoiding unlawful AI-generated calls. Future “watermarking” of AI-generated voice content through hidden signals at the device level could support whitelisting of trusted providers of AI-generated communications.
With the Trump Administration set to take office in less than one month, and Commissioner Brendan Carr set to lead a Republican-majority FCC as the next FCC Chairman, it remains to be seen whether regulation of AI-generated voice calls and text messages will be a priority of the next Administration, or what regulations may take shape. It is likely that industry solutions will be given at least as much, if not more, weight as new regulatory mandates for AI-generated voice calls and texts.
1 Press Release, White House, FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence (Nov. 1, 2023).
2 This matter resulted in the FCC Enforcement Bureau assessing a proposed forfeiture of $2 million against a domestic telecom provider for attesting to 3,978 spoofed robocalls (falsely using the telephone number of a New Hampshire political operative) that carried a “deepfake” generative AI voice message pretending to be President Biden and targeted New Hampshire primary voters two days before the 2024 Democratic Presidential Primary Election. See Notice of Apparent Liability for Forfeiture, In the Matter of Lingo Telecom, LLC, File No. EB-TCD-24-00036425 (May 23, 2024), https://docs.fcc.gov/public/attachments/FCC-24-60A1.pdf.
3 Declaratory Ruling, In the Matter of Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, CG Docket No. 23-362, FCC 24-17 (rel. Feb. 8, 2024).
4 See, e.g., Comments of ACA International, America’s Credit Unions, American Financial Services Ass’n, Mortgage Bankers Ass’n, and Online Lenders Alliance (“ACA Comments”) (Oct. 10, 2024) at 2; Comments of CTIA (“CTIA Comments”) (Oct. 10, 2024) at 8.
5 Id.
6 ACA Comments at 3; CTIA Comments at 8.
7 CTIA Comments at 9-10.
8 ACA Comments at 5.
9 Id.
10 See 47 C.F.R. § 64.1200(a)(12).
11 ACA Comments at 7.
12 CTIA Comments at 11-12.
13 ACA Comments at 10.
14 See Wall Street Journal, “How A Telecom Lawyer Climbed to the Top” (Dec. 26, 2024) at A4 (referring to Commissioner Carr’s criticism of the Biden Administration’s attempted reinstatement of Obama-era net neutrality rules as a “needless waste of time and resources”).