Use of AI in Political Ads: Should It Be Regulated and by Whom?

With the presidential election less than one year away, there is growing fear that deepfakes and voices generated by artificial intelligence (AI) in political advertisements could create a “tsunami of disinformation”[1] that would impact the result of the election.[2] Political ads using AI have already been broadcast. For example, in April, the Republican National Committee released the first entirely AI-generated ad, depicting the dystopian society that would ensue if President Joe Biden were reelected.[3] Also, in June, Ron DeSantis’ campaign posted an AI-generated ad portraying former President Donald Trump hugging Dr. Anthony Fauci (although no such “hug” ever took place between the two men).[4] Although AI-generated advertisements may deceive the voting public, there is little consensus about what should be done to regulate AI or who should be responsible for regulating it.

Currently, no federal statute or rule specifically regulates the use of AI in political campaign advertising.[5] However, Congress and the Federal Election Commission (FEC) are both considering whether and how it should be regulated. This past May, the REAL Political Advertisements Act was introduced in Congress.[6] This proposed legislation, which is intended to promote greater accountability for the use of AI-generated content in political advertisements, would require ads containing an image or video generated by AI to disclose this fact.[7] However, neither the Senate nor the House has taken any further legislative action since the bill’s initial introduction. Another piece of legislation, the Protect Elections from Deceptive AI Act, is also currently being considered by Congress.[8] This bill specifically prohibits the use of AI to generate deceptive content that falsely depicts federal candidates in political election ads.[9] However, if passed, this bill may be subject to more First Amendment challenges than the Federal Election Campaign Act.

Due to congressional inaction, the FEC has been compelled to consider its role and whether it has authority to regulate AI-generated content in political advertisements. False statements in political advertisements, including untrue assertions regarding opponents, are generally protected under the First Amendment.[10] However, section 30124(a) of the Federal Election Campaign Act (FECA) contains a provision that prohibits a candidate running for federal office from misrepresenting “himself . . . as speaking or writing or otherwise acting for or on behalf of any other candidate . . . on a matter which is damaging to such other candidate. . . .”[11] Under this provision, federal candidate A may issue a statement claiming that opposing federal candidate B made a damaging assertion that candidate B did not in fact make; what section 30124(a) prohibits is candidate A “from issuing a statement that was purportedly written by federal candidate B, and which concerned a matter which [is] damaging to candidate B.”[12] Similarly, a candidate may not broadcast or issue an ad purporting to be made by their opponent when it is not.[13] However, no matter the content of the advertisement, a political ad will generally not run afoul of section 30124(a) as long as it includes a disclaimer identifying the issuer of the advertisement.

There are questions about the applicability of section 30124(a) to political ads that include AI-generated images, audio, or video of an opposing candidate. According to the plain statutory language, as long as the ad includes a statement by the candidate “that identifies the candidate and states that the candidate has approved the communication,”[14] the ad can make any representation about opposing candidates, whether true or untrue, without violating the FECA. This does not, however, preclude the possibility that a candidate could still face civil or criminal liability for fraud or libel for false or defamatory statements made about an opposing candidate.[15] While there is an interest in the public not being deceived by false information in political advertisements, there is a presumption that the voting public will not accept at face value negative claims made about a candidate when it knows those claims are being put forth by a political rival. For this reason, along with broad deference to First Amendment protections of political speech, Congress and the FEC have been hesitant to regulate misrepresentations of political opponents in campaign ads. However, with the evolution of technology and the capacity to create extremely convincing AI-generated images and videos, many are concerned that society will no longer be able to distinguish between reality and falsity in political advertising.

One non-profit advocacy group, Public Citizen, filed a Petition for Rulemaking requesting that the FEC amend 11 C.F.R. § 110.16 to make clear that it is prohibited for candidates or their agents to “fraudulently misrepresent other candidates or political parties through deliberately false [AI] generated content in campaign ads and other communications.”[16] The FEC invited public comment on Public Citizen’s Petition; the comment period closed on October 16, 2023.[17] Some commenters have supported the petition, urging the FEC to act and arguing that section 30124(a) provides the FEC with the needed authority.[18] They also reference the FEC’s prior assertion that the Act “encompasses, for example, a candidate who distributes letters containing statements damaging to an opponent and who fraudulently attributes them to the opponent.”[19] These proponents maintain that disseminating “a deepfake impersonating another candidate or political party representative’s voice or image[,] in a manner that is damaging to the candidate” is analogous to the example provided by the FEC and should therefore also be prohibited.[20]

On the other side, there are those who claim the FEC does not have authority to regulate the use of AI in political advertisements. FEC Commissioner Allen Dickerson has maintained that the FECA does not grant the FEC authority to regulate AI-generated content in political ads.[21] He notes that the FECA does not prevent those running for office from making false claims about what an opponent has said or done, and that Congress has refused to change this.[22] Commissioner Dickerson also notes that regulating the use of AI could pose serious First Amendment concerns.[23] Thus, some argue the FEC’s hands are tied even as they acknowledge the dangers of permitting AI-generated content to be used in political advertisements.

What action, if any, will be taken remains to be seen. In the meantime, the presidential election looms on the horizon, and political ads using AI are sure to require voters to discern for themselves what is real and what is not.


Evan Dennis

Evan Dennis is a second-year J.D. candidate at Fordham University School of Law and works as Director of HRIS at Touro University. He is a staff member of the Intellectual Property, Media & Entertainment Law Journal and holds a B.A. in Philosophy from Rutgers University.