About the Publication
The AI News Weekly (《人工智能资讯周报》) examines the implications of artificial intelligence for public policy, governance, and policy recommendations, and explores AI's impact on business, politics, and society in order to identify potential research areas and opportunities for collaborative research and institutional partnerships. The publication focuses on developments in China's AI landscape and reflections on AI, while also tracking AI-related research worldwide. Through reliable research, it aims to help enterprises, research institutions, and the public anticipate and adapt to technology-driven change.
Author: 陈楚珩
Editor-in-Chief: 刘仪
Abstract
Since 2023, the issue of deepfake technology in U.S. election campaigns has become increasingly prevalent. Both federal and state governments have introduced various bills addressing this concern, focusing mainly on the requirement to disclose AI-generated content. These bills differ in scope and specific provisions, such as time limits, based on the states' individual circumstances. However, the slow progress of federal legislation, potential interference with state laws, and implications for free speech have resulted in a legislative impasse regarding AI in elections, with little advancement expected in 2024.
Image source: Times
In April 2023, the Republican National Committee released an entirely AI-generated advertisement depicting the future of the U.S. under a re-elected President Biden. The ad, which disclosed in fine print that it was created with AI, featured realistic but fabricated images: boarded-up storefronts, armed soldiers patrolling the streets, and panic over rising immigration. In July, a Republican super PAC used AI voice cloning to imitate former President Trump's voice, making it appear as though he was narrating social media posts he never actually made. Since then, various states and the federal government have proposed legislation and rules aimed at regulating AI's impact on political elections.
I. Federal and State Legislative Progress
Federal-Level Legislation
On July 27, 2023, Senator Brian Schatz (D-HI) introduced the "AI Disclosure Act of 2023," which would require generative AI developers to label content produced by AI systems (including chatbots) so that users know when they are viewing AI-generated content or interacting with an AI. The legislation applies not only to election activity but to political communication broadly. Since being referred to the Committee on Commerce, Science, and Transportation, the bill has seen no further progress.
Several bills were subsequently introduced in the Senate to regulate AI usage in elections. The "Protecting Elections from Deceptive AI Act" would prohibit the distribution of materially deceptive AI-generated content relating to federal candidates, especially in political advertising. The "AI Transparency in Elections Act" (S.3875) would require political ads containing AI-generated content to carry disclaimers noting the use of AI. The "Preparing Election Administrators for AI Act" directs the U.S. Election Assistance Commission to develop guidelines to help election officials manage the risks and uses of AI in elections, including training programs and best practices. After referral to the Committee on Rules and Administration, these bills were revised twice and were scheduled for consideration on May 15, 2024.
H.R. 4611, introduced on July 13, 2023, would prohibit the dissemination of materially misleading AI-generated audio in political communications, with penalties of up to two years in prison, a fine, or both; it was referred to committee the same day. The "Federal AI Risk Management Act" (H.R. 6936), introduced on January 10, 2024, focuses on managing AI risks, including AI's use in elections, and would require standardized risk-management protocols, regular audits, and transparency measures; it, too, was referred to several committees on the day of its introduction.
Mississippi
On April 30, 2024, the governor of Mississippi signed SB 2577, which criminalizes the knowing digital dissemination, within 90 days of an election, of content intended to injure a candidate or influence the outcome of an election. The bill's definition of covered digital conduct includes realistic deepfake alterations or fabrications of images or audio, and violations carry penalties of up to one year in prison and a $5,000 fine.
California
On September 17, 2024, California Governor Gavin Newsom signed three new bills into law to combat the spread of misinformation and deceptive election content. These measures build on California's 2019 legislation (AB 730), which prohibits distributing manipulated videos, images, or audio of political candidates within 60 days of an election. The 2024 "Defending Democracy from Deepfake Deception Act" (AB 2655) requires large online platforms to fulfill certain removal obligations during the 120 days before an election and disclosure obligations afterward. AB 2839, addressing deceptive media in election advertisements, broadly bans the distribution of election communications containing certain materially misleading content. AB 2355, amending the Political Reform Act of 1974 with respect to political advertising and AI, requires AI-generated political ads to carry disclaimers.
Florida
On April 26, 2024, the governor of Florida signed House Bill 919, addressing the use of AI in political advertising. Under the bill, a political ad containing AI-generated images, video, audio, or other digital content intended to mislead voters or injure a candidate must prominently disclose that the content was generated by AI.
Michigan
On December 1, 2023, Michigan became the fifth state to require disclosure of AI usage in political ads. The legislation, signed by Governor Gretchen Whitmer, defines "AI" as a machine-based system that can make predictions, recommendations, or decisions impacting real or virtual environments based on human-defined goals. The law requires political advertisers using AI to state that their ads are "entirely or largely generated by AI," with clear audio and visual disclosures lasting at least three seconds for audio and four seconds for video.
New York
On May 10, 2023, the Governor of New York signed the Political AI Disclosure (PAID) Act, which requires political communications that use synthetic media to disclose that they were generated by AI and mandates that entities using synthetic media keep records of such use. In mid-April 2024, New York Assembly Bill A9028 amended the election law's provisions on "political communications," strengthening protections against the illegal or unauthorized dissemination of false materials. Under the revised Section 14-106 of the election law, any "individual, corporate association, company, campaign, committee, or organization distributing or publishing political communications" must label a communication as AI-generated if it knows or should know that the communication has been altered by AI technology yet appears authentic; only political communications carrying such disclaimers may be published. If a candidate's "voice or likeness" is used in a deepfake political communication without the required disclaimer, the affected candidate may seek injunctive relief to block its distribution and publication, as well as recover legal fees.
Summary
From January 1 to July 31, 2024, 14 U.S. states enacted new laws or provisions regulating the use of AI in political communications. According to Public Citizen, as of 2024, 21 states have passed enforceable rules addressing deepfakes in elections, while six states are still debating relevant bills. Thematic analysis shows that 151 bills concerning deepfakes and deceptive media in the electoral context have been introduced or passed so far in 2024, roughly a quarter of all AI-related bills. Most of these bills (at least 100) specifically target AI deepfakes and other deceptive media practices in political communications to the public, generally covering deceptive communications about candidates or content created to sway voters for or against a candidate.
However, the scope and targets of these bills vary widely, with differences in rationale, enforcement entities, subjects covered, and penalties for violations. A primary distinction is whether they completely prohibit the use of deepfakes and other manipulated media in political communications or allow their use as long as AI-generated content is disclosed. Additionally, in most states, there are explicit time limits on bans and disclosures, typically within 90 or 120 days before an election, while some states have no time constraints. Overall, states primarily require disclosure without completely banning the use of AI.
II. Legislative Controversies
Federal Legislation Challenges State Authority
Criticism of federal legislation comes from two directions. On one hand, there are concerns that federal laws could encroach on states' independent legislative authority and thereby undermine electoral fairness. Nebraska Republican Senator Deb Fischer, a senior member of the Senate Rules Committee, remarked that they are "federalizing" issues that should belong to state jurisdiction, and she supports states' efforts to regulate their own elections. In general, federal legislation provides direction while state laws supply more specific rules; when federal law is overly prescriptive, however, states can make only limited adjustments. States have so far defined the scope and timing of their laws narrowly to navigate challenges posed by the First Amendment. For instance, the Texas, California, and Minnesota bills apply only to maliciously disseminated content that harms candidates, influences elections, or deceives voters. Other limits include provisions that apply only within 30, 60, or 90 days before an election. Some states exempt print publications and broadcasters, and California exempts satire and parody.
On the other hand, lawmakers worry about federal legislation's impact on constitutional free-speech protections. Fischer, while acknowledging the risks posed by AI, considers the bills "too vague," stating that they "involve previously unregulated speech that goes beyond deepfakes." This has fueled debate over whether regulating political deepfakes could chill protected speech, and whether existing safeguards such as defamation law are adequate to address abuses.
Federal Legislation Stalls: A Technology-Neutral Approach and No New Rules for This Year's Election
This year, several AI and election-related bills cleared the Senate Rules Committee but have yet to reach the Senate floor. Meanwhile, the Federal Election Commission (FEC) and the Federal Communications Commission (FCC) disagree over whether to establish new rules. The FCC has proposed requiring broadcasters to disclose the use of AI in programming, but the proposal has met resistance from the FEC. The FEC has said it is unwilling to encroach on other authorities' jurisdiction: "Establishing rules to limit or prohibit AI in campaign communications exceeds the commission's limited legal authority to regulate political advertising." The FEC approved a compromise proposal from Democratic commissioners Dara Lindenbaum and Shana Broussard, together with Republican commissioners Trey Trainor and Allen Dickerson, by a 5-1 vote.
On September 10, 2024, the FEC announced that it would not create new rules on the use of AI to generate deceptive content in federal elections during this election cycle. This is not a green light for deepfakes and other forms of electoral deception; rather, it reflects a technology-neutral regulatory approach, under which the commission lacks the authority to draft rules specific to AI or any other technology. The FEC stated that existing federal law prohibiting "fraudulent misrepresentation" still applies to AI-generated content. It also noted that the commission lacks the technical expertise to effectively regulate emerging technologies such as AI, so AI-specific rules might fail to achieve their intended goals and could stifle innovative political speech or restrict political discourse. The FEC's approach signals that federal regulation will target harmful behaviors and activities rather than particular tools and technologies, which helps address challenges without sacrificing potential benefits and provides a flexible framework for adapting to future disruptive innovations. Nevertheless, ongoing disputes among federal commissions and congressional inaction could leave voters without effective legal protections against AI abuses before the 2024 election.
III. Conclusion
Since 2023, the issue of deepfake technology in U.S. election campaigns has become increasingly prominent. Both federal and state governments have introduced various bills addressing this concern, primarily focused on the requirement to disclose AI-generated content, with states specifying additional provisions like time limits based on their situations. However, the slow progress of federal legislation, potential interference with state laws, and implications for free speech have resulted in a legislative impasse regarding AI in elections, with little advancement expected in 2024.
Editor: 郭紫馨
Executive Editor: 邹明蓁
Intellisia Institute (海国图智研究院) is one of the first independent new-type social think tanks in China. It specializes in international affairs, focusing in particular on China-U.S. relations, Chinese diplomacy, risk forecasting, and emerging technologies and international relations. Through publishing books and reports, organizing academic and public events, and undertaking research projects, it provides knowledge resources for governments, enterprises, the media, academia, and the general public, helping them better "open their eyes to see the world," understand China's relationship with the world, and obtain strategic insights and policy solutions for their international affairs.