Bipartisan concern over AI-generated election interference has prompted a patchwork of laws across the country, as state lawmakers seek to blunt the impact of misinformation and keep deepfakes from overwhelming voters.
More than a dozen Republican- and Democratic-led states have enacted legislation this year to regulate the use of deepfakes – realistic fake video, audio and other content created with AI – in campaigns. The laws come amid warnings from the Department of Homeland Security over the ability of deepfakes to mislead voters and as questions remain over whether Congress can take meaningful action before November.
Florida, Hawaii, New York, Idaho, Indiana, New Mexico, Oregon, Utah, Wisconsin, Alabama, Arizona and Colorado have passed laws this year requiring disclosures in political ads with deepfake content. While Michigan, Washington, Minnesota, Texas and California already had laws regulating deepfakes, Minnesota updated its law this year to require, among other provisions, that a candidate forfeit their office or nomination if they violate the state’s deepfake laws.
In states such as New York, New Mexico and Alabama, victims can seek a court order to stop the content.
Violators of deepfake-related laws in Florida, Mississippi, New Mexico and Alabama can face prison time. Someone who violates Mississippi’s law with the intention to deter someone from voting or to incite violence or bodily harm can be sentenced to a maximum of five years, while the penalty in Florida is a first-degree misdemeanor, punishable by up to one year in jail.
Breaking the law could also lead to hefty fines in some states: In Utah and Wisconsin, violators can be fined up to $1,000 per violation, and in Oregon and Mississippi, fines can reach $10,000.
While candidates already have avenues to challenge deceptive ads, it’s too early to tell whether the laws will be sufficient to counter deepfakes, said Amy Beth Cyphert, a law lecturer at West Virginia University’s College of Law. AI poses a unique challenge, Cyphert said, because of the speed at which it’s evolving.
“I mean anyone, even with very little technological savvy, could probably create a deepfake if they knew where to look,” she added. “And then, you really have a whole new world.”
For Arizona state Rep. Alexander Kolodin, a Republican who sponsored one of the state’s new AI-generated content laws, the ability of deepfakes to create realistic voice imitations inspired him to introduce legislation allowing candidates to seek a court order declaring that manipulated content is a deepfake. Kolodin told CNN that such an order is “a powerful tool” that can help candidates build a counternarrative to deepfakes that can spread quickly online.
“I think what we have to understand is that there has been lies in politics for as long as there’s ever been politics, right? This is new technology, but it’s not a new issue,” he said.
Kolodin, who says that AI still has a place in politics, used ChatGPT to draft a portion of the bill that describes “digital impersonation.” Arizona Democratic Gov. Katie Hobbs signed his proposal into law in May, along with another AI bill that requires disclosures in campaign ads.
Big Tech has already taken some steps to moderate deepfake content. TikTok and Meta (the parent company of Instagram, Threads and Facebook) announced plans in recent months to label AI content, while YouTube requires creators to disclose when videos are AI-created.
Despite momentum in statehouses, “the story is not optimistic on the federal side,” according to Robert Weissman, president of Public Citizen, a group that has pushed for statewide action and tracked progress on legislation regulating deepfakes in elections.
While bills that would require deepfakes to be clearly labeled have been introduced in Congress, there are few signs that lawmakers will act on the issue before November. While Senate Majority Leader Chuck Schumer supports the legislation, Minority Leader Sen. Mitch McConnell has argued that the “well-developed legal regime” that exists to take down deceptive campaign ads can be “easily” applied to deepfakes. In the House, bipartisan legislation has stalled in committee.
Absent congressional action, the task falls to agencies like the Federal Election Commission and the Federal Communications Commission to try to regulate AI in campaign ads.
Spurred by fears that deepfakes could defraud voters, Public Citizen petitioned the FEC to act last year. The agency has yet to issue a rule to regulate the use of AI-generated deepfakes in election ads, and Weissman said his group is “not optimistic.”
In a statement to CNN, FEC Chairman Sean Cooksey said that he expects the agency’s rulemaking to finish later this year.
The FCC, for its part, voted unanimously earlier this year to outlaw the use of AI-generated voices in robocalls and said in late July it would move forward with a proposal to require AI disclosures in political TV and radio ads. It’s not immediately clear whether the agency will finalize the rules before the election. FCC Chair Jessica Rosenworcel intends to follow the regulatory process but “has been clear that the time to act is now,” Jonathan Uriarte, a spokesperson for Rosenworcel, told CNN.
Not all of the bills introduced at the state level made their way to governors’ desks, following fights over their scope and reach. According to Public Citizen, deepfake-related bills were introduced in more than 40 US states in 2024.
In Georgia, a battleground state that helped decide the fate of the 2020 presidential election, there’s no law preventing political ads featuring deepfakes from airing without a disclosure. A bill that drew bipartisan support in both chambers passed out of the House but was eventually tabled in the Senate.
State Rep. Dar’shun Kendrick, a Democrat who sits on the committee that helmed the bill, told CNN that she wished it had passed. Asked whether she’s concerned about a state like Georgia not having a law going into the election, she said that “there’s always going to be bad actors.”
“Hopefully, if we do see any, they will be quickly dispelled or taken down or there’ll be some corrective action,” Kendrick added.
For now, states are finding other ways to protect against harmful deepfakes. Arizona election workers are being trained to recognize deepfakes as part of election preparation, and in New Mexico, the secretary of state has launched a campaign to educate voters on how to spot them.
The state-led campaign can only bolster New Mexico’s disclosure law, said Alex Curtas, a spokesperson for New Mexico’s secretary of state.
“These things are going to have to work hand-in-hand,” he said.
CNN’s Oliver Darcy, Sean Lyngaas, Donie O’Sullivan and Yahya Abou-Ghazala contributed to this report.