This controversy, and the inaction of the Federal Election Commission and Congress, could mean that voters have limited federal protections against those who use AI to mislead the public or hide political messages in the final stages of a campaign. Emerging generative AI technology has already proven capable of creating eerily realistic images.
“AI could have a significant impact on our elections, and right now there is no regulation on this issue whatsoever,” said Ellen Weintraub, Democratic vice chair of the Federal Election Commission.
More than a dozen states have enacted laws regulating the use of AI in election campaigns, but Congress has yet to step in despite widespread concern about the technology's potential to disrupt federal elections.
Adav Noti, executive director of the Campaign Legal Center and a former deputy general counsel at the Federal Election Commission, said the bureaucratic quagmire makes it “highly unlikely” that federal restrictions on campaigns' use of AI will be in place before the November presidential election.
“The cavalry is not coming,” he said.
AI deepfakes have already targeted government officials and politicians this year. Democratic political consultant Steve Kramer was indicted last month for impersonating President Biden in an AI-generated robocall telling New Hampshire residents not to vote early. Soon after, the FCC banned AI-generated voice imitations in robocalls. And last week, a deepfake video surfaced purporting to show State Department spokesman Matthew Miller suggesting the Russian city of Belgorod was a potential target for Ukrainian strikes using U.S.-supplied weapons.
A major AI controversy on the campaign trail could pose a headache for the Biden administration, which has made responding quickly to AI a centerpiece of its policy agenda. Biden issued an executive order in October directing federal agencies to swiftly develop rules governing AI technology.
FCC Chairwoman Jessica Rosenworcel (Democrat) announced plans last month to consider rules that would require political advertisers to provide on-air or written disclosures when featuring “AI-generated content.”
But that plan hit a snag this week when the chairman of the Federal Election Commission and a fellow FCC commissioner, both Republicans, accused the FCC's Democratic leadership of overstepping its authority.
In a letter to Rosenworcel, FEC Chairman Sean Cooksey wrote that the proposal encroaches on his commission's role as the chief enforcer of federal election law, would create “irreconcilable conflicts” with potential FEC rules and could open the door to litigation.
The FCC's proposal has not yet been made public, but Rosenworcel has said it would not ban the use of AI. Instead, it would “make clear that consumers have a right to know whether AI tools are being used in the political ads they see.”
In an interview, Cooksey argued that introducing disclosure requirements so close to an election could cause public confusion about standards and do more harm than good.
“It will disrupt political campaigns and undermine the upcoming elections,” he said.
Republicans in Congress and at the FCC balked at Rosenworcel's plan, with Rep. Cathy McMorris Rodgers (R-Wash.), chair of the House Energy and Commerce Committee, saying in a statement that the agency “does not have the expertise or authority to regulate political campaigns or AI.”
FCC Commissioner Brendan Carr (R) argued that because the rule would apply only to television and radio political ads, not to online streaming platforms such as YouTube TV or Hulu, the sudden appearance of AI disclosures in some places and not others “will ultimately be very confusing for consumers.” Carr echoed Cooksey and called for the FCC to shelve the issue until after the election, if not indefinitely.
“First of all, the FCC should not introduce major changes to regulating political speech on the eve of a national election,” Carr said.
In a statement, Rosenworcel said the FCC has required sponsorship disclosure for election ads for decades, and adapting such rules as new technologies emerge is not new.
“The time to publicly disclose and take action on our use of AI is now,” she said. “While this technology has benefits, we also know it can be used to mislead the public with voices and images that impersonate people without their permission, leading to voter misinformation.”
Democrats' 3-2 majority on the FCC allows them to override Carr's objections and move forward with the plan before the election, but the threat of legal challenges could stall it.
Without laws dictating how AI should be regulated, federal agency actions “will almost certainly be challenged in court in some form,” Noti said.
Multiple federal efforts aimed at curbing AI's influence on the 2024 presidential election face an uncertain fate in Washington, even as officials in both parties have warned about the technology's potential to wreak havoc on the electoral process.
The FEC is considering a petition of its own that would explicitly ban candidates from using AI to intentionally misrepresent their opponents in political ads. But FEC officials from both parties have expressed skepticism that the agency has the authority to weigh in, urging Congress to write new rules instead.
Unlike the FCC, the FEC is evenly split between the two major parties, with a rotating chairmanship, and the agency has often found itself gridlocked as election policy has become increasingly polarized.
In Congress, senators have introduced a bill that would require AI-generated political ads to include disclaimers. But unless top congressional leaders take up the issue, the window for Congress to act before Election Day is rapidly shrinking.
“It's good that federal agencies are considering how AI could subvert campaigns and elections, but we can't wait to put in place comprehensive guardrails to address these threats head-on,” said Sen. Amy Klobuchar (D-Minn.), who is leading the legislative effort.