In March, Republicans released a video of James Talarico, the Democratic candidate for a U.S. Senate seat in Texas, reading out his old tweets.
The catch? It wasn’t actually Talarico speaking. The National Republican Senatorial Committee released an artificially generated video of Talarico standing in front of an American flag, reading five-year-old tweets in what sounds like his voice, just months ahead of what’s shaping up to be a close Senate race.
Welcome to politics in the age of AI.
Scam calls and laughably fake videos and pictures circulated in the last election cycle, but artificial intelligence has improved significantly since 2024, and experts say it now has the potential to cause real harm in this fall’s midterms.

“It is easier now than it was last year and two years ago to produce, disseminate false information, particularly using generative AI,” said Thessalia Merivaki, associate professor at Georgetown University and an expert on elections and democracy. “The concern is that there’s a high possibility of those tools to be deployed to confuse voters, particularly closer to the election.”
More than half of Americans said they were not confident in their ability to detect AI-generated content in a September 2025 Pew Research Center study. Deepfakes, videos or images of a person manipulated by AI to depict something that never happened, are a fast-growing driver of misinformation.
Loreben Tuquero, who covers AI and misinformation at PolitiFact, said AI was not the main source of misinformation in 2024. Back then, most of the misinformation came from text-based claims, as AI-generated content generally didn’t look real enough at the time, Tuquero said.
But this year, the Republican U.S. Senate campaign of Rep. Mike Collins produced an AI-generated fake video that showed Democratic Sen. Jon Ossoff of Georgia supposedly pledging his loyalty to Senate Minority Leader Chuck Schumer. And last month, a group opposed to Virginia’s referendum on congressional reapportionment released an ad that features an AI-generated woman who looks like Gov. Abigail Spanberger – who backed the referendum – burning down a barn.
As the technology improves, AI has the potential to cause more harm, Tuquero said.
“There’s … instances of AI that could lead to voters believing that something is true. On its face, they don’t seem to be fake,” Tuquero said. “President Trump himself uses AI a lot or shares AI-generated content on his Truth Social account.”
Ahead of the 2024 election, 82% of Americans said they were at least somewhat concerned that artificial intelligence would be used to create and distribute fake information about presidential candidates, according to a Pew Research Center study.
Merivaki has been researching how information flows in digital spaces, especially around election cycles. She said social media platforms prioritize low-quality information and content that produces strong emotions to encourage engagement.
This can create an “uneven playing field” between authoritative sources and unreliable ones that aim to infiltrate information networks, she said.
Merivaki said it’s easier and cheaper than ever for people to create videos or images using generative AI. If used in certain ways, artificially generated content about candidates could confuse or dissuade voters, especially if deployed close to elections.
“Deepfakes and other gen AI did not play a major role in shaping voter attitudes or manipulating tremendously the election environment,” in the past, Merivaki said. “There was a lot of deepfakes, and they were easy to detect. This is not the case anymore. Generative AI is very convincing.”
Sarah Oates studies political communication and democratization at the University of Maryland’s Philip Merrill College of Journalism, home to Capital News Service and the Local News Network, which produced this story. Oates said that as we move into the AI age, voters and media consumers will be faced with a choice: to become either a cyborg or an android.
“Are you going to take all these affordances that communication technology gives you and … use that in a rational and human way to try to build a better world?” – in other words, be a cyborg, a being with both living and mechanical body parts. “Or are you just going to become a cog in a machine in which you have no agency?” – like an android, or robot.
Oates, associate dean of research at the Philip Merrill College of Journalism, said being a cyborg in today’s information landscape is difficult. But Americans have “a lot of choices” when it comes to avoiding misinformation online, she said, such as choosing which news outlets they consume and which social media apps they are on.
Tuquero has several tips for avoiding deepfakes and other AI-generated content online.
AI-generated videos are generally short, around eight seconds long, so the length can be a key indicator that something is fake.
AI can also make certain small things, like teeth or hands, look strange and non-human, so Tuquero suggests looking at the details of videos rather than just watching them on a surface level.
Above all, Tuquero suggests scrolling with skepticism and digging a little deeper when something looks off, seems untrue or elicits a strong emotion.
“It’s helpful to know what it (generative AI) can and can’t do. At the moment, it’s advancing so fast,” Tuquero said. “Just be aware of the possibility that something you’re seeing, that’s something you’re watching, and that you like might be fake, and just develop steps for yourself to check if something is real.”
Tuquero said it’s also important to be wary of claims from public officials that videos or photos are AI-generated when they might actually be real. It’s called the “liar’s dividend,” when public figures claim that a real piece of media is artificial to avoid taking accountability.
AI detectors can also be useful, but they aren’t 100% accurate. According to an August study conducted by Gökberk Erol and his colleagues, AI detectors show moderate to high success rates, but false positives still happen. None of the detectors they tested were 100% reliable.
Matthew Wright, the principal investigator of the DeFake Project, said state-of-the-art commercial detection tools right now are just “so-so.”
The DeFake Project at the Rochester Institute of Technology is working to improve AI detection by developing an all-in-one tool that would let journalists and other professionals easily detect AI-generated content across media types, so they can disseminate correct information to the public.
“It’s not about necessarily changing anybody’s mind, it’s just trying to get the real information out there,” said Wright, a professor and the chair of cybersecurity at RIT.
Many lawmakers across the country are also working to help curb the effects of deepfakes on elections.
In Maryland, Del. Jessica Feldmark (D-Howard) sponsored legislation in the House of Delegates that would authorize the state administrator of elections to take action against election misinformation online, including deepfakes.
“Election misinformation and disinformation are among the most serious threats to our democracy,” Feldmark said during a February committee hearing. “They suppress voter participation, create confusion and undermine trust and confidence in our elections. And with the use of generative artificial intelligence, the potential threat posed by election misinformation and disinformation, deepfakes, is growing exponentially.”
The General Assembly passed a version of Feldmark’s legislation, which is now awaiting Gov. Wes Moore’s signature.
Twenty-nine states now have laws regulating deepfakes in political messaging, according to the National Conference of State Legislatures. Most of the laws require disclosures when media, such as political ads, contain AI-derived imagery. But most do not outright ban the use of deepfakes.
As state lawmakers begin to contend with the onslaught of AI-driven fakery, Wright said when in doubt, the public should turn to trusted, reputable news organizations.
“If… somebody posted it, and it’s just the video raw or from a source that you’ve never heard of and you’re not sure about, then you probably shouldn’t trust it,” Wright said. “But if it comes from a reliable source, I think you can feel confident that the journalists are doing their best.”
