
On social media, it takes only minutes for a convincing clip to go viral, and that includes fake ones. In early 2024, a fabricated AI-generated audio message mimicking President Biden urged voters to stay home ahead of a key primary. The clip, widely shared before being debunked, raised alarms about the growing threat of deepfake technology in elections. Now, lawmakers are asking: Can we stop these AI forgeries before they undermine democracy again?
A new Senate bill aims to do just that by outlawing the distribution of political deepfakes within 60 days of an election. But enforcing such a policy presents a complex new challenge at the intersection of ethics, policy, and technological innovation.
What Are Deepfakes—And Why Are They Dangerous?
Deepfakes use artificial intelligence to manipulate audio, video, or images, creating highly realistic media that can make it seem like someone said or did something they never did. While some applications are harmless—think novelty celebrity impressions or digital art—deepfakes have also been weaponized.
In electoral contexts, these synthetic media can be used to spread misinformation. For example, fake videos could depict a candidate making inflammatory remarks or appearing to support a controversial position. And with tools like generative AI now easily accessible via open-source platforms, malicious actors no longer need Hollywood budgets to pull it off.
That’s why bipartisan concern is growing in Congress. The proposed Senate bill would prohibit the use of AI-generated deceptive media in political campaigns during the final two months before an election, the window when such content can most directly sway voters. Supporters argue this is a critical period in which the public must be able to trust what it sees and hears.
Global Pressure—and Local Momentum
The U.S. isn’t alone in trying to stem the tide. The European Union now requires clear labeling on AI-generated media to help the public identify synthetic content, particularly in civic settings (Responsible AI). Meanwhile, Japan has introduced punishments for deepfakes that damage a person’s reputation, and Brazil has criminalized deepfake use during elections altogether.
Within the United States, state governments are also stepping in. California’s governor recently signed legislation requiring watermarking of AI-generated images and videos, particularly sensitive content such as non-consensual pornographic deepfakes (Governor Newsom Press Office). South Dakota, among others, is actively pushing for legal definitions of AI and deepfakes as part of broader regulatory frameworks (SDPB).
According to experts, while a national policy is needed, harmonizing state and federal efforts won’t be easy. Some states already have deepfake laws in place for election-related content (Thomson Reuters), while others are just beginning to explore their options.
A Ripple Effect Across the AI Ecosystem
Interestingly, the implications of this legislation extend beyond politics. Requiring labeling and disclosures for AI-generated content could spur innovation in AI-detection tools—benefiting industries like cybersecurity, journalism, and digital forensics.
AI developers may soon be encouraged, or even required, to embed watermarks or provenance metadata in synthetic media. These technical safeguards could evolve into standard practice, making it easier to trace digital creations back to their sources.
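To make the metadata idea concrete, here is a minimal sketch of what a disclosure record attached to a generated image might look like. It uses Pillow’s PNG text chunks; the field names, file paths, and generator label are hypothetical, and this is an illustration of the general concept rather than an implementation of any particular standard such as C2PA.

```python
# Minimal sketch: attaching an "AI-generated" disclosure to a PNG image.
# Illustrative only; field names and paths are hypothetical, not a standard.
import datetime
import hashlib
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_synthetic(in_path: str, out_path: str, generator: str) -> None:
    """Embed a simple provenance record in a PNG's text metadata."""
    img = Image.open(in_path)

    # Hash the pixel data so later edits to the image can be detected.
    digest = hashlib.sha256(img.tobytes()).hexdigest()

    provenance = {
        "synthetic": True,                      # disclosure flag
        "generator": generator,                 # tool that produced the image
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "pixel_sha256": digest,
    }

    meta = PngInfo()
    meta.add_text("ai_provenance", json.dumps(provenance))
    img.save(out_path, pnginfo=meta)


# Hypothetical usage:
# tag_as_synthetic("campaign_ad.png", "campaign_ad_labeled.png", "example-model-v1")
```

A caveat worth noting: metadata like this is trivial to strip, which is why researchers also pursue watermarks embedded in the pixels or audio samples themselves. The sketch simply illustrates the kind of disclosure that labeling requirements point toward.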
What’s more, stricter scrutiny around AI tools before elections may prompt political campaigns to rethink how they use emerging technologies. With the threat of penalties looming, campaigns might turn away from manipulative AI tactics altogether and focus on transparency. That shift could rebuild public trust in both AI and the electoral process.
And the tech world is paying close attention.
“The growing public mistrust of AI—and the threat it poses to democracy—demands a proactive but thoughtful approach to regulation,” says data policy group Plural Policy.
The challenge, as many experts note, is finding the line between necessary regulation and stifling innovation.
Finding the Balance: Policy vs. Progress
Of course, regulating AI-generated content is only half the battle. Enforcement will be key. How will social media platforms identify and remove outlawed deepfakes in real time? What happens when a deepfake is created abroad but impacts a U.S. election? Those questions, among others, remain unanswered.
Public interest groups like Public Citizen are calling for clearer standards, better coordination with tech companies, and faster response mechanisms from electoral commissions. Their testimony in support of the Senate bill pointed to the urgent need for “democratic resilience in the digital age.”
As the 2024 election approaches, the clock is ticking—not just to pass legislation, but to ensure it’s effective.
One thing is clear: deepfakes are here to stay. But if lawmakers, developers, and watchdogs can align, there’s hope these forgeries won’t hijack the democratic process. Our challenge now isn’t just to stop the next viral lie; it’s to rebuild trust in what’s real.
Conclusion
If a single synthetic clip can sway an election, what does that say about the strength, or the fragility, of our democratic perception in the age of AI? We’ve entered an era in which truth can be fabricated with frightening precision, raising a deeper question: should our focus be on policing the tools, or on redefining the trust we place in what we see and hear?
Deepfakes might be the symptom, not the sickness—exposing how vulnerable democratic systems are to manipulation when public skepticism is already high and facts are up for grabs.
As lawmakers scramble to draw hard lines around fast-moving technology, they may be missing the bigger picture: the next wave of disinformation may not look synthetic at all. Can we future-proof our elections if reality itself becomes negotiable? The fight against deepfakes is about more than AI; it’s about whether democracy can adapt quickly enough to survive its own digital reflection.