AI has transformed the spread of misinformation, providing visualizations that lend seeming credence to false narratives.
By: Sarah Ahmed
In his seminal 1922 book, “Public Opinion,” American journalist Walter Lippmann asserts that the world is far too complex for citizens to possess direct knowledge of every event. Instead, we rely on mass media coverage to construct our own distinct mental image of reality. Lippmann argues that this “pseudo-environment” is shaped by emotionally charged, often stereotyped pictures of facts, which ultimately form the basis of public opinion.
While Lippmann stated that “the only feeling that anyone can have about an event he does not experience is the feeling aroused by his mental image of that event,” he could not have predicted that nearly a century later, people would no longer merely imagine the narratives perpetuated by mass media. They now have the capacity to generate AI-produced videos that give tangible, visual form to the pseudo-environment.
If misinformation threatens democracy, its lifelike visualization only intensifies the danger. As hyperreal images and videos circulate faster than they can be verified, political discourse will increasingly be shaped by emotionally charged fabrications rather than verifiable truth.
Using Google’s Veo 3 and OpenAI’s Sora, users can accelerate social division one inflammatory video at a time. Despite each company’s policies prohibiting “targeted political persuasion” or “misinformation,” there is overwhelming evidence that these videos are being weaponized to propagate racist caricatures of minorities and reinforce harmful stereotypes about Black people.
When the Trump administration withheld funding for Supplemental Nutrition Assistance Program (SNAP) benefits during the late-2025 government shutdown, social media platforms were flooded with AI-generated videos reinforcing the racist Reagan-era stereotype of the “welfare queen.” Depictions included Black women shoplifting from grocery stores, screaming about needing benefits for their seven children with different fathers, or bragging about splurging with government aid.
Fox News initially reported on these videos as if they were real, claiming that some beneficiaries were “threatening to ransack stores.” Conservative Newsmax anchor Rob Schmitt claimed people were using SNAP “to get their nails done, to get their weaves and hair.” Political commentators like Amir Odom echoed longstanding myths that fraudulent welfare recipients are exploiting the system.
The speed with which media figures weaponized these AI-generated videos to spread hateful rhetoric and advance political narratives, before confirming their authenticity, reflects a deeper crisis within modern digital culture.
Social media platforms, long criticized for amplifying inflammatory content, prioritize engagement above any moral obligation to promote societal cohesion or prevent stochastic terrorism. To keep users on the platform longer and drive engagement, algorithms trap users in feedback loops that mainly show content that already aligns with their ideological beliefs.
These echo chambers are digitally manufactured pseudo-environments so immersive they would make Lippmann roll over in his grave. With little exposure to opposing perspectives, many users accept information as immediately credible without feeling the need to verify it.
This raises the question: Would it matter if viewers were told the videos are fake? I would argue not. Lippmann warned that stereotypes are formed through “the casual fact, the creative imagination, the will to believe,” producing a counterfeit reality that provokes a “violent, instinctive response.”
Once an image aligns with what we are already primed to believe, its authenticity becomes irrelevant. It succeeds in invoking an emotional reaction precisely because it offers a visual glimpse into a reality people already feel is true.
Each of our perceptions of reality is distinct, shaped by the biased filtering of information in the mediasphere: stereotypes that divide people into in-groups and out-groups.
A healthy democracy makes room for good-faith disagreement over policy, but it relies on the assumption that citizens share a core set of values and are willing to compromise for the common good. Yet political tribalism offers belonging only by casting opponents as existential threats, through narratives intentionally crafted to provoke emotion rather than deliberation.
As a result, American society has never been more polarized. A 2022 Pew Research Center poll revealed a drastic increase in partisan hostility, with 72% of Republicans and 63% of Democrats viewing the opposing party as more immoral than other Americans, compared to 47% and 35% in 2016. Animosity is fueled by the perception that the other side is morally bankrupt. A person who views SNAP as a lifeline for struggling families to feed hungry children will fundamentally clash with someone who believes most recipients are actually financially stable and bleeding taxpayers dry.
AI-generated videos designed to validate preexisting resentments and deepen culture-war divides render meaningful debate nearly impossible, as citizens now operate from entirely different perceptions of reality.
Society has become a spectacle you can materialize at the tap of your finger. And spectacle is far easier to manipulate than truth.
