Weaponized deepfakes

For years, experts have warned that deepfakes (AI-generated videos, images, or audio recordings that show people doing or saying things they never actually did or said in real life) could be deployed in malicious ways.

These dangers are now here. Improvements in deepfake technology, and the widespread availability of easy-to-use and cheap (or free) generative models, have made it easier than ever for anyone to fake reality in a way that’s increasingly difficult to spot.

We’re not just talking about AI slop, the often obviously fake content that has taken over the internet. Rather, weaponized deepfakes, from sexually explicit images to scam posts to political propaganda, can look startlingly real. Around the world, they have already been used to incite violence, to try to change minds (and maybe even votes), and generally to sow mistrust.

That’s why experts worry that weaponized deepfakes will further crater critical thinking skills, as well as our trust in institutions and in each other. This has dire effects for society and governance, and of course for the people targeted. As with many other examples of technology’s harms, the human impacts weigh disproportionately on women and marginalized groups: though the technology has evolved in the past few years, a 2023 study found that 98% of deepfakes were pornographic in nature, and 99% of those depicted women.

Just take Grok, the AI chatbot from Elon Musk’s xAI. Since Musk launched its “edit image” function late last year, users have created millions of sexualized images, many depicting women and children; one report estimated that 81% of these Grok-produced images depicted women. Despite widespread criticism, xAI’s initial response was to limit the feature to paying subscribers; it has since blocked the nudity feature in jurisdictions where it is illegal.

There’s also been an explosion of political deepfakes. The Trump administration, for example, has regularly produced and shared AI-generated images and videos. Not all of them are meant to look real, but some appear designed to sway public opinion or even to humiliate the people depicted.

In January, meanwhile, Texas Attorney General Ken Paxton shared a video that appeared to show his opponent in the Republican primary for a US Senate seat, Senator John Cornyn, dancing with Representative Jasmine Crockett, a contender for the Democratic nomination. But the encounter never happened, a fact the ad did not clearly disclose.

Suggested solutions include instituting new technical safeguards and detection methods at the big AI firms, encouraging users to take more protective actions, and crafting new legislation or applying existing regulatory frameworks, like copyright law, to the issue. 

But these all have limits. Technical solutions can be bypassed; for instance, bad actors can simply switch to open-source models built without safeguards. Getting people to change how they behave, such as by watermarking photos or posting less personal information online, is simply unrealistic. Stronger regulations require enforcement—and while President Trump has signed legislation that criminalizes deepfake porn, his administration continues to post other types of harmful deepfakes. In late January, for instance, the White House shared an altered image of a Minneapolis civil rights lawyer, darkening her skin and changing her facial expression from one of calm to exaggerated crying.

The problem could get much worse, and soon. There are high-stakes midterm elections in the United States later this year, and the federal agencies that have traditionally safeguarded election-related information integrity have been weakened. So have many of the outside research groups dedicated to fact-checking and fighting election-related disinformation.