Deepfake Abuse Escalates: AI-Powered Sexual Exploitation is Becoming More Realistic and Widespread

The proliferation of artificial intelligence has unleashed a disturbing trend: the rapid growth of explicit deepfake technology. Websites now offer tools that can generate realistic, nonconsensual sexual videos from a single photograph in seconds. These services, some operating with blatant disregard for consent, are making it easier than ever to create and distribute image-based sexual abuse, including child sexual abuse material (CSAM).

The Rise of ‘Nudify’ Ecosystems

For years, a hidden ecosystem of websites, bots, and apps has been growing, automating the creation of explicit deepfakes. These platforms often include graphic video templates, such as simulated sexual acts, and charge small fees for each generated clip. One service openly advertises the ability to transform any photo into a nude version using “advanced AI technology.” The unchecked availability of these tools is fueling a surge in digital sexual harassment.

Elon Musk’s chatbot, Grok, has been exploited to create thousands of nonconsensual “nudify” images, normalizing the process on a massive scale. Experts like Henry Ajder warn that the realism and functionality of deepfake technology are advancing rapidly. These services are likely generating millions of dollars annually while enabling a “societal scourge.”

Expansion and Consolidation

Over the past year, explicit deepfake services have introduced new features, including one-photo-to-video generation. A review of more than 50 deepfake websites reveals that nearly all now offer high-quality explicit video creation, listing dozens of sexual scenarios in which women can be depicted. Telegram channels and bots regularly release updates with new features, such as customizable sexual poses and positions.

The market is consolidating, with larger deepfake websites acquiring smaller competitors and offering APIs to facilitate the creation of more nonconsensual content. This infrastructure-as-a-service model allows the abuse to spread even further.

Accessibility and Open-Source Roots

What was once a technically complex process now requires minimal skill. The widespread availability of sophisticated, open-source AI models has made deepfake technology accessible to anyone with an internet connection. This ease of use is driving a surge in the creation and dissemination of nonconsensual intimate imagery (NCII).

The victims are overwhelmingly women, girls, and gender and sexual minorities. The harm caused by these images includes harassment, humiliation, and psychological trauma. Explicit deepfakes have been used to abuse politicians, celebrities, and ordinary individuals, including colleagues, friends, and classmates.

Slow Legal Response

Despite the growing problem, laws protecting people from deepfake abuse have been slow to materialize. The open-source nature of the technology makes enforcement difficult, while societal attitudes often minimize the violence against women that these tools enable.

The Role of Tech Companies

While platforms like Telegram have taken some action – removing more than 44 million pieces of policy-violating content last year – the problem persists. Researchers note that the ecosystem thrives on infrastructure provided by major technology companies.

As Pani Farvid, associate professor of applied psychology, observes, “We as a society globally do not take violence against women seriously, no matter what form it comes in.”

The increasing ease of use, the normalization of nonconsensual imagery, and the minimization of harm are creating a dangerous feedback loop. Perpetrators share deepfakes in private groups of dozens of members, often with little fear of consequences.

Ultimately, the unchecked growth of AI-powered sexual exploitation demands immediate attention. The current trajectory suggests that without effective regulation and societal change, this disturbing trend will only worsen.