A recent study from the Oxford Internet Institute has highlighted a troubling trend: deepfake image generators are now more accessible than ever. Researchers found nearly 35,000 such tools available for download on a popular online platform, signalling a marked shift in how easily deepfakes can be created.
Will Hawkins, a doctoral candidate who led the study, which will appear at the ACM Conference on Fairness, Accountability, and Transparency (FAccT), reports that these models have been downloaded around 15 million times since late 2022. Most strikingly, about 96% of them are geared towards generating images of women, from high-profile celebrities to everyday social media users in countries including China, Korea, Japan, the UK, and the US.
The technology behind these deepfakes is surprisingly straightforward. Many of the models employ a method known as Low-Rank Adaptation (LoRA), which requires as few as 20 images of a target, a typical consumer computer, and roughly 15 minutes of processing time. This ease of creation raises serious concerns: it simplifies the production of non-consensual intimate imagery (NCII), sidesteps standard platform safeguards, and, in many cases, breaks laws already in force in countries such as the UK.
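The low barrier follows directly from LoRA's design: rather than updating every weight in the base image model, it trains two small low-rank matrices per adapted layer and adds their product to the frozen weights. The minimal sketch below, with illustrative layer dimensions that are assumptions rather than figures from the study, shows how drastically this shrinks the trainable parameter count:

```python
# Minimal sketch of the LoRA idea: instead of updating a full weight
# matrix W (d_out x d_in), train two small low-rank factors A and B
# so the adapted weight is W + B @ A. Only A and B are trained.
# Dimensions are illustrative assumptions, not figures from the study.
import numpy as np

d_out, d_in = 4096, 4096   # a typical projection layer size (assumed)
rank = 8                   # LoRA rank; small ranks are common in practice

full_params = d_out * d_in
lora_params = rank * (d_out + d_in)
print(f"full fine-tune params per layer: {full_params:,}")   # 16,777,216
print(f"LoRA params per layer (r={rank}): {lora_params:,}")  # 65,536
print(f"reduction: {full_params / lora_params:.0f}x")        # 256x

# Applying the adapter at inference: y = x @ (W + B @ A).T
W = np.random.randn(d_out, d_in).astype(np.float32)   # frozen base weights
A = np.random.randn(rank, d_in).astype(np.float32) * 0.01
B = np.zeros((d_out, rank), dtype=np.float32)  # B starts at zero, so W is unchanged initially
x = np.random.randn(1, d_in).astype(np.float32)
y = x @ (W + B @ A).T
```

With tens of thousands of trainable parameters per layer instead of millions, fine-tuning fits comfortably in consumer hardware memory, which is what makes the short training times the study describes plausible.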
Hawkins stresses that this growing accessibility calls for stronger technical safeguards and better enforcement of platform policies. In the UK, for instance, sharing explicit deepfake images was criminalised under the Online Safety Act 2023, and proposals in the Crime and Policing Bill would extend the law to cover the creation of such images as well.
While the study focuses on models hosted on legitimate platforms, the simplicity and low cost of creating these tools point to a darker possibility: even more harmful content, from child sexual abuse material to other forms of abusive imagery, may be circulating under the radar. By shining a light on this trend, Hawkins and his team hope to spur tighter regulation and more robust technical countermeasures.
For a closer look at this issue, the paper, "Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators", will be available on arXiv from May 7 and will appear in the ACM FAccT conference proceedings in Athens, Greece.