Hong Kong authorities have launched a criminal investigation at the University of Hong Kong after a law student allegedly used AI to create over 700 explicit images of more than a dozen female students and staff. The images, discovered in February, were neatly organised in folders on the accused's laptop, and the revelation has sparked strong reactions from a community that feels let down by the initial response.
The case lays bare the challenges of regulating AI-generated deepfake pornography. While Hong Kong law punishes the nonconsensual publication of intimate images, it does not specifically address the creation of such content. By contrast, legislation in the United States, signed under President Donald Trump, bans the nonconsensual online distribution of deepfakes, and South Korea has gone further, criminalising both the possession and the viewing of this material.
Students and faculty members at the university have voiced serious concerns, particularly because the accused has been allowed to continue attending classes despite the gravity of the allegations. The university's response, a warning letter and a demand for a short, 60-word apology, has left many questioning whether stronger disciplinary action was necessary. Chief Executive John Lee has emphasised that educational institutions bear a responsibility to foster moral conduct and to address misconduct properly when it occurs.
Looking ahead, the university has pledged to review the incident and introduce further measures aimed at securing a respectful academic environment. The episode underscores the pressing need for clearer, more robust guidelines in the era of advanced generative AI.