Australia’s eSafety Commissioner has confirmed a consistent pattern of deepfake image-based abuse in schools, with formal reports now arriving at least once a week. The figures point to the tangible impact of generative AI technologies on student safety in the Australian education system. The reports concern the non-consensual creation and sharing of digitally altered images of students.
According to the released figures, complaints about this form of abuse have doubled across Australia. The rapid escalation marks its shift from an emerging issue to a regularly occurring form of harm affecting young people.
Frequency and Nature of the Incidents
The eSafety Commissioner’s office is now handling at least one new case of deepfake image abuse from a school environment every week. These incidents involve the use of artificial intelligence to create convincing but fake images, often placing a student’s likeness in compromising or abusive scenarios. The sustained frequency of these reports shows that this is an established, ongoing challenge for schools and safety authorities.
Official Data on a National Scale
The eSafety Commissioner’s data provides an official measure of the issue’s scale nationwide. The documented doubling in cases reflects a widespread trend rather than isolated incidents. By tracking these reports, the Commissioner’s office can quantify the growth of digitally created, non-consensual images used for harassment, bullying and abuse among students. The findings are based on formal complaints lodged with the federal government’s online safety regulator.
Source: https://www.abc.net.au/news/2025-10-17/deepfake-image-based-abuse-doubles-across-australia/105905152