The issue of deepfake nudes in schools has been percolating for a while, but an investigation at Wired has some bad news: The problem is "much worse than you thought," the headline on the piece by Matt Burgess declares—and it's likely to intensify even more because of ever-improving and easily accessible tech. For the investigation, Burgess worked with digital-deception outlet Indicator, and they tracked a surge in AI "nudify" tools being used mostly by boys to generate fake sexual images of classmates in a few clicks. The images look real enough to humiliate—and they spread fast. The investigation's review of publicly reported cases found incidents at roughly 90 schools in at least 28 countries since 2023, with more than 600 identified victims. But keep in mind, these were only the reported cases.
"I think you'd be hard-pressed to find a school that has not been affected by this," Lloyd Richardson of the Canadian Centre for Child Protection tells Wired. What makes this different from earlier forms of harassment is the speed and accessibility—it's not the work of hackers or niche forums but classmates with apps anyone can download. Essentially, the tech has outpaced the ability of schools and law enforcement to respond. The story details how current responses vary wildly: Some students face felony charges of handling child sexual abuse material, while others get suspensions or no punishment at all. Victims, meanwhile, describe shame, anxiety, and fear that the images will follow them indefinitely. Read the full story, which notes that some schools are allowing students to opt out of yearbook photos to prevent the images from being co-opted.