Students at a middle school in Beverly Hills, California, used artificial intelligence technology to create fake nude photos of their classmates, according to school administrators. Now, the community is grappling with the fallout.
School officials at Beverly Vista Middle School were made aware of the “AI-generated nude photos” of students last week, the district superintendent said in a letter to parents. The superintendent told NBC News the photos included students’ faces superimposed onto nude bodies. The district did not share how it determined the photos were produced with artificial intelligence.
“It’s very scary, because people can’t feel safe to, you know, come to school,” a student at Beverly Vista Middle School who did not want to be identified told NBC Los Angeles. “They’re scared that people will show off, like, explicit photos.”
Beverly Hills Police Lt. Andrew Myers told NBC News that police responded to a call from the Beverly Hills Unified School District late last week and took a report about the incident. A noncriminal investigation is underway, Myers said, and because it involves juveniles, no further information can be shared.
The Beverly Hills case follows a series of similar incidents at high schools around the world in which students created and shared AI-generated nude photos of their female classmates. In January, a teenage victim from New Jersey spoke in front of federal lawmakers in Washington, D.C., to advocate for a federal law criminalizing all nonconsensual sexually explicit deepfakes. No such federal law exists.
In a letter to parents obtained by NBC News, Beverly Hills Unified School District Superintendent Michael Bregy characterized the deepfake incident as part of “a disturbing and unethical use of AI plaguing the nation.”
“We strongly urge Congress as well as federal and state governments to take immediate and decisive action to protect our children from the potential dangers of unregulated AI technology,” Bregy wrote. “We call for the passing of legislation and enforcement of laws that not only punish perpetrators to deter future acts but also strictly regulate evolving AI technology to prevent misuse.”
Bregy told NBC News that the district would discipline the students responsible in accordance with its policies. For now, he said, those students have been removed from the school pending the results of the district’s investigation. After that, punishments will range from suspension to expulsion, depending on each student’s level of involvement in creating and disseminating the images. Outside the district, however, the path to recourse for student victims is less clear.
In 2020, California passed a law that allows victims of nonconsensual sexually explicit deepfakes to sue the people who created and distributed the material. A plaintiff can recover up to $150,000 in damages if the perpetrator is found to have committed the act with malice. It’s not clear whether damages have ever been awarded under the law.
Mary Anne Franks, president of the Cyber Civil Rights Initiative and a professor at George Washington University Law School, said that, based on the information available about the incident, California’s laws still do not clearly prohibit what happened at Beverly Vista Middle School. Not all nude depictions of children are legally considered pornographic, so without more detail about what the photos depict, their legal status is uncertain.
“The civil action in California could potentially apply here, but it’s always difficult for victims to identify who the perpetrators are, get the legal assistance they need and then actually pursue the case,” Franks said.
“It’s hard to think about what justice would be for the students,” she continued. “The problem with image-based abuse is once the material is created and out there, even if you punish the people who created them, these images could be circulating forever.”
The technology to create fake nude images has rapidly become more sophisticated and accessible over the past several years, and high-profile incidents of celebrity deepfakes, like the ones of Taylor Swift that went viral in January, have brought even more attention to consumer apps that let users swap victims’ faces into pornographic content and “undress” photos.
In deepfake sexual abuse cases involving underage perpetrators and victims, existing laws have not always been enforced.
The digital news outlet 404 Media investigated a case last year at a high school in Washington state, where police documentation revealed that administrators did not report students who had made AI-generated nude photos of classmates using their Instagram pictures. The police report described the incident as a possible sex crime against children, but administrators tried to handle the situation internally until multiple parents filed reports with police. After an investigation, a prosecutor declined to press charges.
“My hope is that legislators start realizing that while civil penalties may be useful for certain victims, it’s only going to be a partial solution,” Franks said. “What we should be focusing on are deterrents. This is unacceptable behavior and should be punished accordingly.”