Meta Struggles to Control the Proliferation of Sexualized AI Deepfake Celebrity Images on Facebook

Meta has removed more than a dozen fraudulent, sexualized images featuring well-known female actors and athletes following a CBS News investigation that exposed a significant presence of AI-manipulated deepfake images on the company’s Facebook platform.

A multitude of fabricated, heavily sexualized images of actors such as Miranda Cosgrove, Jennette McCurdy, Ariana Grande, and Scarlett Johansson, as well as former tennis player Maria Sharapova, have been widely disseminated by various Facebook accounts, accumulating hundreds of thousands of likes and shares across the platform.

“We have taken down these images for breaching our policies and will persist in monitoring for further violating posts. This is a challenge facing the entire industry, and we are consistently enhancing our detection and enforcement technology,” stated Meta spokesperson Erin Logan in a message sent to CBS News on Friday.

An evaluation of more than a dozen of these images by Reality Defender, a platform focused on identifying AI-generated media, revealed that many were deepfakes in which AI-generated, scantily clad bodies had replaced the actual bodies of celebrities in otherwise authentic photographs. A few of the images were likely created with image-stitching tools that do not use AI, according to Reality Defender's findings.

“Almost all deepfake pornography is disseminated without the consent of the subject being deepfaked,” Ben Colman, co-founder and CEO of Reality Defender, informed CBS News on Sunday. “Such content is proliferating at an alarming pace, especially since existing measures to curb it are rarely enforced.”

CBS News has requested comment from Miranda Cosgrove, Jennette McCurdy, Ariana Grande, and Maria Sharapova regarding this matter. A representative for Johansson said the actress declined to comment.

[Video: Expert shows how to spot a deepfake created with AI, 02:39]

Under Meta’s Bullying and Harassment policy, the company restricts “derogatory sexualized photoshop or drawings” on its platforms. It also prohibits adult nudity, sexual activity, and adult sexual exploitation, and its rules are designed to prevent users from sharing or threatening to share non-consensual intimate imagery. Additionally, Meta has begun applying “AI info” labels to clearly mark AI-manipulated content.

However, concerns persist about the effectiveness of the tech company’s oversight of such content. CBS News found numerous AI-generated, sexualized images of Cosgrove and McCurdy that remained publicly accessible on Facebook even after Meta had been alerted that such content was being widely shared on its platform in violation of the company’s terms.

One particular deepfake image of Cosgrove, which was still active over the weekend, had been distributed by an account boasting 2.8 million followers.

The two actresses, both former child stars of the Nickelodeon series iCarly, which is owned by CBS News’ parent company Paramount Global, were the public figures most frequently targeted with deepfake content in CBS News’ analysis.

Meta’s Oversight Board, a quasi-independent body of experts in human rights and freedom of speech that makes content moderation recommendations for Meta’s platforms, told CBS News in an email that the company’s existing guidelines on sexualized deepfake content are inadequate.

The Oversight Board referenced recommendations made to Meta over the past year, advocating for clearer rules by amending its ban on “derogatory sexualized photoshop” to explicitly include the term “non-consensual” and cover other photo manipulation methods, including AI.

The board has also suggested that Meta integrate its ban on “derogatory sexualized photoshop” with the company’s Adult Sexual Exploitation regulations, ensuring more rigorous moderation of such content.

When asked by CBS News about the board’s recommendations, Meta pointed to the guidelines on its transparency website, indicating that it has so far declined to implement the suggestions. Meta said in its statement, however, that it is still exploring ways to signal a lack of consent in AI-generated images, and is considering reforms to its Adult Sexual Exploitation policies to “capture the spirit” of the board’s recommendations.

“The Oversight Board has clearly stated that non-consensual deepfake intimate images represent a severe infringement on privacy and personal dignity, disproportionately affecting women and girls. These images are not merely a misuse of technology—they constitute a form of abuse that can have enduring repercussions,” said Michael McConnell, co-chair of the Oversight Board, to CBS News on Friday.

“The Board is actively overseeing Meta’s response and will continue to advocate for stronger safeguards, quicker enforcement, and increased accountability,” McConnell added.

Meta is not the sole social media platform grappling with the issue of extensive, sexualized deepfake content.

Last year, Elon Musk’s platform X temporarily blocked searches for Taylor Swift after AI-generated fake pornographic images resembling the singer circulated widely on the platform, garnering millions of views and impressions.

“Posting Non-Consensual Nudity (NCN) images is strictly forbidden on X, and we maintain a zero-tolerance policy concerning such content,” stated the platform’s safety team at the time.

A study released earlier this month by the U.K. government indicated that the prevalence of deepfake images across social media platforms is increasing rapidly, with forecasts suggesting that 8 million deepfakes would be shared this year, a significant rise from 500,000 in 2023.