Meta Platforms Inc. has filed a lawsuit against Hong Kong-based Joy Timeline HK Limited, the maker of CrushAI, alleging that the company ran more than 87,000 advertisements across Facebook, Instagram, Messenger, and Threads in violation of Meta's policies, promoting apps capable of generating non-consensual sexualized images with artificial intelligence.
The legal action, announced Thursday, follows an investigation by CBS News that uncovered hundreds of ads promoting AI deepfake apps still running on Meta's platforms, despite the company's prior efforts to remove them. Meta said it had deleted the offending accounts and blocked associated URLs but admitted its detection systems continue to struggle against the evolving tactics of such developers.
"This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it," Meta said. "We'll continue to take the necessary steps - which could include legal action - against those who abuse our platforms like this."
According to court documents filed in a Hong Kong district court, Joy Timeline allegedly operated through a network of 170 business accounts and more than 135 Facebook pages to buy and run ads promoting its CrushAI apps. Meta stated that more than 55 individuals managed the network, targeting users primarily in the U.S., U.K., Canada, Australia, and Germany.
The lawsuit cited ads that included sexualized or AI-generated nude images with captions like "upload a photo to strip for a minute" and "erase any clothes on girls." Meta noted it had incurred at least $289,000 in investigative and enforcement costs.
Meta's platforms have long prohibited ads containing nudity, adult content, or material that promotes sexual exploitation. But recent media reports - including from 404 Media, Faked Up, and CBS - have documented repeated instances of such content slipping through. In one case, CBS found explicit celebrity deepfakes being promoted on Instagram even after flagged apps had been removed.
"This is an adversarial space in which the people behind it - who are primarily financially motivated - continue to evolve their tactics to avoid detection," Meta stated. "Some use benign imagery in their ads to avoid being caught by our nudity detection technology, while others quickly create new domain names to replace the websites we block."
Meta said it is deploying new AI-based tools trained on terms, phrases, and emoji patterns typically used in nudify ads. It has also partnered with the Tech Coalition's Lantern program to share threat intelligence with other tech firms in efforts to curb the spread of such content.
The lawsuit against Joy Timeline marks Meta's latest escalation in its efforts to address the proliferation of explicit deepfake technologies, which have drawn bipartisan condemnation and spurred new legislation. The recently passed Take It Down Act criminalizes the distribution of non-consensual explicit imagery and requires platforms to remove such content swiftly.
Meta told CBS it had removed hundreds of the flagged ads and was committed to keeping such content off its platforms.