Downranking won’t stop Google’s deepfake porn problem, victims say

After backlash over Google’s search engine becoming the primary traffic source for deepfake porn websites, Google has started burying these links in search results, Bloomberg reported.

Over the past year, Google has driven millions of visitors to controversial sites distributing AI-generated pornography that depicts real people in fake sex videos created without their consent, Similarweb found. While anyone can be targeted (police are already bogged down dealing with a flood of fake AI child sex images), female celebrities are the most common victims. And their fake non-consensual intimate imagery is easily discoverable on Google by searching just about any famous name with the keyword “deepfake,” Bloomberg noted.

Google refers to this content as “involuntary fake” or “synthetic” pornography. The search engine provides a path for victims to report that content whenever it appears in search results. When processing those requests, Google also removes duplicates of any flagged deepfakes.

Google’s spokesperson told Ars that in addition to lowering the links in the search results, Google is “continuing to decrease the visibility of involuntary synthetic pornography in Search” and plans to “develop more safeguards as this space evolves.”

“We’ve been actively developing new protections in Search to help people affected by this content, building on our existing policies,” Google’s spokesperson said.

Over the past year, though, some victims have criticized Google’s process for delisting deepfakes as cumbersome. Kaitlyn Siragusa, a Twitch streamer with an explicit OnlyFans account, told Bloomberg in 2023 that, as a frequent target of deepfakes, she found requesting the delisting of every deepfake link to be “a time and energy draining process.”

“It’s a constant battle,” Siragusa said.

In response to Google downranking deepfakes, Carrie Goldberg—an attorney who represents victims whose sexual materials spread online without consent—told Bloomberg that Google’s latest update does “the bare minimum.”

For victims, the bottom line is that problematic links will still appear in Google’s search results for anyone willing to keep scrolling, and intentionally searching “deepfake” will still surface the most popular sites, which can then be searched directly.

So although US-based search traffic to the two most popular deepfake porn sites decreased by as much as 25 percent “in the first 10 days of May compared with the prior six months’ average,” Bloomberg reported, victims are still struggling to get Google to stop the spread.

“Google has ruined lives and is shamefully inflexible at responding to developing problems,” Goldberg said, joining others decrying Google’s move as “too little, too late.”

Google’s solution still largely puts the onus on victims to surface deepfakes and request removals, seemingly because Google only proactively removes web results when content directly violates the law or its content policies.

That would include removing child sexual abuse materials, spam, or personal information that could be used to dox someone. But deepfake porn remains in a legal gray area, and because of that, anyone can use the world’s most popular search engine to surface AI-generated non-consensual intimate imagery whenever they want. And while downranking may make deepfake porn harder for casual web surfers to stumble upon, it arguably also makes it harder and more time-consuming for victims who must search out their own deepfakes in order to report and delist them.

Some states have banned deepfakes, but so far, there is no federal law that could push Google to be more proactive in delisting links.

That could change if a high school victim, Francesca Mani, gets her way. She’s supporting the “Preventing Deepfakes of Intimate Images Act,” which would criminalize deepfake porn, imposing damages of up to $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

That legislation was introduced in the US House of Representatives this month and has been referred to the House Judiciary Committee, which has jurisdiction over law enforcement agencies.

“I’m here standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did,” 14-year-old Mani said when the legislation was introduced.

While spreading deepfake porn exacerbates the harm, some think creating the images should be a crime in the first place. The United Kingdom is weighing legislation that would impose an “unlimited fine” and possible jail time merely for creating deepfake images, even if the creator never intends to share them, Time reported.

Until there’s legal clarity for deepfake victims, the legal basis for requesting delisting from Google search results remains “shaky,” Nina Jankowicz, a disinformation researcher and deepfake victim, wrote in The Atlantic.

For victims, it seems like no solution goes far enough to end ongoing deepfake harassment.

According to Jankowicz, passing the federal law “won’t solve the deepfake problem,” because “the Internet is forever,” “AI grows more powerful by the month,” and users generating this content are “remarkably nonchalant about the invasion of privacy they are perpetrating.”

“As policy makers worry whether AI will destroy the world, I beg them: Let’s first stop the men who are using it to discredit and humiliate women,” Jankowicz wrote.

On top of adapting laws and reducing spread online, a culture shift is needed to end the abuse, Jankowicz suggests. As it stands now, deepfake porn “has become a prized weapon in the arsenal misogynists use to try to drive women out of public life,” Jankowicz wrote.
