Deepfake porn could be a growing problem amid AI race

Artificial intelligence can be used to create art and let users try on clothes in virtual reality.

NEW YORK (AP) - Artificial intelligence imaging is being used to create artwork, let users try on virtual clothes and help design advertising campaigns.

Experts fear that the dark side of these easily accessible tools will worsen a problem that is primarily harmful to women: deepfake nonconsensual pornography.

Deepfakes are videos and images that are digitally created or altered using artificial intelligence (AI) or machine learning. The first porn made with the technology spread across the web several years ago, when a Reddit user shared clips that placed the faces of female celebrities on the bodies of porn actors.

Since then, deepfake makers have distributed similar videos and images targeting online influencers and journalists. Thousands of videos now exist across a variety of websites, and some sites let users create their own images, allowing anyone to turn whomever they wish into a sexual fantasy without that person's consent.

Experts say the problem grew as it became easier to create sophisticated and visually convincing deepfakes. They say it could get worse with the spread of generative AI software, which is trained on millions of images found online and generates new content from that data.

Adam Dodge, founder of EndTAB, a group that provides training on technology-enabled abuse, said the reality is that the technology will continue to grow, develop and become as simple as pushing a button. And as long as that happens, he said, people will continue to misuse it, especially through deepfake pornography, fake nude photos and online sexual violence.

Noelle Martin, of Perth, Australia, has lived that reality. She discovered the content 10 years ago when she searched Google for an image of herself. To this day, Martin says she does not know who created the fake images and videos of her engaged in sexual acts. She believes someone took a photo posted on her Facebook page or elsewhere and turned it into porn.

Some websites took the content down, but she quickly found it posted again.

"You can't win," Martin said. "This is something that will always be there. It's like it has forever ruined you."

The more she spoke up, she said, the worse the problem became. Some people even told her that the way she dressed and posted on social media contributed to the harassment.

Martin shifted her focus to legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars if they do not remove such content as directed by online safety regulators.

But policing the internet is difficult when each country has its own laws, even for content created halfway across the globe. Martin, an attorney and legal researcher at the University of Western Australia, says she believes the problem must be addressed through a global solution.

Some AI companies say they have already begun restricting access to explicit images.

OpenAI says it removed explicit content from the data used to train its image generator DALL-E, which limits users' ability to create those types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and politicians. Midjourney, another model, blocks certain keywords and encourages users to flag problematic images.

In November, Stability AI released an update that removed the ability to generate explicit images with its image generator, Stable Diffusion. The changes came after reports that some users were creating nude images of celebrities using the technology.

Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other methods, like image recognition, to detect nudity and returns a blurred image. But because the company has released its code to the public, users can manipulate the software to generate whatever they desire. Bishara said Stability AI's license extends to third-party applications built on Stable Diffusion and prohibits any misuse for illegal or immoral purposes.

Some social media companies have also tightened their rules to better protect their platforms from harmful material.

TikTok announced last month that all deepfakes, or manipulated content, showing realistic scenes must be labeled to indicate they are fake or altered, and that deepfakes of private figures and young people are no longer allowed. The company previously banned sexually explicit material and deepfakes that mislead viewers about real events and cause harm.

Twitch, the gaming platform, also updated its policy on explicit deepfake images after it was discovered that a popular streamer known as Atrioc had a deepfake website open in his browser during a late-January livestream. The site displayed phony images of fellow Twitch streamers.

Twitch had already banned explicit deepfakes, but now showing even a glimpse of such content, even if the intention is to express outrage, "will result in an enforcement," the company wrote in a blog post. It also said that intentionally sharing, creating or promoting the material would result in an immediate ban.

Other companies have banned deepfakes as well, but keeping them off their platforms takes diligence.

Apple and Google removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to promote the product. Research into deepfake porn is rare, but a 2019 report by DeepTrace Labs found that it was almost exclusively weaponized against women and that the majority of the videos featured Western actresses.

The app removed by Google and Apple had run ads on Meta's platforms, which include Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement that the company's policies restrict both AI-generated and non-AI-generated adult content, and that it has restricted the app's page from advertising on its platforms.

Take It Down is an online tool launched in February by Meta and adult sites such as OnlyFans and Pornhub. It allows teens to report explicit images and videos of themselves on the internet. The site accepts reports of both regular images and AI-generated material, which has become a growing concern for child safety groups.

Gavin Portnoy, a spokesperson for the National Center for Missing & Exploited Children, which operates Take It Down, said that when its senior leaders are asked what worries them most - the boulders coming down the mountain - the answer is, first, end-to-end encrypted communication and its implications for child safety, and second, AI and deepfakes.

Portnoy said the organization has not yet been able to formulate a direct response to the issue.