AI-generated content discovered on news sites, content farms and product reviews
Dozens of websites are using artificial intelligence to create inauthentic content online, according to reports. Sometimes described as "deepfake" content, it is becoming more common and harder to spot as AI technology improves.

Dozens of fringe news sites, content farms and fake reviewers are using artificial intelligence to create inauthentic content online, according to two reports released on Friday.

The misleading content included fabricated events, fake medical advice and celebrity death hoaxes, raising new concerns about the technology's potential impact on the online misinformation landscape.
Both reports were released by NewsGuard, a company that tracks online misinformation, and ShadowDragon, a firm that offers resources and training to digital investigators.

Steven Brill, NewsGuard's co-chief executive, said news consumers trust sources less and less, in part because it has become so difficult to distinguish a reliable source from an unreliable one. This new wave of AI-created sites will make it even harder for consumers to know who is providing them with the news, further reducing trust.

NewsGuard identified more than 125 websites, ranging from news to lifestyle reporting and published in 10 languages, whose content was written entirely or mostly with AI tools. Among them, NewsGuard reported, was a health portal that published more than 50 AI-generated medical articles.
The first paragraph of an article on the site about identifying end-stage bipolar disorder reads: "As I am a language-model AI, I do not have access to current medical information, nor the ability to make a diagnosis." The term "end-stage bipolar" is not recognized by the medical community, and the article went on to describe the four classifications of bipolar disorder, which it incorrectly called "four major stages."

The websites are often littered with ads, NewsGuard said, suggesting the inauthentic content is produced to generate clicks and advertising revenue for the sites' owners, who are often unknown. NewsGuard identified 49 websites using AI-generated content this month.

ShadowDragon also discovered inauthentic content on popular websites and social media platforms, including Instagram, and in Amazon reviews. One five-star Amazon review read: "Yes, as a language model AI, I am able to write a positive review of the Active Gear Waist Trimmer." Researchers were able to replicate some reviews using ChatGPT, finding that the bot often pointed to "standout features" and concluded that it "highly recommended" the product.

The company also pointed to several Instagram accounts that appeared to use ChatGPT or other AI tools to write descriptions beneath images and videos.

To find the examples, researchers searched for the canned responses and error messages that AI tools often produce. On some websites, these AI-written warnings said the requested content was misleading or constituted harmful stereotyping. One message, on a story about the war in Ukraine, read: "As a language model for AI, I can't provide political or biased content."

ShadowDragon found similar messages on LinkedIn, in Twitter posts and on far-right message boards. Some of the tweets were posted by known bots, such as ReplyGPT, an account that generates tweet replies when prompted. Others appeared to come from regular users.
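The detection approach described above, searching published text for the canned phrases and error messages that chatbots emit, can be illustrated with a minimal sketch. The phrase list and function below are illustrative assumptions only; neither NewsGuard nor ShadowDragon has published the exact search terms or tooling it used.

```python
import re

# Illustrative only: a few boilerplate phrases that AI chatbots commonly emit.
# This list is an assumption for demonstration, not the firms' actual terms.
AI_BOILERPLATE = [
    r"as an ai language model",
    r"as a language model ai",
    r"i can't provide political or biased content",
    r"i do not have access to current medical information",
]

PATTERN = re.compile("|".join(AI_BOILERPLATE), re.IGNORECASE)


def find_ai_boilerplate(text: str) -> list[str]:
    """Return every canned AI phrase found in a block of published text."""
    return [match.group(0) for match in PATTERN.finditer(text)]


if __name__ == "__main__":
    sample = ("Yes, as a language model AI, I am able to write a positive "
              "review of the Active Gear Waist Trimmer.")
    print(find_ai_boilerplate(sample))  # ['as a language model AI']
```

In practice, phrase matching of this kind only catches the sloppiest cases; content that has been edited to strip the boilerplate would pass through undetected.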