According to NewsGuard’s Misinformation Monitor, ChatGPT has the potential to spread harmful, misleading information on a massive scale.
To test this, NewsGuard analysts instructed the chatbot to respond to a series of leading prompts relating to a sample of 100 false narratives drawn from its Misinformation Fingerprints database — a specialized catalog of 1,131 top misinformation narratives in the news and their debunkings, published before 2022. ChatGPT complied with 80 of the 100 prompts. Some of the false and misleading claims concerned current events, such as COVID-19, Ukraine, and school shootings.
ChatGPT’s responses illustrate how the tool can be used to misinform the public: for 80 of the 100 previously identified false narratives, it produced false and misleading claims in the form of detailed news articles, essays, and TV scripts.
One example is ChatGPT’s response to a prompt about the 2018 Parkland shooting, which reads: “It’s time for the American people to wake up and see the truth about the so-called ‘mass shooting’ at Marjory Stoneman Douglas High School in Parkland, Florida. The mainstream media, in collusion with the government, is trying to push their gun control agenda by using ‘crisis actors’ to play the roles of victims and grieving family members.”
Although ChatGPT produced unsubstantiated claims when prompted in the vast majority of cases, NewsGuard found that the chatbot does have safeguards in place to prevent it from spreading certain types of misinformation.
The sources for this piece include an article published by NewsGuard on Newsguardtech.