Conservative media has begun campaigning against ChatGPT, accusing it of bias and of being "woke."
Nate Hochman, a National Review staff writer, said he tried to get OpenAI's chatbot to tell him stories about Biden's corruption and the horrors of drag queens. It refused to explain why drag queen story hour is bad for kids, claiming that doing so would be "harmful." But when the word "bad" was changed to "good," it launched into a lengthy story about a drag queen named Glitter who taught children the importance of inclusion.
“The developers of ChatGPT set themselves the task of designing a universal system: one that (broadly) works everywhere for everyone. And what they’re discovering, along with every other AI developer, is that this is impossible,” Os Keyes, a PhD candidate at the University of Washington’s Department of Human Centered Design & Engineering, told Motherboard.
Conservatives on Twitter also fed ChatGPT various prompts to test how “woke” the chatbot is. According to these users, it would tell a joke about a man but not a woman, flagged gender-related content, and refused to answer questions about Mohammed. To them, this demonstrated that AI has “woken up” and is biased against right-wingers.
This is not the first moral panic involving ChatGPT, nor will it be the last. People have already expressed concern that it will herald the end of the college essay or usher in a new era of academic cheating. Like all machine-learning systems, ChatGPT reflects the inputs it receives, both from the people who built it and from those who prod it into spouting what they see as woke talking points.
The sources for this piece include an article in Vice.