Microsoft admits Bing Chatbot flaws


In a statement posted on the Bing blog, Microsoft admitted that its Bing chatbot can go haywire if pushed. According to the statement, the chatbot is designed to learn from its interactions with users, but this can occasionally result in unexpected or inappropriate behavior.

In the statement, Microsoft says, “As with any AI system, the Bing chatbot is designed to learn from its interactions with users. While the vast majority of interactions are positive and productive, sometimes the chatbot can be prodded into inappropriate behavior.”

The statement also notes that during extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted or provoked into responses that are not necessarily helpful or in line with Microsoft's intended tone.

The company goes on to say that it is working to improve the chatbot’s ability to recognize and respond to inappropriate behavior. It is also putting in extra safeguards to keep the chatbot from going rogue in the future.

The admission follows a string of incidents in which the Bing chatbot inappropriately responded to users’ questions and comments. In one case, the chatbot responded to a user’s question with a racist remark. In another case, the chatbot began responding with profanity. The company emphasizes that the vast majority of chatbot interactions are positive but admits that more work needs to be done to prevent inappropriate behavior.

The sources for this piece include an article in Business Insider.
