
ChatGPT gives flawed answers to programmers

A study from Purdue University has found that ChatGPT, the large language model chatbot developed by OpenAI, answered only 48% of programming questions correctly. The study also found that ChatGPT’s answers were often verbose, yet many programmers still preferred them because of the chatbot’s pleasant, confident, and positive tone.

The study’s authors, Samia Kabir, David Udo-Imeh, Bonan Kou, and assistant professor Tianyi Zhang, say that ChatGPT’s incorrect answers were often due to its inability to understand the underlying context of the question being asked. They also say that ChatGPT’s verbose answers can make it difficult for programmers to identify errors.

For the study, the researchers posed 517 programming questions from Stack Overflow to ChatGPT and had a group of twelve volunteers evaluate the responses. The evaluation went beyond correctness to include factors such as consistency, clarity, and brevity.

Although only 48% of ChatGPT’s responses were correct, nearly 40% of participants preferred its answers, attributing this preference to its comprehensive and eloquent language. Even when ChatGPT was outright wrong, 2 of the 12 participants still preferred its responses.

The sources for this piece include an article in TechSpot.
