OpenAI’s ChatGPT has been criticized for a decline in accuracy on math, code generation, problem solving, and sensitive questions since March. A study by researchers at Stanford University and UC Berkeley found that GPT-4, the model behind ChatGPT Plus, performed worse than its predecessor, GPT-3.5, on these tasks.
In response to these criticisms, OpenAI has introduced a feature called “custom instructions.” It lets users add standing requirements to their account, which the model then takes into account in every subsequent conversation. The goal is to help users get the responses they need from ChatGPT, even if the model’s performance has declined.
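Custom instructions is a ChatGPT interface feature, but its effect is roughly that of a persistent system message prepended to every conversation. A minimal sketch of the analogous pattern with the OpenAI API follows; the instruction text, helper name, and the commented-out API call are illustrative, not part of the feature itself:

```python
# Hypothetical standing instructions, mimicking ChatGPT's
# "custom instructions" as a persistent system message.
custom_instructions = (
    "I am a data engineer. Answer concisely, prefer Python examples, "
    "and mention the standard library where possible."
)

def make_messages(user_prompt, instructions=custom_instructions):
    """Prepend the standing instructions to every new conversation."""
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": user_prompt},
    ]

# Each request would then pass these messages to the chat API, e.g.:
# response = client.chat.completions.create(
#     model="gpt-4", messages=make_messages("How do I parse ISO dates?"))
```

Because the instructions ride along with every request, the user states their preferences once rather than repeating them in each prompt.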
Some experts are skeptical of the study itself. Simon Willison, a former Google engineer, finds aspects of the paper unconvincing, while Arvind Narayanan, a computer science professor at Princeton University, emphasizes that a model’s capabilities and its behavior are distinct things to measure.
Experts therefore suggest that refining prompts can address some of GPT’s issues. By writing specific, context-rich, and detailed prompts, users can get better responses and blunt the impact of any recent decline in performance. Online courses on prompt optimization and on how language models are trained can help users craft more effective prompts.
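The “specific, context-rich, detailed” advice above can be made concrete with a small sketch. The `build_prompt` helper and its field names are hypothetical, used only to show how a vague request can be enriched with context and explicit constraints:

```python
def build_prompt(task, context, constraints):
    """Assemble a detailed prompt from a task description,
    background context, and explicit output constraints."""
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

# A vague prompt leaves the model to guess language, input shape, and style:
vague = "Write a sorting function."

# A context-rich version pins those details down:
detailed = build_prompt(
    task="Write a Python function that sorts a list of dicts by the 'date' key.",
    context="Dates are ISO 8601 strings; the list may hold up to 10,000 items.",
    constraints=["use only the standard library", "include a docstring"],
)
print(detailed)
```

The structure matters less than the habit: stating the task, the surrounding context, and the constraints separately forces the prompt to carry the details the model would otherwise have to guess.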
The sources for this piece include an article in AnalyticsIndiaMag.