Some users of OpenAI’s GPT-4 model are dissatisfied with its recent performance, claiming the model has degraded, becoming “lazier” and “dumber” than it once was.
Users complain that GPT-4 exhibits weaker reasoning, gives inaccurate replies, struggles to recall knowledge, fails to follow directions, and forgets fundamental code syntax. They are also irritated by GPT-4 repeating code and looping through the same information. One user compared relying on GPT-4 to code functions for a website to driving a luxury car that suddenly turns into a run-down truck.
The issue appears to be related to OpenAI’s reported decision to switch to a mixture-of-experts (MoE) architecture, which is designed to improve performance and reduce costs. However, some experts believe the MoE approach may be hurting GPT-4’s accuracy.
In a podcast, George Hotz, a computer scientist and entrepreneur, described OpenAI as using an eight-way mixture model for GPT-4. On this account, the model is composed of eight smaller expert models, each trained on a different task or subject area. When a user submits a query, a gating mechanism in the MoE model decides which of the smaller models should handle it.
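To make the routing idea concrete, here is a minimal sketch of how a gating network in an MoE system might pick one of eight experts for a given query. This is purely illustrative, not a description of GPT-4’s actual internals: the embedding size, the random gating weights, and the top-1 routing rule are all assumptions made for the example.

```python
# Minimal, illustrative sketch of mixture-of-experts (MoE) routing.
# NOT OpenAI's implementation: the gate weights, embedding size, and
# top-1 routing rule are assumptions chosen for demonstration only.
import numpy as np

NUM_EXPERTS = 8   # Hotz described an eight-way mixture
EMBED_DIM = 16    # hypothetical query-embedding size

rng = np.random.default_rng(0)

# Hypothetical gating weights: map an embedded query to one score per expert.
gate_weights = rng.normal(size=(EMBED_DIM, NUM_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def route(query_embedding):
    """Score all experts and pick the highest-scoring one (top-1 routing)."""
    scores = softmax(query_embedding @ gate_weights)
    expert_id = int(np.argmax(scores))
    return expert_id, scores

# Example: route a random "query embedding" to one of the eight experts.
query = rng.normal(size=EMBED_DIM)
chosen, probs = route(query)
print(f"Query routed to expert {chosen} (gate probability {probs[chosen]:.2f})")
```

In practice, MoE systems often route each token (rather than a whole query) to the top-k scoring experts and blend their outputs weighted by the gate probabilities; the single-expert, whole-query routing above is a simplification.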
The sources for this piece include an article in Business Insider.