AI drift or is ChatGPT getting dumber?
Users have been noticing for the past few weeks that ChatGPT seems to be getting less accurate. While it’s possible that the chatbot really is getting “dumber” over time, it could also simply be experiencing AI drift. Let’s look at what we know so far.
No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one.
— Peter Welinder (@npew) July 13, 2023
Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before.
Could ChatGPT be getting dumber?
OpenAI’s ChatGPT is built on large language models (LLMs), which are trained on massive datasets of text and code. They work using a technique called deep learning, a type of machine learning that relies on artificial neural networks to learn patterns from data. Because OpenAI periodically retrains and updates these models, their behavior can change over time.
In a recent study, researchers from Stanford and Berkeley found that the response accuracy of both GPT-3.5 and GPT-4 appeared to become worse over time, highlighting the need to keep evaluating LLMs’ behavior.
The study evaluated responses on a number of different tasks, including math problems, opinion surveys, and visual reasoning, and compared GPT-3.5 and GPT-4 responses from March and June 2023.
“For example, GPT-4 (March 2023) was reasonable at identifying prime vs. composite numbers (84% accuracy) but GPT-4 (June 2023) was poor on these same questions (51% accuracy),” the study says.
Credit: How Is ChatGPT’s Behavior Changing over Time? by Lingjiao Chen, Matei Zaharia, James Zou
“We find that the performance and behavior of both GPT-3.5 and GPT-4 varied significantly across these two releases and that their performance on some tasks have gotten substantially worse over time, while they have improved on other problems,” according to the study.
These findings could lend weight to the rumors of the LLM getting “dumber”. However, the study also notes that GPT-3.5 and GPT-4 improved on other tasks over the same period.
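To make the prime-number comparison above concrete, here is a minimal, hypothetical sketch of how such a check could be reproduced against two dated GPT-4 snapshots. The snapshot names, the yes/no prompt, and the use of the openai and sympy packages are assumptions for illustration, not code published by the study.

```python
# Hypothetical re-run of the prime-vs-composite check: ask two GPT-4
# snapshots the same yes/no questions and score them against ground truth.
# Assumes the `openai` Python package (v1+) and that the dated snapshots
# "gpt-4-0314" and "gpt-4-0613" are still available to your API key.
from openai import OpenAI
from sympy import isprime  # ground truth for prime/composite

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def model_says_prime(model: str, n: int) -> bool:
    """Ask one model snapshot whether n is prime; expect a bare Yes/No."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user",
                   "content": f"Is {n} a prime number? Answer Yes or No only."}],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

def accuracy(model: str, numbers: list[int]) -> float:
    """Fraction of numbers where the model's answer matches the ground truth."""
    correct = sum(model_says_prime(model, n) == isprime(n) for n in numbers)
    return correct / len(numbers)

test_numbers = list(range(10_007, 10_107))  # a small, arbitrary sample
for snapshot in ("gpt-4-0314", "gpt-4-0613"):  # March vs. June 2023 snapshots
    print(snapshot, f"{accuracy(snapshot, test_numbers):.1%}")
```

Running the same fixed question set against each snapshot is what lets the accuracy numbers be compared across time, since any difference then comes from the model rather than from the benchmark.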
Could ChatGPT be experiencing AI drift?
Artificial intelligence drift happens when AI systems, such as chatbots or LLMs, begin to deviate from their original parameters and instructions, producing responses and behavior their human creators did not intend.
In machine learning terms, drift is a change in the data distribution over time. If a model is not updated to reflect that change, its predictions gradually become less accurate; a simple way to detect this is sketched after the list below.
There are two main types of AI drift:
Gradual drift occurs as the data distribution shifts slowly over time. This can be caused by changes in customer behavior, the economy, or the law.
Sudden drift occurs when there is an abrupt, major change in the data distribution. This can be caused by events such as a natural disaster, a terrorist attack, or a product recall.
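As an illustration of how teams watch for either kind of drift, here is a minimal sketch that compares a feature’s distribution in a reference window against a recent window using a two-sample Kolmogorov-Smirnov test. The synthetic data, the feature, and the significance threshold are assumptions made for the example; this is not something OpenAI has described doing for ChatGPT.

```python
# Minimal data-drift check: compare a feature's distribution at training time
# (reference window) against a recent window, using a two-sample
# Kolmogorov-Smirnov test. The values here are synthetic; in a real system
# they would come from production logs.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference window: what the model saw during training.
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Recent window: behavior has shifted (gradual drift), so the mean has moved.
current = rng.normal(loc=0.4, scale=1.0, size=5_000)

statistic, p_value = ks_2samp(reference, current)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2g}); "
          "consider retraining or recalibrating the model.")
else:
    print("No significant drift detected.")
```

In practice, a check like this would run on a schedule over live data, so that gradual drift shows up as a slowly growing test statistic while sudden drift appears as an abrupt jump.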