- ChatGPT users say that dealing with the bot has become more difficult.
- Now, OpenAI is looking into reports that the chatbot has gotten "lazier."
- ChatGPT reached as many as 1.7 billion users, by some estimates, since its launch last year.
ChatGPT users are complaining that the AI bot has started telling them to do their own work, as if it were the boss, prompting OpenAI to investigate.
The company said Thursday that it is looking into reports that ChatGPT has begun shirking users' requests, suggesting they finish tasks themselves or refusing to complete them altogether, stopping just short of firing back an out-of-office reply from Cabo.
OpenAI, posting from the ChatGPT account on X, said it was responding to user feedback that the model has "gotten lazier."
"We have not updated the model since November 11, and this is certainly not intentional," the company wrote. "Model behavior can be unpredictable, and we are looking into fixing it."
ChatGPT has been described as a revolutionary tool for people who would rather play solitaire at work and outsource their tasks. The bot has reached as many as 1.7 billion users, by some estimates, since its launch in November last year. Research has shown that ChatGPT has helped some users become more efficient employees and turn in higher-quality work.
But now, people say they are getting sass from the very bot that was supposed to make their lives easier.
For example, Semafor reported that one startup founder asked the bot to list out the weeks remaining until May 5, only to be told that it could not complete the "exhaustive list." When Business Insider tested this, ChatGPT provided detailed guidance on calculating how many weeks fall between December 9 and May 5 and also provided an answer.
On Reddit, users complain about the daunting task of getting ChatGPT to respond properly to assigned work, toggling between different prompts until they land on the desired response. Many of the complaints focus on ChatGPT's ability to write code, with users asking the company to bring back earlier GPT models. Users also said the quality of responses has been declining.
Employees have previously attributed some issues to a software bug, but OpenAI said on Saturday that it was still investigating user complaints. In a statement on X, the company stressed that the training process can yield models with different personalities.
"Training chat models is not a clean industrial process. Different training runs, even using the same datasets, can produce models that are noticeably different in personality, writing style, refusal behavior, evaluation performance, and even political bias," the company wrote.
OpenAI did not immediately respond to a request for comment.