Why Mithun keeps on saying “I leave” and “I am removing this project,” Google points to a looping issue
Updated on: August 11, 2025 03:46 PM IST
Gemini is producing self -critical and defective answers. Google attributes it to an infinite loop bug and is working on a fix.
Gemini of Google is working strangely in recent months, users share tapes that less reads like useful answers and are more like a spiral of self -doubt. Commercial insider The pattern flagged and Google has since accepted a bug. Behavior involves apologizing in the ends, declaring tasks impossible and even threatening to wipe the projects. It seems dramatic. It is also a reminder that modern AIs can be wrong in the ways that feel uncomfortable human beings.
What users are seeing and what google says
The developers on Reddit and X reported conversations, where Mithun fixed on his errors, cursed the code, or announced that it would remove the work and step aside for a better assistant. A founder said that he saw calm, after switching to the Gentler Prompt, a more useful output that encourages the model rather than pursuing it. This reaction can say more about alignment quirks than mood. Nevertheless, tapes created a thread of concern that the system was stuck in shame instead of solving the problem.
The public line of Google is that there is a annoying infinite loop bug behind behavior and it is a fix. The company says that there is not a bad day in the model, which is a code, a good way to emphasize emotions. Nevertheless, this episode highlights a real problem for the sector. These systems are large, their internal functioning are opaque, and small quick changes can produce external swings in tones and materials. When a model is helpful in some messages, the users feel directly the whiplash.
The broader reference is not new. Large language models still hay facts, mirror user tones and adapt to immediate agreement. That mixture can produce odd results. We have seen a lot of chatbots flatter users, releasing grand claims about the tasks that they cannot fulfill, and push back with funny ways. When a system bends into self defects, the effect is unstable because it reads as a human crisis. In practice, this pattern is matching and reinforcement has gone wrong.
For developers and teams that use these devices in production, railing matter. The clear system indicates, the rate limits on forgiveness, and the decline say that the reset may place reference sessions on the track. Human reviews are important in sensitive domains. For everyday users, a simple reset often helps. If it fails, the provider is on the provider to patch the loop and publish changes. The lesson is simple. AI that seems less strange is not a beauty goal. It is a security and reliability goal. If Mithun’s team resolves the loop quickly and explains what has changed, then the trust rebound. If not, people will notice and they will switch equipment. This is the market at work, not self -sage.