LLMs Can Get Stuck in Loops and Produce Degrading Results Over Time
Several users reported that LLMs, particularly when used for coding tasks, often get stuck in unproductive loops, especially when they don't find a solution quickly. They may start making irrelevant changes, repeating the same actions, or even hallucinating code.
* "I don’t think I’ve encountered a case where I’ve just let the LLM churn for more than a few minutes and gotten a good result. If it doesn’t solve an issue on the first or second pass, it seems to rapidly start making things up, make totally unrelated changes claiming they’ll fix the issue, or trying the same thing over and over." - mikeocool
* "This is consistent with my experience as well." - the__alchemist
* "Yeah if it gets stuck and can't easily get itself unstuck, that's when I step in to do the work for it. Otherwise it will continue to make more and more of a mess as it iterates on its own code." - ziml77
* "I brought over the source of the Dear imgui library to a toy project and Cline/Gemini2.5 hallucinated the interface and when the compilation failed started editing the library to conform with it. I was all like: Nono no no no stop." - fcatalan
Detailed Instructions and Guardrails Improve LLM Performance
Some users find that providing detailed and thorough instructions, along with clear criteria for the solution, can significantly improve the quality of results and mitigate the issue of LLMs getting stuck.
* "What I found is that the quality of results I get, and whether the AI gets stuck in the type of loop you describe, depends on two things: how detailed and thorough I am with what I tell it to do, and how robust the guard rails I put around it are." - enraged_camel
* "To get the best results, I make sure to give detailed specs of both the current situation (background context, what I've tried so far, etc.) and also what criteria the solution needs to satisfy. So long as I do that, there's a high chance that the answer is at least satisfying if not a perfect solution. If I don't, the AI takes a lot of liberties (such as switching to completely different approaches, or rewriting entire modules, etc.) to try to reach what it thinks is the solution." - enraged_camel
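A minimal sketch of the kind of "detailed spec" prompt enraged_camel describes, structured as background, prior attempts, and acceptance criteria. The function name, section headings, and the example task are illustrative assumptions, not part of any particular tool.

```python
def build_task_prompt(background: str, attempts: str, criteria: list[str], task: str) -> str:
    """Assemble a prompt that states context and acceptance criteria up front."""
    lines = [
        "## Background",
        background,
        "",
        "## What has been tried so far",
        attempts,
        "",
        "## Task",
        task,
        "",
        "## The solution must satisfy all of these criteria",
        *[f"- {c}" for c in criteria],
        "",
        # A guard-rail instruction to discourage the "taking liberties" behavior quoted above.
        "Do not switch approaches or rewrite unrelated modules; "
        "if a criterion cannot be met, stop and explain why instead of guessing.",
    ]
    return "\n".join(lines)


print(build_task_prompt(
    background="Flask service returns 500s on /export when the CSV exceeds 10 MB.",
    attempts="Raised the request timeout; the error persists.",
    criteria=[
        "Stream the CSV instead of building it in memory",
        "Keep the existing /export route signature",
        "Add a regression test for a 50 MB export",
    ],
    task="Fix the /export endpoint so large exports succeed.",
))
```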
LLMs Can 'Forget' Instructions Over Time Due to Context Drift or Rot
Several users mention that LLMs sometimes appear to forget instructions or system prompts, especially with larger conversational contexts, which degrades the quality of the outputs over time. This has been described as "context rot."
* "But don't they keep forgetting the instructions after enough time has passed? How do you get around that? Do you add an instruction that after every action it should go back and read the instructions again?" - prmph
* "They poison their own context. Maybe you can call it context rot, where as context grows, and especially if it grows with lots of distractions and dead ends, the output quality falls off rapidly. Even with good context the rot will start to become apparent around 100k tokens (with Gemini 2.5)." - Workaccount2
* "We do know that as context length increases, adherence to the system prompt decreases." - potatolicious
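A rough sketch of one way to watch for this: track approximate context size and flag when it approaches the point where one commenter reports quality falling off (around 100k tokens). The 4-characters-per-token estimate and the threshold are assumptions, not properties of any particular model.

```python
def approx_tokens(messages: list[dict]) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return sum(len(m["content"]) for m in messages) // 4


def should_refresh_context(messages: list[dict], soft_limit: int = 100_000) -> bool:
    """Suggest starting a fresh session before the context grows past the soft limit."""
    return approx_tokens(messages) > soft_limit


history = [
    {"role": "system", "content": "You are a careful coding assistant."},
    {"role": "user", "content": "Refactor the parser module to handle nested quotes."},
]
if should_refresh_context(history):
    print("Context is getting long; consider summarizing and starting a new session.")
```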
Mitigation Strategies Involve Refreshing Context or Pruning Poisonous Tokens
To counter the problem of degrading context, users report success with manually refreshing context, starting a new session seeded with relevant snippets from the previous one, or using tools that let them remove or forget parts of the conversation memory. A sketch of the summarize-and-restart workflow appears after the quotes below.
* "As I feel the LLM getting off track, I start a brand new session with useful previous context pasted in from my previous session. This seems to help steer it back to a decent solution" - kossae
* "Right now I work around it by regularly making summaries of instances, and then spinning up a new instance with fresh context and feed in the summary of the previous instance." - Workaccount2
* "They really need to figure out a way to delete or "forget" prior context, so the user or even the model can go back and prune poisonous tokens...This is possible in tools like LM Studio when running LLMs locally. It's a choice by the implementer to grant this ability to end users." - OtherShrezzing
* "In Claude Code you can use /clear to clear context, or /compact
Debugging in Isolation and More Capable LLMs Can Help
Some users suggest asking the LLM to debug the issue as a preliminary step rather than fixing it immediately, and note that switching to a more capable (and often more expensive) model can sometimes avoid the pitfalls of iterative failure.
* "when this happens I do the following 1) switch to a more expensive llm and ask it to debug: add debugging statements, reason about what's going on, try small tasks, etc 2) find issue 3) ask it to summarize what was wrong and what to do differently next time 4) copy and paste that recommendation to a small text document 5) revert to the original state and ask the llm to make the change with the recommendation as context" - qazxcvbnmlp
* "A lot of times, just asking the model to debug an issue, instead of fixing it, helps to get the model unstuck (and also helps providing better context)" - nico
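A sketch of the debug-then-revert loop qazxcvbnmlp describes. `call_llm` is a placeholder for the chat API in use, the revert step assumes a git working tree, and the `llm_lessons.md` file name is illustrative.

```python
import subprocess
from pathlib import Path


def call_llm(messages: list[dict]) -> str:
    """Placeholder: send messages to a (more capable) chat model, return the reply."""
    raise NotImplementedError("wire this to your provider's chat API")


def revert_working_tree() -> None:
    """Throw away the model's failed edits (assumes a clean git checkout underneath)."""
    subprocess.run(["git", "checkout", "--", "."], check=True)


def debug_then_retry(problem: str, system_prompt: str) -> str:
    # 1) Ask the model to diagnose, not fix: add logging and reason about the failure.
    diagnosis = call_llm([
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Do not fix anything yet. Debug this issue and "
                                    f"explain the root cause:\n{problem}"},
    ])
    # 2-4) Distill what went wrong and what to do differently, and save that note.
    lesson = call_llm([
        {"role": "user", "content": "Summarize what was wrong and what to do "
                                    f"differently next time:\n{diagnosis}"},
    ])
    Path("llm_lessons.md").write_text(lesson)
    # 5) Revert to the original state and retry the fix with the lesson as context.
    revert_working_tree()
    return call_llm([
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Fix this issue:\n{problem}\n\nKeep this in mind:\n{lesson}"},
    ])
```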
LLMs Excel at Simple Tasks but Struggle with Complex Ones
Several participants expressed the view that LLMs are useful for simple tasks but need human guidance when things become more involved. Some even joke that the LLM behaves like a clueless employee.
* "This honestly sounds slower than just doing it myself, and with more potential for bugs or non-standard code...I've had the same experience as parent where LLMs are great for simple tasks but still fall down surprisingly quickly on anything complex and sometimes make simple problems complex." - rurp
* "It may be doing the wrong thing like an employee, but at least it's doing it automatically and faster." - mathattack