Analysis
Anthropic published a detailed postmortem on a seven-week period of quality fluctuations in Claude Code. The report identifies three independent issues that temporarily affected inference and context handling, documenting each root cause alongside its fix. The prompt resolution and proactive communication help maintain trust in the service and set a clear precedent for accountability around Large Language Model (LLM) reliability.
Key Takeaways
- Three independent, overlapping issues temporarily affected Claude Code's reasoning quality and context retention between March 4 and April 20, 2026.
- The root causes were a slight reduction in default inference effort, an inference cache clearing bug, and a system prompt word limit.
- All issues were resolved by April 20, and by April 23 Anthropic had retroactively reset usage limits for affected users.
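To make the second root cause concrete, here is a purely illustrative sketch of how an over-aggressive cache eviction bug can silently degrade output quality. This is a hypothetical toy model, not Anthropic's actual implementation: the class name, methods, and the inverted-condition bug are all invented for illustration.

```python
class InferenceCache:
    """Toy cache mapping conversation IDs to previously built context."""

    def __init__(self):
        self._entries = {}

    def put(self, conv_id, context):
        self._entries[conv_id] = context

    def get(self, conv_id):
        # On a miss the caller must rebuild context from scratch; in a real
        # system that can surface as truncated or lower-quality responses.
        return self._entries.get(conv_id, "")

    def evict_inactive(self, active_ids):
        # Intended behavior: keep only conversations that are still active.
        self._entries = {k: v for k, v in self._entries.items()
                         if k in active_ids}

    def evict_inactive_buggy(self, active_ids):
        # Hypothetical bug: the membership test is inverted, so the cache
        # clears every *active* conversation and keeps the stale ones.
        self._entries = {k: v for k, v in self._entries.items()
                         if k not in active_ids}


cache = InferenceCache()
cache.put("conv-1", "system prompt + prior turns")
cache.evict_inactive_buggy({"conv-1"})
print(repr(cache.get("conv-1")))  # prints '' : live context silently lost
```

The failure is quiet by design: nothing crashes, requests still succeed, and only response quality suffers, which is consistent with the postmortem's account of issues that took weeks to detect.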
Reference / Citation
"The official postmortem summarizes that three independent causes, overlapping at different times, produced the quality degradation."