
Anthropic Confirms Claude Degradation Was Triggered by Harness and Prompt Changes, Not Model Weights

Technology
Published on 24 April 2026

Three “small” product-layer tweaks quietly altered reasoning

Developers reported “AI shrinkflation” as Claude appeared less capable, more repetitive, and less efficient with tokens. Anthropic’s technical post-mortem says the model weights didn’t regress; instead, three surrounding product-layer changes caused the decline: a lower reasoning-effort default, a caching bug that wiped “thinking” too often, and tighter verbosity limits. The company says it has reverted the changes and reset subscriber usage limits.

  • Anthropic blames quality drops on product-layer changes, not model weights
  • A caching logic bug repeatedly cleared “thinking,” harming short-term context
  • Reasoning effort and verbosity limits reduced performance on complex coding
  • Fixes include reverting changes and resetting subscriber usage limits
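To illustrate the kind of failure the caching bullet describes, here is a minimal, purely hypothetical sketch (not Anthropic’s actual code) of a session cache whose refresh path clears interim “thinking” state on every refresh instead of only when an entry has actually expired, degrading short-term context:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Hypothetical session state: conversation context plus interim 'thinking'."""
    context: list = field(default_factory=list)
    thinking: list = field(default_factory=list)

def refresh_cache(session: Session, expired: bool, buggy: bool) -> None:
    """Illustrative cache refresh.

    The buggy path wipes interim 'thinking' on *every* refresh;
    the fixed path clears it only when the cache entry truly expired.
    """
    if buggy or expired:
        session.thinking.clear()  # short-term reasoning state is lost

# Buggy behavior: thinking is wiped even though nothing expired.
s = Session(thinking=["step 1", "step 2"])
refresh_cache(s, expired=False, buggy=True)
assert s.thinking == []

# Fixed behavior: thinking survives a routine refresh.
s2 = Session(thinking=["step 1"])
refresh_cache(s2, expired=False, buggy=False)
assert s2.thinking == ["step 1"]
```

All names here (`Session`, `refresh_cache`) are invented for illustration; the post-mortem describes the bug only at the level of “caching logic cleared thinking too often.”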
Read the full story at Venture Beat

This summarization was done by Beige for a story published on Venture Beat.
