
Miami startup Subquadratic claims 1,000x AI efficiency with subquadratic LLM; researchers demand independent proof

Technology
Published on 6 May 2026

The core math claim would collapse long-context costs

Miami startup Subquadratic says its SubQ 1M-Preview LLM finally escapes the quadratic attention cost that has constrained major models since 2017. It claims up to 1,000x reductions in attention compute and is launching an API, a coding agent, and a search tool after a $29 million seed round. But researchers question cherry-picked benchmarks and missing pricing, calling for independent validation.

  • Subquadratic claims linear scaling via fully subquadratic attention architecture
  • It reports massive efficiency gains, especially at 1M-token contexts
  • Critics flag narrow benchmarks, single-run testing, and unexplained research-to-production gaps
  • Prior long-context claims from similar startups make the independent-proof threshold higher
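To see why a subquadratic architecture could yield such large savings at 1M-token contexts, a back-of-envelope cost model helps. The sketch below is illustrative only: the article does not describe Subquadratic's actual architecture, so the function names, the linear-attention cost formula, and the head dimension `d = 128` are all assumptions, not claims about SubQ 1M-Preview.

```python
# Illustrative cost model: standard softmax attention computes an
# n x n score matrix, costing roughly n^2 * d multiply-adds per head,
# while a kernelized "linear attention" style approach costs roughly
# n * d^2. These formulas are textbook approximations, not a
# description of Subquadratic's (undisclosed) method.

def quadratic_attention_cost(n: int, d: int) -> int:
    """Approximate multiply-adds for full softmax attention scores."""
    return n * n * d

def linear_attention_cost(n: int, d: int) -> int:
    """Approximate multiply-adds for a linear (kernelized) attention."""
    return n * d * d

if __name__ == "__main__":
    d = 128  # a common per-head dimension (assumed, not from the article)
    for n in (4_096, 131_072, 1_000_000):
        ratio = quadratic_attention_cost(n, d) / linear_attention_cost(n, d)
        print(f"context n={n:>9,}: quadratic/linear cost ratio = {ratio:,.0f}x")
```

Under this toy model the ratio is simply n/d, so the advantage grows linearly with context length: negligible at short contexts, in the thousands at 1M tokens. That is the shape of argument behind "especially at 1M-token contexts," though whether the claimed 1,000x holds in practice is exactly what critics want independently verified.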
Read the full story at Venture Beat

This summarization was done by Beige for a story published on Venture Beat.
