
OpenAI pays $25,000 for GPT-5.5 jailbreaks and safety bypasses in new bounty challenge

Technology
Published on 24 April 2026

The bounty asks for universal jailbreak prompts, not one-off exploits

OpenAI has launched a “bio bug bounty” offering $25,000 to vetted security researchers who can bypass safety guardrails on its latest model, GPT-5.5. The program aims to identify universal jailbreak prompts and expand external adversarial testing, signaling a more open, researcher-driven approach to stress-testing AI safety.

  • OpenAI offers $25,000 via a “bio bug bounty” for GPT-5.5 jailbreaks
  • Researchers must bypass safety guardrails using jailbreak prompts
  • The focus is on universal prompts, not isolated vulnerabilities
  • The move increases external adversarial testing for AI safety
Read the full story at The Economic Times

This summarization was done by Beige for a story published on The Economic Times.
