OpenAI has launched a “bio bug bounty” offering $25,000 to vetted security researchers who can bypass the safety guardrails on its latest model, GPT-5. The program aims to surface universal jailbreak prompts and expand external adversarial testing, signaling a more open, researcher-driven approach to stress-testing AI safety.