Thinking Machines launches interaction models for real-time, human-like AI communication
Technology
Published on 12 May 2026

Audio and vision feed one continuous conversation
Thinking Machines Lab has unveiled “interaction models,” a new kind of multimodal AI built to communicate in real time. Unlike systems that wait for separate inputs, these models process audio and visuals together, enabling continuous responses and sharply lower latency. The goal: make human-AI collaboration feel more natural, especially for time-sensitive enterprise and industrial use cases.
- Thinking Machines introduces “interaction models” for real-time communication
- Models process audio and visual inputs simultaneously
- Designed to reduce response latency and enable continuous reactions
- Targeted at time-critical enterprise and industrial applications
Read the full story at The Economic Times
This summary was produced by Beige for a story published by The Economic Times.
