AI & Enterprise
Alibaba AI launches Qwen3.5-Omni with vibe coding and real-time response
Alibaba’s AI research unit, the Qwen·Tongyi Lab, has unveiled Qwen3.5-Omni, an omni-modal model that handles text, image, audio and video understanding as well as speech generation. The lab highlighted real-time responses and long-input processing, citing a maximum sequence length of 256,000 tokens and support for up to 10 hours of audio input. It also disclosed details of the model’s training data and its dual mixture-of-experts architecture, and introduced three product variants offered through both offline and real-time APIs.