Modulate has unveiled ELM, a voice AI that understands emotion, context and intent, SiliconANGLE reported on Jan. 20.
The company said ELM is built on a multilayer architecture that understands voice data directly, unlike existing large language models (LLMs).
According to Modulate, ELM delivers 30 percent higher accuracy than existing models from OpenAI, Google, DeepSeek and ElevenLabs, while costing 10 to 100 times less to operate. The company stressed that ELM captures emotional and contextual cues that text-based AI misses, enabling more accurate voice analysis.
Modulate said it designed ELM to overcome limitations it encountered while developing ToxMod, its tool for monitoring in-game voice chat. ToxMod identifies emotion, intent and context to detect bullying and hate speech in real time.