Tesla's AI computing optimisation technology is likely to change the paradigm of AI chip design. [Photo: ChatGPT-generated image]

[DigitalToday reporter Jinju Hong] Tesla has unveiled a new technology that enables high-performance artificial intelligence (AI) computing even in low-power environments. The technology was first disclosed in the recently published patent "US20260017019A1", and at its core is a Tesla-developed technique called "Mixed Precision Bridge".

On Jan. 17 (local time), blockchain media outlet Cryptopolitan reported that the technology aims to deliver up to 40 times the performance of existing hardware by combining energy-efficient 8-bit computing with high-precision 32-bit computing (Rot8). It is seen as a core foundational technology for AI5, the next-generation AI chip Tesla is preparing.
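The patent's internals are not quoted in the report, but the general idea of bridging 8-bit and 32-bit arithmetic can be sketched: run the bulk multiply-accumulate work on quantized int8 values, then rescale the result back to float32. A minimal NumPy illustration, assuming simple symmetric per-tensor quantization (the helper names here are illustrative, not the patent's):

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Map a float32 tensor onto int8 with a single symmetric scale."""
    scale = float(np.max(np.abs(x))) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def mixed_precision_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    # int8 x int8 products are accumulated in int32 to avoid overflow,
    # then rescaled to float32 -- the "bridge" between the two precisions.
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * np.float32(sa * sb)

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8)).astype(np.float32)
b = rng.normal(size=(8, 4)).astype(np.float32)
print(np.max(np.abs(mixed_precision_matmul(a, b) - a @ b)))  # small quantization error
```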

Mixed Precision Bridge is particularly important for Tesla's humanoid robot Optimus, which carries a battery of only about 2.3 kWh. Under the existing 32-bit GPU approach, AI inference alone would draw more than 500 watts, enough to drain the pack in roughly 4 hours, but Tesla explained it has cut that draw to under 100 watts. This enables Optimus to work continuously for more than 8 hours without overheating issues.
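Those figures are straightforward to sanity-check against the stated battery capacity:

```python
# Back-of-the-envelope runtime check using the article's own numbers.
battery_wh = 2.3 * 1000              # Optimus pack: ~2.3 kWh
for watts in (500, 100):             # legacy 32-bit draw vs. claimed new draw
    print(f"{watts} W -> {battery_wh / watts:.1f} h of inference runtime")
# 500 W -> 4.6 h  : inference alone nearly empties the pack
# 100 W -> 23.0 h : leaves most of the budget for motors and sensors
```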

Tesla also said the technology mitigates the so-called "memory loss problem" that occurred in FSD (Full Self-Driving). Previously, if a stop sign was briefly obscured by a large vehicle, the AI would forget it; the new Mixed Precision Bridge maintains a long context window and high positional resolution, enabling the AI to remember information from more than 30 seconds earlier as accurate 3D coordinates. Tesla said that by using RoPE (rotary position encoding), the sign's location is stably maintained in the vehicle's "mental map".
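How Tesla maps 3D coordinates onto RoPE is not detailed in the report, but the standard one-dimensional form of rotary position encoding is simple: each pair of feature dimensions is rotated by an angle proportional to the token's position, so dot products between encoded vectors depend only on relative position. A minimal sketch:

```python
import numpy as np

def rope(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position encoding. x: (seq, dim) with even dim."""
    seq, dim = x.shape
    freqs = base ** (-np.arange(0, dim, 2) / dim)   # one frequency per pair
    angles = positions[:, None] * freqs[None, :]    # (seq, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin              # 2D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = np.ones((4, 8), dtype=np.float32)
print(rope(q, np.arange(4)).shape)  # (4, 8): same shape, position baked in
```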

The patent also describes a Log-Sum-Exp approximation technique. It allows audio spanning a wide dynamic range, from quiet sounds to loud sounds, to be processed on an 8-bit processor alone, letting the vehicle recognise its surroundings with 32-bit-level precision.
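The patent's specific approximation is not reproduced in the report, but the standard numerically stable log-sum-exp trick shows why the operation suits narrow datapaths: subtracting the running maximum confines every exp() input to at most zero, so intermediate values stay in a small, fixed range no matter how wide the input's dynamic range is. A sketch:

```python
import numpy as np

def logsumexp(x: np.ndarray) -> float:
    """Stable log(sum(exp(x))): exp() only ever sees values <= 0."""
    m = float(np.max(x))
    return m + float(np.log(np.sum(np.exp(x - m))))

x = np.array([-600.0, 0.0, 800.0])   # enormous spread of magnitudes
print(logsumexp(x))                  # 800.0; naive exp(800.0) overflows float64
```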

Tesla also makes active use of quantization-aware training (QAT). Rather than compressing a 32-bit model after the fact, the strategy is to train the AI under 8-bit constraints from the start, minimising performance degradation even on lower-spec hardware.
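Tesla's training pipeline is not public, but the core QAT mechanic can be shown on a toy one-parameter model: the forward pass uses a "fake-quantized" weight snapped to an int8 grid, so the loss already reflects 8-bit behaviour, while the update treats the rounding as identity (the straight-through estimator):

```python
import numpy as np

def fake_quant(w: float, scale: float = 0.1) -> float:
    """Snap a float weight onto a simulated symmetric int8 grid."""
    return float(np.clip(round(w / scale), -127, 127)) * scale

w, lr, target = 1.0, 0.1, 0.3
for _ in range(50):
    wq = fake_quant(w)           # forward pass sees the quantized weight
    grad = 2.0 * (wq - target)   # d(loss)/d(wq); STE assumes d(wq)/d(w) = 1
    w -= lr * grad               # ...so the update lands on the latent float
print(f"{fake_quant(w):.2f}")    # 0.30 -- the int8 projection hits the target
```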

By integrating these technologies directly into silicon, Tesla has laid the groundwork to become independent of Nvidia's CUDA ecosystem. This could also enable a dual-foundry strategy that uses Samsung Electronics and TSMC at the same time.

Separately, xAI, led by Elon Musk, has become the first company to officially operate an AI training cluster at the gigawatt (GW) scale. That power draw exceeds San Francisco's peak electricity demand, and xAI is seen as having already entered large-scale operations while rivals are still discussing 2027 roadmaps. The industry also expects xAI could become a strong competitor to OpenAI's "Stargate" project, slated for 2027.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.