OpenAI’s GPT-5.5-based Codex has been deployed on Nvidia Blackwell systems, and more than 10,000 Nvidia employees have started using it.
TechRadar reported on April 24 that, according to Nvidia, the rollout cut costs to one-35th of GPT-4o levels and increased token output per megawatt 50-fold.
The rollout differs from a typical in-house chatbot launch. Codex was previously known as a Copilot-style coding assistant for GitHub users, but it has now expanded into an agent-type assistant that performs work across non-technical departments including product, legal, marketing, finance, sales, human resources and operations. Nvidia presented it as a deployment case in an AI-native corporate environment.
The infrastructure is at the core of the story. Codex runs on Nvidia’s GB200 NVL72 rack-scale system, and Nvidia said this configuration enabled the large cost savings and higher token output. It stressed that operating efficiency has moved beyond simple question-and-answer to a level that can handle real corporate tasks.
Nvidia also disclosed workplace reactions. Nvidia employees described Codex’s outputs as “astonishing” and “life-changing.” Nvidia said on its blog, “Debugging cycles that used to take days are finishing within hours,” and “Complex multi-file codebase experiments that needed weeks have become possible overnight.” It added that more teams are deploying end-to-end functions using only natural-language prompts, reliability is improving compared with earlier models, and wasted work has fallen.
Nvidia Chief Executive Jensen Huang also made the nature of the rollout clear. “Chatbots answer questions, but agents do work,” he said, signaling that Nvidia views Codex not as a simple conversational AI but as an enterprise tool that directly handles practical work.
OpenAI CEO Sam Altman also signaled his expectations by sharing Huang’s email on the social media platform X, formerly Twitter. In the email, Huang said Codex would accelerate users’ work and make previously impossible tasks possible, and assessed that GPT has become a stepping stone toward reasoning, planning and tool use.
The rollout shows that enterprise AI competition is shifting beyond model performance itself toward real work automation and operating-cost reductions. Nvidia said Codex can handle actual company data in a secure cloud sandbox. It said employees can interact with AI agents that safely automate entire workflows, beyond simply talking with a chatbot.
Amid this trend, competition between OpenAI and Anthropic is also intensifying. Anthropic’s Claude Mythos has recently drawn widespread attention in cybersecurity, but the two models’ aims differ somewhat. While Claude Mythos focuses on large-scale zero-day vulnerability patching, Codex puts weight on connecting real enterprise data and workflows and automating the organization as a whole, including non-technical departments.
A key point is whether these results can spread beyond Nvidia’s internal case to other large corporate environments. If Nvidia’s cited cost and power-based productivity figures hold, companies may be more willing to incorporate generative AI into always-on operations rather than keeping it at the experimental stage.
Altman wrote on X: “We tried a new thing with NVIDIA to roll out Codex across a whole company and it was awesome to see it work. Let us know if you’d like to do it at your company!”