OpenAI disclosed details of its agreement with the U.S. Defense Department and stressed AI safety safeguards, TechCrunch reported on Saturday.
OpenAI CEO Sam Altman acknowledged the agreement was pushed through quickly and said it could look bad from the outside.
He said OpenAI put layered safeguards in place to prevent its AI from being used for mass surveillance, autonomous weapons or high-risk automated systems, adding that the company secured safety through cloud deployment, internal staff involvement and strong contractual protections.
OpenAI said the agreement with the Defense Department specifies that its AI models will not be used for mass surveillance, autonomous weapons systems or social credit systems. It said this approach differs from that of other AI companies that have weakened or removed such safeguards.
But Mike Masnick of Techdirt said the agreement could still allow domestic surveillance. He argued that the contract requires compliance with Executive Order 12333, which he said is the authority the U.S. National Security Agency uses to collect Americans' information through overseas operations.
In response, Katrina Mulligan, OpenAI's head of national security partnerships, said deployment architecture matters more than contract clauses. She said OpenAI deploys only through a cloud API, so its AI is not directly integrated into weapons systems.
Altman said the agreement moved quickly but was intended to ease tensions between the AI industry and the government. He said that if the deal becomes a catalyst for that easing, OpenAI would be seen as a company that endured pain on behalf of the industry.