Google has signed a contract with the U.S. Defense Department that allows the department to use the company's artificial intelligence models for "any lawful government purpose." The contract's details are classified.
Media outlets including Engadget reported on April 28 that the deal centers on giving the Defense Department broad access to Google's commercial AI models and related infrastructure. The contract language sets the scope of use as "any lawful government purpose." According to an anonymous source, Google and the Defense Department agreed that domestic mass surveillance and autonomous weapons should be subject to appropriate human oversight and control.
The problem is that this limitation did not translate into actual contractual controls. The contract reportedly does not give Google the right to control or refuse how the government uses its AI. That means the company cannot later put the brakes on how the Defense Department uses the technology.
Google stressed that it supports national security. A Google spokesperson told Reuters that providing API access to its commercial models is a responsible approach to doing so. Google also reaffirmed its existing position that large-scale surveillance and autonomous weapons use require appropriate human oversight.
Internal opposition is growing. About 600 Google employees sent an open letter to Chief Executive Sundar Pichai, demanding that the company not provide its AI technology for classified military operations. The employees voiced concerns that the technology could be used "in inhumane or extremely harmful ways."
The letter also warned that misuse of the technology is already threatening lives and civil liberties at home and abroad. The employees said lives have already been lost and civil liberties put at risk through misuse of the technology they are building, and argued that AI systems concentrate power while remaining prone to error.
With the contract, Google joins OpenAI and xAI in the lineup of companies participating in classified AI projects for the U.S. government. By contrast, Anthropic reportedly refused a government request to remove safety measures related to weapons and surveillance, and has since been excluded entirely from federal government use.
The contract illustrates how big tech AI models are rapidly expanding beyond commercial services into national security and military domains. It also suggests that the effectiveness of safety measures, the extent of corporate control, and internal ethical concerns will remain key issues.