OpenAI has allowed the U.S. Defense Department to use its AI, but deployment remains closer to a testing stage. [Photo: Shutterstock]

[Digital Today Kyung-min Hong (홍경민), intern reporter] OpenAI has joined hands with the U.S. Defense Department to allow its artificial intelligence (AI) to be used in classified environments. The move marks a sharp shift from the company's previous stance of restricting military use and is expected to have significant repercussions, but the timing and scope of actual battlefield deployment remain unclear.

Technology Review reported on March 16 local time that OpenAI recently signed an agreement with the U.S. Defense Department, opening the door for its AI to be used in classified systems as well.

OpenAI CEO Sam Altman has said the technology will not be used directly to develop autonomous weapons, but critics argue this amounts only to following relatively relaxed guidelines drawn up by the military itself. Questions have also been raised about the effectiveness of a pledge to limit use for domestic surveillance, given the lack of clear standards.

Use in combat areas currently remains closer to testing than to full-scale adoption. A Defense Department official said discussions are under way on a workflow in which human analysts enter a list of strike candidates along with supporting information, and the AI synthesizes the data and presents priorities. The results must be reviewed by humans, and the system has not reached a level where the AI makes decisions on its own. The possibility of combining a conversational AI interface with existing video analysis systems has also been raised, but whether it will be implemented has not been confirmed.
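
The workflow described here, analysts supplying a candidate list and supporting data, a model producing a ranked synthesis, and a person approving or rejecting the result, can be illustrated with a minimal sketch. The data model, prompt, and function names below are hypothetical illustrations assuming the standard OpenAI Python client; they are not drawn from the Defense Department system the official described.

```python
# Hypothetical human-in-the-loop prioritization sketch; illustrative only,
# not the actual system discussed in the OpenAI / Defense Department agreement.
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


@dataclass
class Candidate:
    name: str
    notes: str  # analyst-entered supporting information


def rank_candidates(candidates: list[Candidate]) -> str:
    """Ask the model to synthesize analyst input into a ranked, annotated list."""
    listing = "\n".join(f"- {c.name}: {c.notes}" for c in candidates)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Rank the items below by priority and give a "
                        "one-line rationale for each."},
            {"role": "user", "content": listing},
        ],
    )
    return response.choices[0].message.content


def human_review(draft: str) -> bool:
    """The model output is advisory; a human must approve or reject it."""
    print(draft)
    return input("Approve this ranking? [y/N] ").strip().lower() == "y"
```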

Amid rising tensions with Iran, the technology is also being discussed as potentially applicable to actual combat. The approach of analyzing massive amounts of text, image and video data simultaneously to derive strike priorities is drawing attention because it could support faster decision-making than before. As long as humans remain responsible for verifying the results, debate over the trade-off between speed gains and accountability is expected to continue.

Cooperation in drone defense also remains at an early stage. OpenAI has been pursuing AI applications for identifying and responding to attack drones in partnership with military drone company Anduril since the end of 2024. The possibility has been raised that OpenAI models could be combined with Lattice, Anduril's military command-and-control platform, but whether and how widely this will be deployed has not been disclosed. The companies say the work does not conflict with existing policy because it targets the drones themselves, but debate over the scope of application remains.

In administrative areas, uses are comparatively concrete. The Defense Department has introduced generative AI for back-office tasks such as contracts, logistics and procurement through the GenAI.mil platform, and OpenAI models are being used on a trial basis for drafting policy documents and other administrative support. This work is also limited mainly to unclassified tasks and is not directly tied to combat decision-making.

Interpretations of the background to OpenAI's move also differ. Some analysts see it as a strategy to secure new revenue sources as the cost of training large-scale models rises, while others say it reflects Altman's view that democratic countries need access to powerful AI to maintain a competitive edge.

Ultimately, the agreement is seen as an early test case for gauging how AI will be introduced across the military domain. So far its use has remained auxiliary and experimental, but its impact on actual combat and decision-making structures could gradually grow if deeper technical integration and a wider scope of application follow. Debate over safeguards and accountability is also expected to expand.
