An image representing South Korea's AI models. [Photo: Shutterstock]

A project to build a domestic AI foundation model has entered its second round with four teams, drawing attention to how the "originality" criteria, which weigh heavily in the evaluations, will be defined.

In the second round, the three incumbent teams — LG AI Research, SK Telecom and Upstage — will carry out development from January to June, while newly added Motif Technologies will develop from February to July. All four teams will undergo the second-stage evaluation around early August. Detailed criteria related to originality have not yet been released.

The Ministry of Science and ICT presented three originality criteria in the first round. It did not recognize derivative models that merely fine-tuned overseas AI models. It also did not recognize methods that reuse external encoder weights in a "frozen" state without reinitializing them. And it required independent, from-scratch development across the full cycle, from model design to pre-training.

In Naver Cloud’s case, it used video and audio encoder weights from Alibaba’s Qwen model without resetting them and did not pass the first-round threshold. Jeong Haedong (정해동), a project manager at the Institute of Information & Communications Technology Planning & Evaluation (IITP), said at the time, “The issue was that it used the encoder in a frozen state as is. There was an internal judgment that it is difficult to recognize such a method as an independently developed model.”
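The distinction at issue — reusing imported encoder weights as-is versus reinitializing them for from-scratch training — can be sketched in plain Python. This is an illustrative toy (all names and values here are hypothetical, not from any actual submission), not a description of how any team's model was built:

```python
import random

def load_pretrained_encoder():
    # Stand-in for weights imported from an external model's encoder.
    # (Hypothetical values for illustration only.)
    return {"weights": [0.12, -0.34, 0.56], "trainable": True}

def freeze(encoder):
    # Pattern the first-round criteria did not recognize: the imported
    # weights are kept exactly as-is and excluded from further training.
    encoder["trainable"] = False
    return encoder

def reset_parameters(encoder, seed=0):
    # From-scratch pattern: discard the imported values and re-initialize,
    # so only the architecture, not the learned weights, is reused.
    rng = random.Random(seed)
    encoder["weights"] = [rng.uniform(-0.1, 0.1) for _ in encoder["weights"]]
    encoder["trainable"] = True
    return encoder

frozen = freeze(load_pretrained_encoder())
scratch = reset_parameters(load_pretrained_encoder())

print(frozen["trainable"])                       # frozen weights are not trained
print(frozen["weights"] == [0.12, -0.34, 0.56])  # imported values unchanged
print(scratch["weights"] == [0.12, -0.34, 0.56]) # values were re-initialized
```

In this sketch, the "frozen" path carries the external model's learned values directly into the final system, which is the property the evaluators judged incompatible with an independently developed model.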

The ministry has a policy of clarifying the criteria in the second round, but has not disclosed specific details so far.

Ryu Jaemyung (류제명), second vice minister at the Ministry of Science and ICT, said right after the first evaluation, "We plan to gather opinions from academia, industry and experts on from-scratch development and make the differentiation and scoring criteria more specific." He added, "In the second stage, we will minimize uncertainty from the starting line." Kim Kyungman (김경만), head of the AI policy office, said at a related briefing, "We will have deeper discussions with the four elite teams on how far it is right to assess originality and where the key issues lie." He added that the ministry would also gather opinions from industry and academic experts.

But participating companies, which need to accelerate development, say the criteria should be set sooner.

According to the industry, the companies in each consortium have begun development by combining their independently held technologies into the final model, but their specific roles and scope of contribution have not yet been finalized. An official at a participating company said, "With the criteria unspecified, there is uncertainty in setting the development direction." In other words, with the originality standard itself still vague, development has started without clarity on whether each participant's technology conflicts with the criteria.

The scope of open-source use is also an issue. The government’s position is that “using open source itself is common, but using completed weights as is cannot be seen as a domestic model,” but the boundary is unclear at development sites. A ministry official said on the open-source standard, “We have already stated the minimum standard for originality,” but maintained the position that detailed criteria will be finalized later.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.