Kim Myung-joo, head of the AI Safety Research Institute, presents at a meeting to gather opinions on the 'national AI safety ecosystem master plan' at the Korea Federation of Banks building in Seoul on Feb. 6. [Photo: Digital Today]

As the government pushes to draw up a 'national artificial intelligence (AI) safety ecosystem master plan', calls have emerged to define more clearly the rights of people harmed by AI and the procedures for responding to such harm. The argument is that the debate should move beyond technology-centred safety discourse and set out rights guarantees and remedy procedures that victims can tangibly benefit from.

The Ministry of Science and ICT held a meeting on Feb. 6 at the Korea Federation of Banks building in Jung-gu, Seoul, to gather opinions on the national AI safety ecosystem master plan. The meeting was arranged to build consensus on creating a safe AI environment and to hear a range of views from academia, industry and civil society.

The government is currently preparing the AI safety ecosystem master plan in cooperation with academia and industry. A draft includes measures to build systems for a safe AI-use environment, such as developing AI safety agents, providing AI risk response and prevention information by linking relevant agencies, and building an AI safety portal.

Kim Myung-joo (김명주), head of the AI Safety Research Institute, said the institute would establish AI safety evaluation standards and legal and institutional standards, and open its testbed infrastructure for safety inspections to companies and researchers. He also presented plans to build at least three AI safety convergence centres in partnership with universities and to train more than 150 experts spanning multiple disciplines.

In the discussion, questions continued over institutional effectiveness. Lee Ji-eun (이지은), a senior activist at Participatory Solidarity, stressed that "when AI safety-related risks occur, a system is needed for how those at risk can actually resolve the issue and receive help".

Lee argued that beyond macro-level plans such as building portals and training talent, an AI harm relief system should be created that the public can easily access. She also stressed that as automated decision-making systems such as AI agents spread across society, the risk-response process should be made more concrete.

Lee referred to the AI Basic Act that took effect in January and said, "The law includes the concept of 'affected persons', but it does not sufficiently spell out what specific harms may occur and what rights and remedy means are guaranteed."

As a specific example, she cited AI hiring systems at public institutions. Unsuccessful applicants, she said, find it hard to get an explanation of why they were rejected or by what criteria they were evaluated, and procedures for raising objections are also unclear. She stressed the need to build a tighter safety net.

The ministry plans to reflect the views presented at the meeting in the master plan and announce specific details in the first half of the year.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.