[Photo: Yonhap News Agency]

South Korea's Financial Supervisory Service said on Thursday it will prepare a financial-sector AI risk management framework (AI RMF) that specifies governance principles so financial companies can use artificial intelligence soundly.

AI risks could have a major impact across industry and society, as well as on financial consumers. Concerns are also growing that AI errors could undermine financial stability, yet financial firms' AI governance and risk management remain insufficient.

The FSS said a full survey of 118 financial companies in April last year found that only five banks (25 percent), four insurers (7.5 percent) and one securities firm (2.7 percent) had set up AI-related decision-making bodies.

About 85 percent of financial companies had not established AI ethics principles or risk management standards.

The FSS has therefore drawn up the framework as voluntary, non-binding guidance so the financial sector can use AI soundly while balancing innovation and responsibility.

Financial companies must build governance systems, including setting up decision-making bodies and dedicated organisations for AI risk management.

The chair of the decision-making body must report regularly to the chief executive officer so the CEO can clearly recognise AI-related business plans, strategies and risks.

They must also establish AI-related internal rules, including risk management regulations and guidance, and prepare detailed work manuals. In addition, they will be required to comply with finance- and AI-related laws and regulations, with employees bearing final decision-making responsibility.

They must manage the entire process from AI adoption to use through standardised manuals and also clarify roles and responsibilities between departments.

Financial firms must set up a comprehensive evaluation system based on a risk-based approach to manage risks systematically.

They must design risk assessment systems around the quantitative elements of the Financial AI 7 Principles, reflecting the basic compliance elements of those principles in evaluation items so that risks can be classified for each AI service.

Under the Framework Act on Artificial Intelligence, high-impact AI that affects an individual's rights and duties, such as loan screening, is classified as a high-risk service regardless of its assessed grade.

Financial companies must carry out control and management by risk level and implement procedures for risk control, including reviewing whether to launch ultra-high-risk AI.

The FSS plans to finalise and implement the framework within the first quarter, after gathering opinions through briefings and meetings.

The FSS said it will provide full support, including spreading best practices and inspecting conditions at adopting companies, so that the AI RMF's governance, risk management and risk control processes take root in financial companies' internal control systems.

[Yonhap News Agency]

Keywords

#Financial Supervisory Service #AI RMF #CEO #Risk-based approach #Financial AI 7 Principles
Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.