A call is growing for user-centred policy design to secure competitiveness in generative AI. A new report argues that expanding adoption in related industries and the public sector is not enough: the base of actual users must grow. Yet the safety and protection legislation that users say is needed for broader AI use remains insufficient.
The Korea Information Society Development Institute (KISDI) published a report titled "An Exploratory Study on Antecedents to the Adoption of Generative AI Services" with those findings. The study, led by research fellow Seonghee Joo (주성희), analysed generative AI usage behaviour and policy demand based on surveys of 2,000 members of the public and 34 experts.
The report said that to secure competitiveness in the field, expanding the base of actual users is essential in addition to adoption and diffusion in related industries and the public sector.
The survey found 62 percent of respondents were using generative AI. Among users, 25.3 percent used it daily and 37.2 percent used it 2 to 3 times a week. ChatGPT was the most used service at 90.6 percent, followed by Gemini at 43.6 percent. The most common purpose was "addressing personal curiosity and exploring knowledge" at 49 percent.
Trust lagged behind usage. Only 44 percent agreed with the statement, "It provides reliable outputs." Some 33.8 percent of respondents said they had directly experienced negative incidents such as false information, copyright infringement or leaks of sensitive information. Statistical analysis also consistently showed that concerns about reliability reduced the frequency of generative AI use.
When asked to select up to three policy needs for promoting generative AI, "strengthening data protection and ethical rules" ranked first on a combined top-three basis at 50.6 percent; 19.8 percent of all respondents chose it as their top priority. "Developing technical supplementary measures to reduce side effects and risks" followed at 18.4 percent, and "strengthening regulation and punishment for misuse and abuse" was next at 17.7 percent. By contrast, supply-side policies such as "supporting corporate R&D" drew just 0.1 percent.
By age group, respondents in their teens, 50s and 60s ranked "technical supplementary measures to reduce side effects and risks" as their top priority, while those in their 20s and 30s had a relatively higher share selecting "strengthening regulation and punishment for misuse and abuse." The younger the respondents, the more likely they were to agree with policies on "personal data sovereignty and compensation."
The report noted that technological progress did not rank among the public's policy priorities for generative AI.
On whether the government's response on AI policy was appropriate, negative assessments at 26.4 percent exceeded positive assessments at 23.2 percent. A majority of 50.5 percent answered "average." The mean response was 2.95 points, slightly below the neutral point of 3 points.
On whether citizen opinions are sufficiently reflected in the AI policy decision-making process, negative responses at 32 percent were 12.7 percentage points higher than positive responses at 19.3 percent. The mean response was 2.84 points.
Experts offered a similar assessment. Those from academia and civil society said the Basic Act on the Development of Artificial Intelligence and Establishment of a Trust Foundation, enacted in 2025, tilts toward industrial promotion, leaving user protection and ethical safeguards relatively thin. They called for predictable, detailed guidelines and concrete user protection measures.
FACTBOX - Supplementary bills keep coming, but few target user protection directly
In the 22nd National Assembly, supplementary legislation has followed since the AI Framework Act took effect. But only a few bills directly target strengthening data protection and ethical rules, the measure users cited as most needed.
According to the National Assembly's bill information system, nine AI-related bills have been proposed this year. Of those, two are directly tied to user protection. A partial revision to the AI Framework Act, introduced by Chun-saeng Chung (정춘생) of the Rebuilding Korea Party as the lead sponsor, would require AI services to notify users that outputs in specialised fields such as medicine, law and finance cannot replace expert judgement, and would impose protective technical measures for services aimed at minors. A separate bill by Sang-hwi Lee (이상휘) of the People Power Party would ban damaging, forging or altering AI output labels and impose administrative fines of up to 30 million won.
The remaining bills take a different direction. Three special bills aimed at supporting the construction and operation of data centres were proposed by lawmakers Chungkwon Park (박충권), Jang-gyeom Kim (김장겸) and Haemin Lee (이해민). Most legislation focuses on industrial promotion or infrastructure support, including strengthening the supply system for training data (Hyun Kim (김현)), supporting AI education and literacy (Young Heo (허영) and Woo-young Kim (김우영)), and requiring a basic plan to respond to changes in the employment environment (Eun-seok Choi (최은석)).
The gap between user demand and legislative direction is not narrowing. The report said the social diffusion of generative AI is closely linked to user experience, trust and protection systems, and urged demand-driven policy design.