Deepfakes and fake news. [Photo: Nanobanana]

A surge in generative AI content is prompting calls for changes to the media review system.

At a forum on institutional reforms to improve the media and content industry, held on Tuesday at the National Assembly in Yeouido by the Digital Future Research Institute and Democratic Party lawmaker Han Min-soo, Kim Hee-kyung (김희경), a senior researcher at the Public Media Research Institute, said the quality of most news data and information content can no longer be assured. She said traditional review systems cannot cope with the AI era in terms of speed, scale and technology.

According to the international academic journal Frontiers, AI-generated fake news sites increased tenfold in 2024 from 2023, with more than 1,200 emerging, while deepfake videos surged 550 percent from 2019 to 2023. Deepfake fraud attempts in North America jumped 2,137 percent, and 50 percent of all web traffic was generated by bots, with 30 to 37 percent classified as malicious. Fake news spreads six times faster than the truth, and while the fact-check delay has been cut to 15 minutes, researchers said that is still enough time for a false narrative to take hold.

◆ Broadcasting Act, Network Act, Film and Video Act... three-pillar system that cannot stop AI content

The current review system is divided among the Broadcasting Act, under which the Korea Communications Standards Commission conducts ex-post review; the Information and Communications Network Act, which allows demands to correct illegal online information; and the Film and Video Act, which provides for pre-classification of films and videos.

Strict pre- and post-review applies to terrestrial broadcasters, but OTT services introduced a self-rating system in 2023, and there has not been a single case of meaningful sanctions since. Kim said that, in theory, the Korea Communications Standards Commission can delete or block obscene, violent and youth-harmful content under the network law, but in practice it focuses only on blocking access to obscene and gambling sites in real-time internet broadcasts, leaving actual sanctions on OTT content virtually absent.

Critics say the limits of reviewing AI content under the current regulatory framework are clear. Review times cannot keep pace with creation times: traditional content governed by the existing systems takes days to weeks to produce, while AI can generate content in seconds, and a single operator can produce millions of posts using thousands of bot accounts.

Detection and accountability are also difficult. According to the European Parliamentary Research Service, more than 50 percent of AI-generated election misinformation is indistinguishable from real journalism, and when deepfake incidents occur, the multi-layered responsibility structure involving AI model developers, tool providers, users, platforms and re-sharers makes enforcement points unclear.

The cost structure has also reversed. Generating deepfakes costs at most a few hundred dollars, but detection costs millions of dollars. Technology has advanced rapidly from the first appearance of deepfakes in 2017 to real-time video generation in 2024, but legislation is lagging 5 to 7 years behind.

◆ "Self-regulation for large players, co-regulation for smaller ones... paradigm shift is urgent"

Major countries overseas are establishing regulations for AI content.

In the United States, 46 states enacted AI-generated media bills last year, and in January the Senate passed the DEFIANCE Act, which grants victims of non-consensual deepfakes the right to sue for up to $250,000. The European Union brought its AI Act into force in August 2024, regulating deepfakes while promoting innovation in generative AI. Singapore supports self-regulation through a verification tool, AI Verify, rather than legal penalties.

India has created an individual law to regulate AI fake content and operates a censorship body called the Fact-Check Center.

Kim proposed directions for improving the review system, including bias audits and mandatory datasets, flexible regulatory mechanisms that respond quickly through rules rather than statutes, adoption of international standard technologies such as C2PA, media literacy education and differentiated self-regulation by operator.

She said large companies should move toward self-regulation centered on transparency reports, while smaller platforms should adopt an approval-based co-regulation model through a private review association. She stressed that the key issue is how quickly authorities can respond, rather than whether to adopt a single law or dispersed laws.

Kim also warned that expanding the government's role as a censorship authority through revisions to the network law could raise tensions over freedom of expression. She said questions remain over who decides what is "false" and by what standards, and over where the bounds of "satire" and "parody" lie. She added that the medium-by-medium compartmentalised regulation created in the 1970s and 1980s cannot stop AI misinformation that is generated in seconds and spreads six times faster than the truth. She stressed the urgency of a paradigm shift that combines technology development, expanded self-regulation, differentiated application by operator and literacy education.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.