A bill has been proposed to introduce a labeling system for AI-generated content and enable swift responses to false or exaggerated AI advertising.
Cho In-cheol (조인철), a Democratic Party lawmaker on the National Assembly's Science, ICT, Broadcasting and Communications Committee, said on Tuesday he had proposed amendments to the Information and Communications Network Act and the Act on the Establishment of the Broadcasting Media Communications Commission.
As AI-generated content such as deepfakes spreads rapidly through information and communications networks, users increasingly struggle to judge what is authentic. The bill was drafted on the view that institutional mechanisms to regulate distribution channels such as platforms remain inadequate.
The AI Basic Act, set to take effect on Wednesday, stipulates notification and labeling obligations for outputs of generative AI, but this is limited to AI businesses. Cho said there has been a gap in labeling obligations and management responsibility at the distribution and dissemination stage, such as portals and platforms.
The amendment to the Information and Communications Network Act would require platform operators to maintain and manage AI-generated content labels for posters and users, impose a labeling obligation on anyone who directly produces or edits AI-generated content and posts it, and prohibit users from arbitrarily removing or damaging those labels.
In cases where there are major concerns about harm to people's lives or property, the bill would allow the Broadcasting Media Communications Commission, upon request from relevant central administrative agencies such as the Ministry of Food and Drug Safety and the Fair Trade Commission, to ask platforms for temporary corrective measures even before review by the Broadcasting Media Communications Review Committee.
An amendment to the Act on the Establishment of the Broadcasting Media Communications Commission, proposed together as a package, would add false or exaggerated AI advertising in areas directly tied to public health, such as pharmaceuticals, cosmetics and medical devices, to the scope of written review, enabling emergency responses.
Cho said, "Whereas the AI Basic Act dealt with the responsibilities of AI developers, this bill is supplementary legislation that clarifies the responsibility to protect the public at the distribution stage, such as platforms." He added, "A significant part of the bill also aligns with the government's direction of response."