[Korea CSO Joint Statement]
The government must accept the National Human Rights Commission of Korea’s recommendations on the Enforcement Decree of the AI Framework Act.
- The AI Framework Act must move beyond an industry-biased approach and protect people affected by the risks of AI
On December 21, 2025, the National Human Rights Commission of Korea (hereinafter, the “NHRC”) issued its opinion recommending improvements to the draft Enforcement Decree of the Framework Act on the Advancement of Artificial Intelligence and the Establishment of a Foundation for Trust (hereinafter, the “AI Framework Act”). Ahead of the Act’s scheduled entry into force on January 22, 2026, the NHRC found that the draft Enforcement Decree released by the Ministry of Science and ICT (hereinafter, the “MSIT”) requires substantial revision from a human rights protection perspective. Our organizations strongly urge the government to accept the NHRC’s recommendations on the draft Enforcement Decree of the AI Framework Act.
The NHRC pointed out that the draft Enforcement Decree, announced for legislative notice by the MSIT on November 16, 2025, must serve as a legal and institutional foundation that can effectively protect human dignity and other human rights throughout the entire process of developing and using AI, and it demanded the following improvements.
First, despite the law’s delegation, the draft Enforcement Decree does not specify which areas constitute high-impact AI that poses risks to human safety and human rights. The Enforcement Decree must concretely define the areas of high-impact AI; in particular, AI systems that are prohibited in other countries because they may inherently infringe upon human dignity, such as emotion recognition in workplaces and schools, should at a minimum be included and managed as high-impact AI.
Second, the obligations of high-impact AI business operators lack provisions for the protection of “affected persons,” such as job applicants, patients, and loan applicants, and this omission must be remedied.
Third, the draft Enforcement Decree excludes AI developed or used “solely for defense or national security purposes” from the application of the Act, while allowing the Director of the National Intelligence Service, the Minister of National Defense, and the Commissioner General of the National Police Agency to designate such AI in an arbitrary and overly broad manner. While AI for defense or national security purposes is a means of ensuring national security, it is also an area where the most serious human rights violations, such as infringements on the right to life, are of concern. Exclusions from the application of the Act should therefore be restricted, and such determinations should be subject to deliberation by the National Artificial Intelligence Strategy Committee.
Fourth, the scope of entities subject to safety assurance obligations, such as those concerning general-purpose AI, should not be limited by a cumulative compute threshold of 10²⁶ operations, a scale no AI system in Korea currently reaches; instead, the threshold should be lowered to 10²⁵ operations or higher so that the obligations apply more broadly.
Fifth, among the obligations imposed on high-impact AI business operators, the document retention period should be strengthened from five years to ten years, and obligations that are currently framed merely as a “duty to make efforts to cooperate” should be revised to a binding “duty to cooperate.”
Sixth, AI impact assessments should verify the intended purpose of the system and should also be conducted prior to any significant functional changes, and a legal basis should be established allowing state authorities and other public bodies to request AI business operators to submit documents and materials related to such impact assessments.
Seventh, provisions that establish exceptions to government fact-finding investigations solely through the Enforcement Decree, without delegation by the Act itself, should be deleted.
Eighth, human rights experts must be included in the composition of the National Artificial Intelligence Strategy Committee in a balanced manner alongside industry and technical experts, and they must also be able to participate in the processes of conducting impact assessments and developing guidelines.
Our organizations believe that these recommendations by the NHRC should be fully reflected in the Enforcement Decree of the AI Framework Act. While AI technologies can contribute to improving quality of life, their outputs may contain errors or biases. As AI-based decision-making expands across various areas of life, such as labor, education, social welfare, and policing, it directly risks infringing upon constitutionally guaranteed human dignity and worth, as well as the right to equality and the secrecy and freedom of private life. As civil society repeatedly pointed out during the legislative process of the AI Framework Act, the current law has several limitations in protecting people’s safety and human rights from the risks posed by AI. Yet rather than remedying these shortcomings, the draft Enforcement Decree has raised concerns within civil society by weakening the responsibilities of state authorities and industry actors even below what the Act itself stipulates, while expanding exclusions and exceptions.
Not only the NHRC but also various civil society groups, including those working on digital rights, labor, and women’s rights, have submitted opinions on the draft Enforcement Decree to the MSIT. However, it is doubtful whether the MSIT, as the ministry responsible for the AI Framework Act, is willing to sufficiently consider and reflect these opinions. Immediately after the legislative notice period expired on December 22, the MSIT hastily held an explanatory session on December 24, 2025. While what was discussed at this session is not publicly known, given the manner in which the Enforcement Decree has been prepared thus far, it can be inferred that the session focused on hearing the views of industry rather than those of the people who will be affected by AI.
This situation raises fundamental questions about democracy in the age of AI. The government, which proclaims the goal of becoming an “AI powerhouse,” has consistently favored industry interests in formulating AI policies. Yet AI is a data-driven technology: it is trained on data about people, including personal data, and its effects ultimately fall on people. Although the people affected by AI include job seekers, workers, women, consumers, students, and teachers, they are afforded almost no opportunity to speak out or to participate in decisions about the development and use of AI technologies. Throughout the enactment of the AI Framework Act, spanning the previous administration and National Assembly, civil society has continuously submitted opinions, yet very few of these views have been reflected.
We urge the government to accept the NHRC’s recommendations on the draft Enforcement Decree of the AI Framework Act. The formulation and implementation of all national AI policies, including the subordinate regulations of the AI Framework Act, must move away from industry bias and establish a legal and institutional foundation capable of protecting people affected by the risks of AI.
December 29, 2025
KPTU, National Health Insurance Workers' Union, Korean House for International Solidarity, Digital Justice Network, Cultural Action, Media Christian Solidarity, Digital Information Committee of MINBYUN (Lawyers for a Democratic Society), The Democratic Legal Studies Association, Citizen's Mediation Center Seoul YMCA, Human Rights Education Center 'Deul', Human Rights Education Center ONDA, Institute for Digital Rights, PSPD, Joint Committee for Freedom of Expression and Against Media Repression, Consumers Union of Korea