
Controversial Cases on AI in Republic of Korea


Summary

In South Korea, there have been cases where automated algorithms and AI have raised concerns about negative impacts on human rights. In particular, the opacity of and potential discrimination by recruitment AI have been debated. Privacy violations and hate speech by the AI chatbot Lee Luda have also raised concerns. The algorithm manipulation of Kakao Taxi and the Ministry of Justice's unauthorized provision of facial information for training its immigration identification and tracking AI have likewise generated considerable social debate. In some cases, regulatory agencies such as the Korea Fair Trade Commission and the Personal Information Protection Commission have intervened and imposed administrative sanctions, but the government and some members of the National Assembly have continued attempting to deregulate under the guise of protecting and fostering the domestic AI industry.

 


 

Introduction 

 

  • Artificial intelligence (AI) products of Korean big tech companies, often referred to as "indigenous portals" in Korean society, ranging from home appliances to automated algorithms, have been rapidly dominating the market. However, there has been no effective legal intervention to prevent their negative impact on the market and on fundamental rights, including the right to privacy. 
    • The Korea Fair Trade Commission (hereafter “KFTC”) has been trying to regulate the unfairness of the proprietary algorithms of big tech companies such as Naver and Kakao, but investigations normally take a long time and violations are difficult to prove. In 2020, the KFTC determined that NAVER Shopping's self-preferencing conduct was illegal and imposed a fine of KRW 26.6 billion. This was the first case to apply the Fair Trade Act to unfair conduct carried out through algorithmic manipulation by an online platform, but NAVER filed a lawsuit against the decision, which was pending before the Supreme Court as of 2023. 
    • The Act on Promotion of Information and Communications Network Utilization and Information Protection has protected personal information since 1999. However, it has mainly focused on personal information leakage, while misuse for other purposes, such as companies' collection and use of behavioral information, has been relatively loosely regulated.
    • In 2011, the Personal Information Protection Act (hereafter “PIPA”) was enacted as the basic law for personal information protection, and the Personal Information Protection Commission (hereafter “PIPC”) was established. However, in 2020, in response to the needs of the new technology industry, the so-called "Data 3 Acts", three personal data protection laws, were amended to relax personal data protection regulations, making it more difficult for data subjects to exercise their rights over pseudonymized personal data.
  • Major countries such as the European Union and the United States are pursuing legislation to regulate high-risk AI. 
    • Currently, no legislation exists in the Republic of Korea to prohibit or regulate high-risk AI, nor are there specific requirements for transparency and accountability in public procurement of AI for citizens.
    • In particular, the Republic of Korea has laws that prohibit discrimination based on characteristics such as gender, disability, and age. However, there is no comprehensive anti-discrimination law, making the standards for regulating AI bias and discrimination unclear. The Constitution and the National Human Rights Commission Act declare the prohibition of discrimination in principle and provide avenues for relief, but it is unclear whether discrimination by AI can be effectively regulated under them. In addition, there is no legal system to restrict bias and discrimination by AI that affects a specific group of people rather than a specific individual.
  • Public institutions have been introducing automated algorithms and AI, some of which are high-risk AI. However, there is no legal system in place to ensure non-discrimination, legality, due process, and redress of rights.
    • In 2018, the Korea Student Aid Foundation, a quasi-governmental organization that provides student loans to university students, analyzed the factors that affect student loan delinquency through a "Decision Tree Analysis" and published a report on "Characteristics of Student Loan Delinquency". 
      • The publication of this analysis highlighted the issue of discrimination against young people based on their salary level or university. 
      • However, in Korean society, education and region have historically been significant axes of discrimination. Pattern analysis that does not take this into account may stigmatize young people from certain groups and lead to further bias and discrimination when used in decision-making, including in financial services; a minimal synthetic sketch of this dynamic follows the figure below. 

 

<Figure> Analyzing delinquency patterns of student loans at the Korea Student Aid Foundation

* Source: JoongAngSunday (Sept. 29, 2018)
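
To make this concern concrete, the following is a minimal, purely illustrative sketch using synthetic data (not the Foundation's actual data, model, or variables). It assumes a simple scikit-learn decision tree and shows how, when the true driver of delinquency (here, income) is correlated with group attributes such as university tier or region, a tree fitted on those group attributes presents them as delinquency "characteristics".

# Illustrative sketch with synthetic data: a decision tree reads group
# attributes (university tier, region) as "delinquency characteristics"
# because they are correlated with the real driver (income), which the
# tree never sees. Variable names and thresholds are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 5_000

university_tier = rng.integers(1, 5, n)   # 1 = most prestigious, 4 = least
region = rng.integers(0, 2, n)            # 0 = capital area, 1 = non-capital
# Income (the true driver) is lower, on average, for higher tiers / non-capital.
income = rng.normal(2_000, 500, n) - 120 * university_tier - 150 * region
delinquent = (income + rng.normal(0, 300, n)) < 1_400

# Fit the tree only on the group attributes, as a report focused on
# "characteristics" of delinquent borrowers effectively does.
X = np.column_stack([university_tier, region])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, delinquent)
print(export_text(tree, feature_names=["university_tier", "region"]))

If rules like these were reused in decision-making, for example in financial services, young people from the flagged groups would be systematically disadvantaged regardless of their individual circumstances.
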

 

      • In 2019, the city of Seoul pushed to introduce so-called "robot investigators" using AI, but the plan was halted after the PIPC deemed it illegal.
        • Officials in the city of Seoul, acting as special judicial police officers, investigate cases related to 'crimes against the people's livelihood,' including those in food, healthcare, trademarks, loans, door-to-door sales, and real estate sectors, and send cases to prosecutors. 
        • The robot investigator automatically collects and categorizes tens of thousands of online posts, both public and private, on the premise that such crimes are often committed through social media. This means that even if an unspecified number of non-Seoul residents post something on social media that includes terms such as "Botox" or "special offer for newlyweds," the robot investigator will collect these posts and review them for criminal relevance (a minimal sketch of this kind of keyword-driven collection follows below). 
        • The PIPC viewed the robot investigator's operation as akin to an 'online stop-and-frisk' and determined that it constituted an unlawful collection of personal information without a legal basis.
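
The following is a minimal, hedged sketch of the kind of keyword-driven collection described above; the keywords, categories, and sample posts are illustrative assumptions, and the actual system reportedly categorized far larger volumes of posts by crime type.

# Minimal sketch of keyword-driven collection: any post containing a
# monitored keyword is swept up for review, regardless of who posted it.
# Keywords and categories here are illustrative assumptions.
KEYWORDS = {
    "healthcare": ["botox", "필러"],
    "door_to_door_sales": ["special offer for newlyweds", "신혼부부 특가"],
}

def flag_posts(posts):
    """Return (category, post) pairs for posts containing a monitored keyword."""
    flagged = []
    for post in posts:
        text = post.lower()
        for category, words in KEYWORDS.items():
            if any(w in text for w in words):
                flagged.append((category, post))
                break
    return flagged

posts = [
    "We loved our honeymoon photos!",
    "Botox event this week, DM for prices",  # collected even if the author is not a Seoul resident
]
print(flag_posts(posts))  # -> [('healthcare', 'Botox event this week, DM for prices')]

Because the filter is applied to everyone's posts, collection is not limited to suspects or to Seoul residents, which is the basis of the 'online stop-and-frisk' analogy.
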
      • In 2021, the city of Bucheon, Gyeonggi-do, developed a facial recognition tracking system that recognizes and tracks the faces of COVID-19 cases and contacts in real time on all public CCTVs in the city and automatically collects their cell phone numbers from nearby base stations. However, the implementation was suspended after the controversy was reported in foreign media.
      • The General Act on Public Administration, enacted in 2021, provides that "an administrative authority may impose a disposition using a fully-automated system (including systems in which artificial intelligence technologies are employed): Provided, That the same shall not apply to dispositions imposed at its discretion" (Article 20). This enables fully automated administrative dispositions using AI.
        • The PIPA, as amended in 2023, establishes a provision on the rights of data subjects with respect to fully automated decisions, including the right to refuse such decisions or to request an explanation when they significantly affect the data subject's rights or obligations (Article 37(2)). However, this provision does not cover automated dispositions by administrative authorities as allowed under Article 20 of the General Act on Public Administration. This exclusion creates legal ambiguity regarding the exercise of data subjects' rights in such cases. 
      • In 2023, a taxi with a maximum speed of 110 km/h was ticketed by the police for traveling at 142 km/h. A media investigation revealed that the automated enforcement equipment introduced by the local police department had an error and had measured the speed of a vehicle in the next lane. It is estimated that there may have been more victims in the two years since the equipment was introduced. 
        • Experts point out that errors in equipment and measurement methods should be monitored regularly.
        • AI used by police to measure speed, recognize license plate numbers, and issue tickets is considered high-risk.
  • Below, we present in more detail some of the cases that have sparked controversy in Korean society, particularly around high-risk recruitment AI, general purpose AI chatbots, platform labor, and AI for immigration control.

 

Concerns about opacity and discrimination in recruitment AI

 

  • The use of recruitment AI has increased dramatically in recent years among both public and private organizations, as recruitment corruption in the public sector has become a social issue and the need for contactless work has spread during the COVID-19 pandemic. Currently, AI recruitment tools are mainly used for document review and video interviews. This involvement of AI in the critical decision-making process of hiring poses a high risk. However, companies and organizations using recruitment AI are failing to be transparent and accountable: they do not prevent bias or explain the reasons for their decisions, causing harm to prospective workers.
    • As of July 2022, 40 out of 252 large private companies (15.9%) have adopted AI interviewing, according to a survey by the Ministry of Employment and Labor and the Korea Employment Information Service. As of 2022, more than 40 public organizations had also adopted AI for recruitment.
    • Organizations and businesses that adopt AI recruitment tools expect them to make the hiring process faster and more efficient, while also identifying individual characteristics and potential that humans are known to have difficulty objectively determining.
    • However, concerns about the opacity and bias of recruitment AI are particularly high among young job applicants. In particular, questions have been raised about how results for metrics such as 'ambition' and 'likability' are calculated and factored into hiring decisions. There is also concern about whether dialect or physical characteristics are unfairly penalized. 

 

<Figure> Examples of assessments by AI recruitment tools

* Source: Hankyoreh21 (Oct. 23, 2020)

 

  • Because AI training typically relies on human-generated data, the decisions it makes may reflect human discrimination and bias. As a result, there is a risk that recruitment AI could exacerbate decision-making bias rather than solve the problem of unfair hiring and discrimination. 
    • Developers of AI video interview tools claim these tools can automatically process an applicant's face and voice. They analyze outward and non-verbal characteristics such as facial expressions, emotions, and likability, and also evaluate language habits, communication skills, attractiveness, credibility, and logic. Some tools have applicants play games based on brain science and neurology to assess correct or incorrect answers, response speed, and decision-making and learning speed.
    • These AI recruitment tools are said to be trained and developed based on the characteristics of incumbent "high performers" and "low performers," as well as the evaluations of hiring professionals such as human resource managers at large companies.
    • However, historical and socially entrenched discrimination and bias, such as gender, age, geography, physical condition, economic status, and educational and academic background, may be reflected in data and AI models. Moreover, the preferences of dominant job market demographics, often favored by large organizations, could influence AI decision-making regarding these groups.
    • Even if developers exclude from the model's variables the direct characteristics that may lead to discrimination and bias, the system may still produce indirectly biased results through proxy variables. For example, it is possible to effectively exclude applicants of a certain gender, without ever asking for gender, if the training data encodes attributes strongly correlated with it (a minimal sketch of this effect follows this list).
    • Lack of representativeness of training data also contributes to biased results. If the data on faces, expressions, and behaviors of people with disabilities are insufficient or absent, and the algorithm lacks adequate learning about them, it might negatively judge certain behaviors or expressions of these individuals. In this way, AI-based decision-making can be biased against minorities who are less represented in the data.
    • In particular, the AI emotion analysis features included in AI interview tools have faced criticism from international bodies, including the United Nations High Commissioner for Human Rights, on the ground that they lack scientific evidence and should be discontinued. The European Parliament's negotiating position on the AI Act, adopted on 14 June 2023, would ban emotion recognition in workplaces and educational institutions.
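
As a concrete illustration of the proxy-variable problem discussed above, the following is a minimal sketch on synthetic data (not any vendor's actual system). It assumes scikit-learn's logistic regression; the model never receives the gender column, yet its recommendation rates still differ sharply between groups because a correlated proxy feature carries the same information and the historical labels reflect past bias.

# Minimal synthetic sketch of proxy-variable bias: the protected attribute
# (gender) is excluded from the features, but a correlated proxy and biased
# historical labels reproduce the disparity anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B (never given to the model)
skill = rng.normal(size=n)                           # legitimate, gender-neutral signal
proxy = rng.normal(loc=1.5 * gender, scale=1.0, size=n)   # a feature strongly correlated with gender

# Historical hiring labels reflect past bias against group B.
past_hired = (skill - 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])                  # gender column deliberately excluded
model = LogisticRegression().fit(X, past_hired)

pred = model.predict(X)
print("recommendation rate, group A:", round(pred[gender == 0].mean(), 3))
print("recommendation rate, group B:", round(pred[gender == 1].mean(), 3))

The same mechanism applies to the representativeness problem: if a group is rare or absent in the training data, the model's behavior for that group is effectively unconstrained and can be arbitrarily poor.
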
  • While human hiring processes are prone to discrimination and opacity, AI systems that learn from these processes inherit these flaws. In fact, certain AI techniques, such as deep learning, are even more opaque than human decision-making and cannot explain the reasons for a rejection. 
    • AI used to make important decisions about people, such as hiring, could have a serious negative impact on individual human rights if it cannot explain the reasons for its decisions. 
    • The inability of high-risk AI to explain its decisions undermines organizational accountability. It could also lead to more widespread and covert employment discrimination, and undermine trust in society at large.
  • In particular, the use of AI in public decision-making requires high transparency and accountability. However, the current proliferation of AI recruitment tools in public institutions does not guarantee the corresponding explainability and fairness.
  • On July 7, 2020, the Korean Progressive Network Jinbonet requested information disclosure from 13 public institutions that used AI recruitment tools and demanded the disclosure of data that would confirm ▲ whether public institutions followed fair recruitment procedures, ▲ whether data protection rules were violated, and ▲ whether discrimination and bias occurred.
    • However, many public institutions have not disclosed the data, citing reasons such as 'trade secrets' or lack of access to data due to outsourcing of AI interviews. 
    • Even public institutions that disclosed their data relied on AI companies for their hiring processes and decisions using AI. Most public institutions only received applicant evaluation results from AI companies and did not directly evaluate or oversee the data and algorithms of their AI recruitment tools.
    • In other words, public institutions were using AI in an area with a high risk of human rights violations, such as recruitment, without adequately securing meaningful information themselves or disclosing it to those affected. This means that public institutions are automating critical decisions without the accountability needed to ensure transparent and reasonable explanations for the results. 
  • On October 10, 2020, the Korean Progressive Network Jinbonet filed a lawsuit against a public institution that did not respond to the information disclosure request in good faith, seeking cancellation of its refusal to disclose the information. 
    • On 7 July 2022, the court ruled partially in favor of the applicant, finding that the refusal to disclose the information was unlawful.
    • The court dismissed the claims for information that the defendant institution argued it did not hold because it was held by private companies, confirming that the institution did not possess key information about its recruitment AI. Institutional measures are needed to address the fact that public institutions are using high-risk AI for hiring decisions while failing to evaluate those tools, or even to minimize their use, in order to ensure fairness and accountability.
    • There was also a limitation in that civil society organizations outside the institution, unlike applicants themselves, could not verify actual recruitment bias through an information disclosure request. As a result, there was no way to determine whether any recruitment discrimination prohibited by current laws, such as the Fair Hiring Procedure Act, had occurred. 
  • As the controversy over recruitment AI grows, the Ministry of Employment and Labor has begun to investigate the situation and to consider institutional measures.
  • Persons subject to AI-driven decisions in high-risk areas such as recruitment or by public authorities should be provided with an explanation of the reasons for the decision and guaranteed the right to appeal and remedy. 
    • Providers and users of high-risk AI, including public sector AI and recruitment AI, must prepare and maintain appropriate documentation. In particular, regulators must be able to effectively investigate and intervene in cases of unlawful discrimination.

 

Challenges with AI Chatbots: Privacy and Hate Speech

 

  • The AI chatbot Lee Luda is a Facebook Messenger-based chatbot service launched in December 2020. It gained widespread popularity for its ability to closely mimic the speech patterns of real women in their 20s, and 750,000 people used it within two weeks of its launch. However, the chatbot faced controversy soon after launch over the sexual objectification of its persona and hate speech directed at women, people with disabilities, LGBTQ individuals, and black people. In particular, violations of the PIPA were confirmed in the development and operation of the service, and the authorities fined the company in April 2021.

 

<Figure> Example of Lee Luda’s hate speech

In this dialogue, the AI chatbot Lee Luda makes offensive comments about the LGBTQ community, women, people with disabilities, black people, and other social minorities.

* Sources: MoneyToday (Jan. 9, 2021); Yonhap News Agency (Jan. 10, 2021); Aju Korea Daily (Jan. 11, 2021)

 

  • ScatterLab Co., the developer of Lee Luda, was criticized for using private KakaoTalk conversations of users, collected through other services it had previously launched, to develop Lee Luda. The company temporarily suspended the service after the PIPC launched an investigation.
  • On April 28, 2021, the PIPC announced the results of its investigation and imposed fines and penalties of approximately 100 million won (about 76,000 USD) on ScatterLab. The investigation confirmed that ScatterLab had used the KakaoTalk conversations collected through its services 'Textat' (launched in February 2013) and 'Science of Love' (launched in May 2016) to develop and operate Lee Luda. The PIPC determined that the development and operation of the service was illegal because personal information was used beyond the purpose of collection.
    • ScatterLab used over 9.4 billion sentences of KakaoTalk conversations from about 600,000 users to train its AI model. The company did not take any measures to delete or encrypt personal information such as names, cell phone numbers, and addresses contained in the conversations (a minimal sketch of the kind of redaction that was missing follows this list).
    • In operating the service, about 100 million sentences from the KakaoTalk conversations of women in their 20s were extracted and added to a response database, from which Lee Luda selected sentences to generate its replies. However, the pseudonymization of personal information in this database was highly inadequate.
    • ScatterLab included a vague statement in the privacy policies of "Textat" and "The Science of Love," which collected the KakaoTalk conversations. These policies indicated that the data was collected for "new service development," and ScatterLab assumed that the user consent was implied by logging in. However, the PIPC found the consent process inadequate, as it failed to clearly demonstrate users' agreement to use their data for developing Lee Luda. The PIPC also noted that the vague phrase 'new service development' in the privacy policy did not reasonably allow users to anticipate that their conversations would be used specifically for developing Lee Luda.
    • In particular, ScatterLab illegally collected and used the personal information of 200,000 children under the age of 14, as well as sensitive information contained in the conversations, and also kept and used the personal information of people who had withdrawn their membership or had not used the service for more than a year. 
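
Below is a minimal sketch of the kind of redaction step the PIPC found to be missing: masking obvious identifiers before chat logs are reused. The regex rules are illustrative assumptions; adequately pseudonymizing Korean chat data would require much more (for example, detecting names and addresses, which simple patterns cannot catch).

# Minimal sketch of redacting obvious identifiers from chat sentences before
# reuse as training data. Patterns are illustrative; names, addresses, and
# other free-text identifiers need far more than regular expressions.
import re

PHONE = re.compile(r"01[016789]-?\d{3,4}-?\d{4}")          # Korean mobile numbers
RRN = re.compile(r"\d{6}-?[1-4]\d{6}")                     # resident registration numbers
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(sentence: str) -> str:
    """Replace directly identifying strings with placeholder tokens."""
    sentence = PHONE.sub("<PHONE>", sentence)
    sentence = RRN.sub("<RRN>", sentence)
    sentence = EMAIL.sub("<EMAIL>", sentence)
    return sentence

print(redact("Call me at 010-1234-5678 or mail test@example.com"))
# -> "Call me at <PHONE> or mail <EMAIL>"
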
  • This case shows that AI developed and operated as a service must comply with current laws, including the PIPA, and gain the trust of data subjects. In the context of AI and data protection, there is growing controversy over pseudonymized data. The PIPA allows the processing of pseudonymized information without the consent of the data subject for purposes such as 'scientific research'. The PIPC determined that it was illegal for the company not to pseudonymize the training data for Lee Luda, and that the database used to operate the service was insufficiently pseudonymized. 
  • However, even if the company effectively pseudonymizes data for AI development and training, it's still unclear whether this falls within the legal scope of 'scientific research.' This ambiguity leaves a gray area in the current legal framework. 
  • Hundreds of victims whose KakaoTalk conversations were used without authorization have filed a damages lawsuit against the company, which is still in progress. The company released Lee Luda 2.0 in October 2022 after an inspection by the PIPC. 
  • Bias in AI chatbots has the potential to reinforce the prejudices that society holds against certain groups. In particular, while chatbots themselves are not considered high-risk, their general-purpose nature means that they can be used in high-risk areas. In fact, ScatterLab's chatbot engine was set to be embedded in another company's AI speaker, and AI speakers are now widely used in school education and are being deployed in elderly homes through public services. If these general-purpose AI chatbots are used in high-risk areas such as school tests and assessments or public services, hateful dialogue or discriminatory decision-making could lead to very serious human rights violations.

 

Manipulation of KakaoT's dispatching algorithm

 

  • KakaoT, often referred to as the 'national taxi app' in the Republic of Korea, covertly discriminated against non-affiliated taxi drivers for years. Despite the company's denials, the KFTC's investigation uncovered manipulation of its dispatching algorithms, including the 'AI dispatching' system, leading to strong corrective measures and fines against the company.
  • KakaoT is a platform service that connects taxi passengers and drivers, and was launched in March 2015. As of 2021, the app has 31 million subscribers in the Republic of Korea (the total population of the Republic of Korea is 51 million) and 10 million monthly active users (MAUs).
    • KakaoT is used by 9 out of 10 taxi drivers in the Republic of Korea, and Kakao Mobility, the company operating KakaoT, has become the dominant operator in the regular taxi hailing market, controlling 92.99% of the market as of 2021. 
  • On March 20, 2019, a subsidiary of Kakao Mobility launched 'KakaoT Blue', a paid affiliated taxi service charging a fee of 99,000 won per month, separate from the general KakaoT service. Suspicions of discrimination between 'regular' and 'affiliated' taxis soon began to surface. Seoul and Gyeonggi-do also announced that their own investigations had found possible discrimination. Four taxi business organizations filed a complaint with the KFTC, claiming that "regular taxis are not dispatched even if they are within a short distance of the requesting passenger, while affiliated taxis that are further away are dispatched instead."
    • In response, Kakao Mobility denied all allegations, claiming that 'no unfair distribution had been made.' Kakao Mobility disclosed the working principles of its 'AI dispatching system' and established a 'Mobility Transparency Committee' composed of external experts to conduct its own investigation. The committee analyzed all 1.7 billion taxi call dispatch records and announced that there was no discrimination in the AI dispatching logic. 
  • On February 14, 2023, the KFTC announced the results of a three-year investigation, confirming that Kakao Mobility had actively manipulated the dispatching algorithm of the KakaoT app, funneling calls preferentially to affiliated taxis and giving them favorable treatment. The KFTC imposed a fine of 25.7 billion won (approximately $20 million) on Kakao Mobility, along with a corrective order to stop the discriminatory dispatching. 
    • Kakao Mobility secretly operated an algorithm that funneled 'regular' calls to affiliated drivers from the time it launched its affiliated taxi service on March 20, 2019. 
    • Initially, from March 2019 to April 2020, Kakao Mobility applied a rule that prioritized dispatching to affiliated drivers whenever one was within a certain pickup time (e.g., 6 minutes), even if a non-affiliated driver was closer (e.g., within 0-5 minutes). A simplified sketch of this logic, and of the later acceptance-rate mechanism, follows this list.

 

<Figure> How KakaoT funneled calls to affiliated taxis

* Source: KFTC press release (Feb. 14, 2023)

 

    • 'AI dispatching' was introduced in April 2020. This system ensures that drivers with an acceptance rate of 40-50% or higher, in practice overwhelmingly affiliated drivers, are allocated rides before others. Prior to its introduction, Kakao Mobility had managed affiliated drivers' acceptance rates up to 70-80%, while non-affiliated drivers' acceptance rate was around 10%. The acceptance rate itself was calculated in a way that favors affiliated drivers: if a call goes unanswered, non-affiliated drivers are counted as having 'declined' it, whereas the same call is not counted as a decline for affiliated drivers (see the sketch following this list).
    • In addition, since February 2020, the company has excluded affiliated drivers from dispatch for short-distance calls of under 1 kilometer, which are less profitable.
    • This resulted in higher fare income for affiliated drivers than for non-affiliated drivers and a significant increase in the number of affiliated drivers. As a result, Kakao's share of the affiliated taxi market, where it competes with UT (a joint venture between Uber and T-Map Taxi) and others, skyrocketed from 14.2% in 2019 to 73.7% in 2021.
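
The following is a simplified sketch of the two mechanisms described above, as characterized in the KFTC's findings; it is not Kakao Mobility's actual code, and the field names, cutoffs, and thresholds are illustrative assumptions.

# Simplified sketch of the two mechanisms described in the KFTC's findings
# (illustrative only, not Kakao Mobility's actual code).
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    affiliated: bool
    eta_min: float           # estimated pickup time for the current call
    calls_accepted: int = 0
    calls_declined: int = 0

def dispatch_2019(drivers, cutoff_min=6.0):
    """Phase 1 (Mar. 2019 - Apr. 2020): any affiliated driver within the
    pickup-time cutoff wins the call, even if a non-affiliated driver is closer."""
    affiliated_near = [d for d in drivers if d.affiliated and d.eta_min <= cutoff_min]
    pool = affiliated_near or drivers
    return min(pool, key=lambda d: d.eta_min)

def record_unanswered_call(driver):
    """The asymmetry in the acceptance-rate bookkeeping: an unanswered call
    counts as a 'decline' only for non-affiliated drivers."""
    if not driver.affiliated:
        driver.calls_declined += 1

def acceptance_rate(driver):
    total = driver.calls_accepted + driver.calls_declined
    return driver.calls_accepted / total if total else 1.0

def dispatch_2020(drivers, rate_threshold=0.4):
    """Phase 2 (from Apr. 2020): drivers above the acceptance-rate threshold
    (in practice, overwhelmingly affiliated drivers) are dispatched first."""
    eligible = [d for d in drivers if acceptance_rate(d) >= rate_threshold]
    pool = eligible or drivers
    return min(pool, key=lambda d: d.eta_min)

# Example: under phase 1, the closer non-affiliated driver loses the call.
drivers = [Driver("non_affiliated_1", affiliated=False, eta_min=2.0),
           Driver("affiliated_1", affiliated=True, eta_min=5.0)]
print(dispatch_2019(drivers).driver_id)   # -> "affiliated_1"

Because only non-affiliated drivers accumulate 'declines' for unanswered calls, their measured acceptance rates fall below the threshold over time, so the phase 2 rule ends up producing much the same funneling as the explicit phase 1 rule.
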
  • Kakao Mobility filed an administrative lawsuit against the KFTC in July 2023, and the trial is underway.
  • Taxi-hailing platforms' algorithms are high-risk because they monitor and assign work to participating taxi drivers. However, Kakao Mobility concealed its algorithmic manipulation and discrimination under the name of 'AI dispatching' and deceived taxi workers and consumers. The company's 'voluntary' disclosure of its algorithm to the Mobility Transparency Committee proved ineffective: only after an investigation by a specialized regulator, the KFTC, were the facts confirmed and corrections ordered.

 

Unauthorized provision of facial information for AI immigration identification and tracking system by the Ministry of Justice

 

  • On October 21, 2021, an inspection by the National Assembly and media reports revealed that the Ministry of Justice and the Ministry of Science and ICT (hereafter "MSIT") had provided the facial information of nationals and foreigners to private AI companies, without the consent of the data subjects, in a project to build an AI system to identify and track the faces of people passing through immigration at Incheon Airport.
    • The 'AI Identification and Tracking System Construction and Demonstration Competition Project', which has been carried out since 2019, provided about 170 million domestic and foreign facial data records, collected for immigration purposes, to 10 private companies. This data was used for AI training and algorithm verification without the subjects' consent.
    • The facial recognition technology to be developed through this project aims to go beyond 1:1 matching, in which a stationary person verifies their own identity, to 1:N matching, in which an unspecified number of people are recognized while in motion (a minimal sketch of this distinction follows this list).
    • The facial data provided comprised 57 million photos of nationals who entered or left the country between February 3, 2005, and October 20, 2021 (the day before the incident became public), and 120 million photos of foreigners who entered or left the country between August 23, 2010, and October 20, 2021. The approximately 170 million records deemed 'suitable data' for training were 'extracted' from the roughly 320 million records with facial photographs in the original immigration data held by the Ministry of Justice.
    • In 2021, as part of these projects, CCTV cameras were installed at the immigration checkpoint at Incheon International Airport and used to capture footage of both nationals and foreigners in order to collect what was termed 'real data'.
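
To clarify the distinction referred to above, below is a minimal, illustrative sketch (not the Ministry's system) of 1:1 verification versus 1:N identification over face embeddings; the random vectors stand in for the output of a real face recognition model, and the similarity threshold is an assumption.

# Illustrative sketch of 1:1 verification vs. 1:N identification over face
# embeddings. Random vectors stand in for a real face recognition model's
# output; the similarity threshold is an assumption.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 128

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_emb, enrolled_emb, threshold=0.6):
    """1:1 match: does this capture match the single enrolled template of the
    person presenting themselves (e.g., at a passport gate, stationary)?"""
    return cosine(live_emb, enrolled_emb) >= threshold

def identify(live_emb, gallery, threshold=0.6):
    """1:N match: search one face captured from CCTV against a gallery of N
    enrolled people -- the mode that enables recognizing and tracking
    unspecified people in motion."""
    best_id, best_score = max(((pid, cosine(live_emb, emb)) for pid, emb in gallery.items()),
                              key=lambda kv: kv[1])
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

gallery = {f"person_{i}": rng.normal(size=EMB_DIM) for i in range(1000)}
probe = gallery["person_42"] + rng.normal(scale=0.1, size=EMB_DIM)
print(verify(probe, gallery["person_42"]))   # 1:1 check against one template
print(identify(probe, gallery)[0])           # 1:N search across the whole gallery

The human rights concerns raised later by civil society and the National Human Rights Commission attach mainly to the second mode, because it can be applied to everyone who passes a camera rather than only to a person who actively presents an ID.
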

 

<Figure> Facial data processing procedure for the AI identification and tracking system

* Source: PIPC's Deliberative Resolution (April 27, 2022)

 

  • In response, MSIT argued that the processing of facial data was legal under the PIPA.
    • Facial recognition information is one of the types of biometric information that are specially protected by the PIPA and can only be processed with the separate consent of the data subject or explicit provisions of law.
    • However, MSIT argued that the system was for processing tasks related to the original immigration purpose and that the provision of data to the private companies constituted an entrustment (outsourcing) of personal information processing that did not require separate individual consent.
    • In particular, the National IT Industry Promotion Agency (NIPA), an agency under MSIT that was in charge of the project, published an issue report explaining that the rationale for the project was to reduce the burden on domestic facial recognition companies. Individual companies are limited in their ability to acquire large-scale facial data: it is difficult to obtain data subjects' consent under the PIPA, collection and processing cost between 20,000 and 100,000 won per person, and the companies must bear the administrative responsibility. NIPA claimed that the project helped private companies easily acquire, and train their models on, high-quality, large-scale facial data held by the government.
  • On the other hand, civil society organizations argued that the use of facial data collected for identity verification in immigration screening procedures as training data for the development of the AI system without the consent of the data subject is an illegal third-party provision outside the purpose of collection. 
    • MSIT claimed that this was an outsourcing of personal information processing, yet the facial data were not used to build a system for the Ministry of Justice within the project. The participation of at least 12 companies, each pursuing its own interests, further undermines the claim that this was a proper outsourcing of personal data processing. 
    • These companies used the immigration facial data to train and refine their own algorithms in the name of "demonstration" or "performance validation," but the relevance of those algorithms to the Ministry of Justice's immigration purposes is unclear. Some companies have acquired proprietary intellectual property rights, such as patents, for the algorithms they developed and have sold them in foreign markets.
    • Therefore, civil society organizations argue that the primary purpose of using the facial data was not to develop the Ministry of Justice's systems but to enhance the companies' proprietary algorithms. They contend that this constitutes an unconstitutional and unlawful use of personal information, deviating from the original purpose of collection.
  • The CCTV cameras for collecting "real data" were quickly removed after the media reports, the National Assembly's inspection of the administration, and concerns raised by civil society organizations. Subsequently, the demonstration lab in which the participating companies' algorithms were trained was closed, and all data used for training were destroyed. 
  • On April 27, 2022, the PIPC announced the results of its investigation into the case and imposed a fine of KRW 1 million (approximately $770) on the Ministry of Justice. The PIPC determined that the use of immigration data to develop AI fell within the scope of the purposes of the Immigration Act and constituted a legitimate entrustment. It also emphasized that the participating companies used the personal information only for AI training and that no personal information was leaked. In the end, it imposed only a small fine on the Ministry of Justice, the data controller, for failing to disclose the fact of the entrustment.
    • However, civil society organizations criticized this conclusion, arguing that immigration data is collected for identification purposes in the immigration process and cannot serve as a legal basis for providing facial data for system development. 
  • Twenty Korean and foreign data subjects filed access requests with the Ministry of Justice to ascertain whether their facial data had been used for training. However, the Ministry of Justice denied the requests, claiming that the data used for training could not be matched back to the immigration data source because names, dates of birth, resident registration numbers, and passport numbers had been removed and the data contained only a facial photograph, nationality, gender, and year of birth. It also claimed that it had destroyed all training data used in the project. On May 18, 2022, the applicants filed for dispute mediation with the Personal Information Dispute Mediation Committee under the PIPC, but the case was dismissed on the ground that it was impossible to confirm whether the data had been used once it had been destroyed. 
  • Civil society organizations also requested a public interest audit of the case from the Board of Audit and Inspection of Korea, which dismissed the request, stating that it had not found anything particularly illegal.
  • On July 7, 2022, civil society organizations filed a constitutional petition with one Korean and one foreign national as claimants. The petitioners and civil society organizations pointed out that the processing of personal information in this case violated fundamental rights, such as the right to informational self-determination, was carried out without any legal basis, and involved no notice to, or collection of opinions from, the data subjects. They also argued that the purpose and means of real-time public facial recognition and tracking were neither justified nor appropriate, and that no measures were taken to minimize the infringement of fundamental rights or to balance the legal interests involved, thus violating the principle of proportionality.
  • Following this incident, on January 25, 2023, the National Human Rights Commission issued an opinion recommending legislative measures to the Speaker of the National Assembly and the Prime Minister for the protection of human rights with respect to facial recognition technology. 
    • The National Human Rights Commission recommended that the state's adoption and use of facial recognition technology should reflect the principle of respect for human rights, that indiscriminate adoption and use should be limited, and that criteria should be established for exceptional, complementary use only when a public interest need is recognized. It also noted that the adoption and use of facial recognition technology should be based on individual, specific laws.
    • In particular, because the use of 'real-time remote facial recognition technology' on unspecified groups of people in public spaces poses a significant risk of violating fundamental rights, it recommended that the state's introduction and use of such technology be prohibited in principle. The Commission also recommended that public institutions observe a moratorium on the use of real-time remote facial recognition technology in public spaces until legislation is in place to prevent the risk of human rights violations.
  • The Ministry of Justice's identification and tracking AI system is a high-risk AI because it can be used for immigration screening and management. In particular, the real-time recognition and tracking of biometric information, such as the faces and movements of unspecified people in public spaces like immigration checkpoints, should be prohibited because it grossly violates human rights. 
    • This high-risk AI used by public institutions restricted citizens' fundamental rights by utilizing their sensitive information for large-scale AI training. The fact that data subjects cannot even verify whether their data was involved, let alone the details of any harm, is a serious failure of transparency and accountability. For high-risk AI, transparency and accountability should be ensured by mandating records of the development process, including the training data.

 

Conclusions

  • As we have observed, debates have arisen in the Republic of Korea regarding the negative human rights impacts of high-risk AI. 
    • In some instances, regulatory agencies like the KFTC and the PIPC have partially intervened and imposed administrative sanctions on violations. 
    • However, existing laws have not been swiftly applied to rapidly evolving new technologies, and there is no clear legal framework defining the scope of prohibited AI or the obligations attached to high-risk AI. Consequently, it remains difficult to mitigate the risks of high-risk AI, to involve regulators, and to remedy the rights of victims.
  • In the case of high-risk AI, mechanisms should be established for regulators to effectively intervene and safeguard the rights of victims. To achieve this, legal obligations should be imposed on both users and providers of high-risk AI, including those in the public sector.
    • The national oversight system, which includes existing regulators like the KFTC, National Human Rights Commission, and PIPC, must operate effectively. To accomplish this, high-risk AI providers should be required to maintain records, such as documentation, and transparency and information provision should be mandated.
    • Providers of high-risk AI and public sector AI should take measures to assess and mitigate future risks, with a particular focus on preventing data bias.
    • Users of high-risk AI and public sector AI should be obligated to conduct human rights impact assessments to analyze and proactively prevent negative human rights impacts.
    • Real-time facial recognition in public places should be prohibited by law.
  • Specifically, Korea lacks a comprehensive anti-discrimination law, making it challenging to seek remedies for collective discrimination. Hence, legal measures should be implemented to prevent discrimination in AI. 
    • Historical and socially constructed discrimination and bias in our society, such as gender, age, geography, physical condition, economic status, education, and academic background, can result in long-term, widespread, and covert discrimination when mirrored in AI.
  • However, the Korean government and certain members of the National Assembly are advocating for an AI bill aimed at deregulating AI, citing the necessity to safeguard and promote the domestic AI industry.
    • On August 24, 2023, the National Human Rights Commission expressed its opinion that the principle of "allow first, regulate later" should be removed from the AI bill under discussion in the National Assembly and that a human rights impact assessment should be introduced.

 

* Written by IDR & Jinbonet, September 2023

* Funded by APC 

* For more information, please contact: idr.sec@gmail.com