#MyLifeIsNotForAITraining
– The Personal Information Protection Commission must investigate and penalize Meta and X for unlawfully using users’ personal data to train AI.
– A user campaign to reject the use of our personal data for AI training
1. On April 16, 2025, Korean Progressive Network Jinbonet, Institute for Digital Rights, and users of Meta and X services, represented by the law firm Jihyang, filed a complaint with the Personal Information Protection Commission (PIPC). The complaint concerns Meta and X’s use of users’ personal data as training data for AI models without a lawful basis, as well as their obstruction of users’ attempts to withdraw consent. The PIPC must promptly investigate the unlawful practices of Meta and X and issue corrective orders and administrative fines to ensure that further violations of users’ rights do not continue.
2. Social networking services like Facebook, Instagram, and X hold records of our lives that span over a decade. By analyzing these records, one can uncover our social relationships, personal tendencies, interests, marital status, location history, and patterns of movement. These records may contain, or make it possible to infer, sensitive information such as political beliefs, union membership, health conditions, or sexual orientation. What’s more, the data used for AI training isn’t limited to publicly available information—it also includes content shared only with selected audiences, service usage data like app activity, and even personal information collected from third-party websites or apps outside of these platforms. The fact that all this data is being used without proper explanation or meaningful choice is deeply troubling. We must not accept this surveillance and violation of privacy hidden behind the justification of AI development.
3. Meta claims that its legal basis for using users’ personal data for AI training is “legitimate interest.” (In contrast, X has not disclosed any legal basis for such data processing, which itself constitutes a violation of the Personal Information Protection Act.) Article 15(1)(6) of the Personal Information Protection Act permits the processing of personal data where it is “necessary to attain the legitimate interests of the data controller and such interests clearly override the rights of the data subject” — provided that the processing is substantially related to the legitimate interest and does not exceed reasonable bounds. However, Meta’s use of personal data for AI training fails to meet these conditions.
First, for a “legitimate interest” to be recognized as a lawful basis for data processing, it must satisfy three conditions: (i) the interest must be lawful; (ii) it must be clearly and precisely articulated; and (iii) it must be real and present, not speculative. In the case of Meta or X, however, the stated purposes—such as “developing and supporting AI at Meta” or “training machine learning or AI models”—are overly broad and vague, failing to meet these requirements. These vague formulations imply that any type of AI model, technology, or system could be trained using users’ personal data, effectively asking for unrestricted use of such data. This directly contradicts the core principle of purpose limitation under the Personal Information Protection Act, which requires that the purpose of data processing be specific and clearly defined.
Second, the data processing must be necessary for achieving the claimed legitimate interest—meaning that there must be no less intrusive means available to achieve the same purpose. This requirement must be assessed in conjunction with the principle of data minimization, which mandates that personal data be “adequate, relevant, and limited to what is necessary in relation to the purposes for which they are processed.” In the case of Meta and X, however, not only personal data generated by users or logged by the system, but also data obtained from third parties, is being used for AI training. This extensive and indiscriminate use of personal information clearly violates the necessity requirement, as well as the principle of data minimization.
Third, the legitimate interest must clearly override the rights of the data subject—this means the balancing test must be satisfied. As discussed above, personal data on social networking services is often vast in scope and sensitive in nature. Using such data without proper notice and consent can significantly impact the fundamental rights of data subjects. This concerns not only the right to privacy but also other fundamental rights and freedoms, including the rights to liberty and security, freedom of expression and information, freedom of thought, conscience and religion, freedom of assembly and association, non-discrimination, property rights, and the right to physical and psychological integrity. For instance, risks include the potential misuse of collected personal data for unrelated purposes, privacy violations during AI system operations, defamation, and the possibility of exclusion or discrimination against individuals. These harms may arise in unexpected and undesired ways. In particular, it is important to note that research into the negative impacts of AI systems on individuals is still insufficient, making such data use even more concerning from a rights-based perspective.
Moreover, the volume of data used for AI training is enormous—encompassing years, even decades, of a person’s life—and may include highly sensitive personal information. It is also crucial to recognize that this data includes information from users who are no longer active on the platforms and whose accounts remain dormant. Once personal data is used to train an AI model, the process is effectively irreversible. Despite this, users were not properly informed in advance, nor were they given a clear opportunity to refuse or withdraw consent. Taking all of this into account, it is difficult to argue that the claimed legitimate interest overrides the fundamental rights of data subjects.
Fourth, the legitimacy of a claimed interest may also be assessed in light of the safeguards applied to protect personal data—such as pseudonymization or anonymization. However, Meta and X have not disclosed whether any such protective measures were implemented when using users’ personal data for AI training. In the absence of these safeguards, it becomes even more difficult to justify that a legitimate interest exists or that it could outweigh the rights of the data subjects.
Fifth, as previously mentioned, SNS platforms may contain sensitive information such as political views, union membership, health status, or sexual orientation—and such information can also be inferred through correlations with other data. Under the Personal Information Protection Act, sensitive data cannot be processed on the basis of legitimate interest. It may only be processed if there is a clear legal basis or if the data subject has given explicit, separate consent. If Meta and X did not take steps to identify and remove sensitive data from their training datasets, this constitutes a violation of the law. Given the nature of SNS data—where sensitive and non-sensitive information are often deeply intertwined, and where it is difficult to distinguish between users who consented and those who did not—it is highly questionable whether any sensitive data was lawfully processed at all.
4. Although Meta and X amended their privacy policies to permit the processing of personal data for AI training—processing that is itself unlawful—they failed to properly notify users of these changes. While some form of notice appears to have been given to users in Europe, no such notice was provided to users in South Korea. As a result, users in Korea were deprived of the opportunity to withdraw their consent and object to the use of their personal data for AI training purposes. Once personal data has already been used to train an AI model, deletion or reversal is practically impossible. Exercising the right to object after the fact is also highly difficult. Under Article 37 of the Personal Information Protection Act, users have the right to request the suspension of processing of their personal data. Although Meta and X currently offer an opt-out function, it is hidden behind several layers of settings, making it difficult for users to meaningfully exercise this right.
5. This issue is not being raised for the first time in Korea. On June 6, 2024, following Meta’s announcement that it would update its privacy policy to allow the use of both public and non-public user data collected since 2007 for the development of “AI technologies” starting June 26, the European digital rights organization NOYB filed complaints across Europe. NOYB argued that Meta’s actions violated the General Data Protection Regulation (GDPR) and called on data protection authorities (DPAs) in 11 countries—Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland, and Spain—to immediately halt the policy change through urgent procedures. Subsequently, the Irish Data Protection Commission (DPC) requested that Meta postpone the training of large language models (LLMs) using public content shared by users on Facebook and Instagram. Meta agreed to suspend its training plans, and the DPC welcomed this decision.
NOYB also filed complaints against X (formerly Twitter) in nine EU countries—Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Spain, and Poland—claiming that X’s use of personal data for AI training violates the GDPR. Investigations are currently underway in these jurisdictions.
In July 2024, Brazil’s data protection authority (ANPD) ordered Meta to halt the use of personal data for AI training. While Meta was later allowed to resume such processing under strict conditions, ANPD required that users of Facebook and Instagram be clearly informed of the data use and their right to object, and that they receive this notice at least 30 days before the processing begins. Moreover, Meta was prohibited from using the personal data of minors under the age of 18 until ANPD issues a final decision.
The Irish Data Protection Commission (DPC) raised concerns over X’s processing of user personal data for the purpose of training its AI tool, Grok, between May 7 and August 1, 2024. The DPC filed an urgent application with the Irish High Court to stop this data processing. On August 8, 2024, the DPC announced that it had reached an agreement with X to suspend the processing.
In the United Kingdom, on July 15, 2024, the digital rights group Open Rights Group filed a complaint with the Information Commissioner’s Office (ICO) against Meta.
In Canada, the Office of the Privacy Commissioner (OPC) announced in a press release on February 27, 2025, that it had launched an investigation into X in response to complaints alleging violations of the Personal Information Protection and Electronic Documents Act (PIPEDA).
6. We, as users, are launching a campaign to demand that Meta and X stop using our personal data for training AI models, in order to protect our rights ourselves. We urge as many users as possible to join the campaign and make their voices heard, so that Meta and X are compelled to take this issue seriously. (Campaign website: https://act.jinbo.net/wp/ai-optout)
7. Given the clear and evident violations, we urge the Personal Information Protection Commission to launch an immediate investigation and issue corrective orders along with administrative fines that correspond to the severity of the unlawful conduct. Meta and X must put an end to these illegal practices without delay and begin respecting the rights of their users.
8. In 2022, we filed a damages lawsuit against Meta for its unlawful provision of personal data to third parties, and the case is currently ongoing. We will continue to raise concerns over the illegal infringement of personal data under the guise of AI development by Big Tech, and strengthen our international solidarity efforts—such as hosting global seminars in collaboration with digital rights organizations abroad. Our commitment to challenging Big Tech’s abuse of personal data remains unwavering.
Digital Civil Rights Initiative
(Korean Progressive Network Jinbonet × Institute for Digital Rights × Law Firm Jihyang)