The Higher Regional Court of Cologne recently published the reasoning for its decision of 23 May 2025 regarding the permissibility of Meta’s processing of users’ personal data for the purpose of AI training. The Court found that such processing is lawful under the GDPR.
Background
Following Meta’s announcement of its intention to use the public personal data of its adult users for AI training purposes, the Consumer Protection Organisation of North Rhine-Westphalia (“NRW”) sought an injunction to prohibit Meta from processing the personal data of European citizens for this purpose. NRW argued, inter alia, that Meta had failed to demonstrate a legitimate interest within the meaning of Article 6(1)(f) GDPR to justify such processing. The Higher Regional Court of Cologne dismissed NRW’s application for an injunction, holding that Meta’s use of user personal data for AI training purposes is lawful under the GDPR.
Meta has a legitimate interest in processing user data for AI training
To assess the lawfulness of Meta’s proposed data processing – specifically, whether Meta has a legitimate interest in using user data to train AI – the court applied the three-part legitimate interest test. The court concluded that Meta’s data processing satisfies the requirements of this test.
- Part 1: the processing is carried out in pursuit of a legitimate interest by the controller or a third party (the “purpose test”)
The court first noted that legitimate interests under Article 6(1)(f) of the GDPR may include commercial interests, and that the EDPB, in its opinion of 17 December 2024, recognised legitimate interest as a valid legal basis for AI training. The court emphasised that the interest pursued must be articulated with sufficient clarity and precision, and must be real and present rather than merely speculative. The court found that Meta satisfied these conditions: Meta clearly and precisely specified its intention to use generative AI to provide a conversational assistant with certain functionalities, and because Meta intends to commence training the AI immediately, the interest is real and present.
- Part 2: the processing of personal data is necessary for the achievement of that legitimate interest (the “necessity test”)
The court noted that data processing is considered necessary if it is suitable for achieving the intended interests and if there is no less intrusive means of achieving the same objective. The court expressed no doubt that training AI using user data is suitable for accomplishing the stated purpose. The court accepted Meta’s argument that less privacy-invasive alternatives (e.g. using anonymised data or relying solely on so-called flywheel data) would result in an ‘inferior product’ and that no sufficiently reliable alternatives for achieving Meta’s objective are available.
- Part 3: the interests or fundamental rights and freedoms of the data subjects do not override the legitimate interests of the controller or third party (the “balancing test”)
The court finally applied the balancing test and determined that the interests of data subjects do not outweigh Meta’s legitimate interest in using personal data to train AI. The court identified the main criteria for the balancing exercise as: (i) the consequences of the processing, and (ii) the reasonable expectations of data subjects.
(i) With respect to the consequences, the court observed that, because the data was already publicly available, its disclosure does not pose any risk of additional harm. The court acknowledged certain risks associated with the AI models, which may impair data subjects’ rights. Nevertheless, the court recognised that Meta has implemented measures to mitigate the impact, such as de-identifying training data and providing users with the option to exclude their data from training datasets (by revoking the ‘public’ status of posts or accounts, or by objecting to the inclusion of their data).
(ii) Regarding reasonable expectations, the court found that data subjects could reasonably expect their data to be used for AI training purposes at least from Meta’s public announcement on 10 June 2024, which stated its intention to begin such processing as of 26 June 2024. For data posted prior to this announcement, the court held that it was unable to establish a reasonable expectation of such use within the scope of these proceedings.
As a result, the court concluded that, although the processing entails significant potential impairments to the interests and fundamental rights of data subjects, these do not outweigh Meta’s legitimate interest in AI training. In reaching this conclusion, the court distinguished between three categories of data:
- For data uploaded by data subjects after 26 June 2024 (the date Meta announced as the commencement of using personal data to train AI), the court determined that data subjects could reasonably expect that the data they had voluntarily uploaded may be used for AI training.
- With respect to data uploaded by data subjects prior to 26 June 2024, the court pointed out that individuals have the option to prevent the processing of such data by revoking its public status or by filing an objection. The court considered that the interests of the data subjects do not outweigh Meta’s interests, which it assessed as highly significant. The AI Act articulates the objective of establishing the EU as a global leader in the development of secure, trustworthy and ethical AI (Recital 8) and, to this end, of creating a uniform legal framework for AI (Recital 1). The court emphasised that this objective must be considered when balancing the relevant interests: prohibiting the intended training of AI models with user data would fundamentally undermine it.
- Regarding the data of third parties, who are not able to revoke the public status of the data or to object to the processing of their data to train AI, the court found that, due to the relatively low intensity of the interference, particularly in light of mitigation measures implemented by Meta, the interests of the data subjects do not outweigh Meta’s interests, which the court again assessed as highly significant. The court further found it unlikely that any specific disadvantage would arise for data subjects.
Comment
The judgment of the Higher Regional Court of Cologne permitting Meta to process user data for AI training purposes demonstrates that the GDPR does not necessarily constitute an insurmountable obstacle to such activities. Within this legal context, there is considerable room for interpretation, in both restrictive and permissive directions. The extent to which data protection law should constrain technological advancement is not self-evident, and depends largely on the policy preferences of the interpreting authority. This decision may be viewed as an example of how a more pragmatic and innovation-friendly interpretation of the GDPR is possible where the decision-maker is prepared to explore that possibility.