Uber judgment: On human involvement in automated decision-making

The recent decision of the Amsterdam Court of Appeals against Uber underscores the significance of meaningful human involvement in automated decision-making. The judgment, dated 4 April 2023, specifies what genuine human involvement should include in order to ensure the lawfulness of the processing.

Background

Uber is an international transportation company that offers online transport services through its digital platform. The company connects passengers and drivers through its applications: passengers access transportation services using the Uber Rider app, while drivers manage their operations through the Uber Driver app.

During the period examined by the Court of Appeals, Uber’s Privacy Notice provided general information regarding the processing of personal data. The privacy statement specified, inter alia, that Uber utilized automated decision-making in the processing of personal data, for example when deactivating users found to have engaged in fraudulent activities or in conduct that could harm Uber, its users, or others.

Plaintiffs 1, 2, 3, and 4 worked as Uber drivers, using Uber’s services via the Driver app. Each plaintiff received an individual message from Uber informing them that their Uber Driver account had been deactivated due to fraudulent activities. While the messages sent to plaintiffs 1, 2, and 4 lacked specific details regarding the alleged breach, the message sent to plaintiff 3 described the behaviour that led to the deactivation of their account.

Uber stated that the decision to deactivate the accounts and end partnerships with each plaintiff was final, with no possibility of reactivation.

Court’s findings

The Amsterdam Court of Appeals considered infringements of Article 22 (Automated individual decision-making, including profiling) and Article 15 (Right of access by the data subject) of the GDPR.

Uber’s decision was based solely on automated decision-making

Each plaintiff challenged Uber’s decision before the court, arguing that the deactivation of their accounts by Uber was contrary to the obligations outlined in Article 22 of the GDPR. The plaintiffs argued that the decision was based solely on automated decision-making, without meaningful human intervention.

Furthermore, the plaintiffs explained that Uber’s decision had a direct legal effect on them, namely the termination of their partnerships with Uber, and that it also significantly affected them in other ways, including the loss of a vital source of income.

Finally, the plaintiffs stated that they were not given the opportunity to express their views to Uber or to contest the decision.

Uber sought to justify its decision by arguing that the deactivation was carried out in two steps: first by its so-called ERAF software, which can detect various fraudulent activities based on dozens of rules, and then, as the final step, by two members of Uber’s operational risk team. When making decisions, these employees must adhere to various internal protocols, including the rule that a member of the risk team must analyse, in addition to the information from the software, other facts and circumstances in order to confirm or rule out the existence of fraud.

The Amsterdam Court of Appeals found that Uber’s decision affected the plaintiffs to a considerable extent, since it meant that they could no longer provide their services through the Uber Driver app. Moreover, the decision entailed legal consequences for the plaintiffs, since Uber had terminated the agreement with each of them.

With regard to plaintiffs 1, 2, and 4, the Amsterdam Court of Appeals established that the decision to deactivate the accounts and to end the agreements was based exclusively on automated processing and that Uber had thus infringed Article 22 of the GDPR.

According to the Court, Uber did not dispute, or did not sufficiently dispute, that it resorted to profiling when taking the decision regarding plaintiffs 1, 2, and 4. On the contrary, as the Court found, Uber evaluated certain personal aspects of the plaintiffs on the basis of data collected through the software, with the aim of analysing or predicting their professional performance, reliability, and behaviour.

Finally, the Court stated that Uber did not sufficiently explain how the human factor was involved in making the decision based on profiling. Specifically, Uber did not explain before the Court in what way human involvement influenced the decision-making, and the Court saw no evidence that Uber had contacted plaintiffs 1, 2, and 4 or otherwise given them an opportunity to express their point of view.

Regarding plaintiff 3, the Court was of a different opinion. It was undisputed that a personal interview with plaintiff 3 took place prior to the deactivation, which made it sufficiently plausible that there had been actual human intervention.

Uber did not inform the plaintiffs about the existence of automated decision-making

According to the plaintiffs, Uber failed to inform them about the logic involved in the automated decision-making, as well as about the significance and the envisaged consequences of that processing for them, as required by Article 15 of the GDPR.

Uber stated in its defence that the data requested by the plaintiffs contained trade secrets regarding its anti-fraud processes. Sharing such information could lead to circumvention of those processes and allow competitors to take advantage of it. Therefore, in Uber’s opinion, the company could rightfully deny the plaintiffs’ request for the data, in accordance with the exception provided in Article 15 of the GDPR.

The Amsterdam Court of Appeals found Uber’s claims to be unfounded.

The Court noted that the GDPR contains an exception only to the data subject’s right to receive a copy of the personal data. The exception does not extend to the plaintiffs’ right to obtain information about the existence of automated decision-making.

In the opinion of the Court, Uber neither provided meaningful information regarding the automated decision-making to the plaintiffs nor argued that a complete rejection would be proportionate and necessary for the protection of its trade secrets.

The Court concluded that Uber wrongly rejected the requests for information from plaintiffs 1, 2, and 4 and thus infringed Article 15 of the GDPR. With regard to plaintiff 3, it was not sufficiently shown that Uber had wrongly rejected their request for information.

Comment

This case serves as a reminder of the importance of transparency and meaningful human involvement in automated decision-making as safeguards for the protection of individuals’ rights and the mitigation of potential harm. In particular, human intervention cannot be a merely symbolic act: it has to be carried out by someone with the authority and competence to change the decision, on the basis of all relevant data. In addition, data subjects must have a real opportunity to express their views before the data controller reaches a decision based on automated processing.


[Note: The Serbian Data Protection Act mirrors the provisions of the GDPR. Decisions of supervisory authorities and courts in EU and EFTA member states may therefore serve as instructive guidance for compliance with local regulations.]