AI hallucinations as a consumer protection issue: NOVA AI and DeepSeek under the spotlight

In the second half of the year, the Italian Competition and Market Authority (“AGCM”) opened two investigations into potential violations of consumer protection law in the context of AI systems and models. These initiatives illustrate how the existing regulatory framework can be applied to scrutinise the use of artificial intelligence, complementing the EU AI Act and other AI-specific legislation.

In the more recent development, at the end of September 2025, AGCM announced the opening of an investigation into ScaleUp, a Turkey-based provider of NOVA AI, a “cross-platform chatbot”. NOVA AI is built on other well-known AI technologies, including ChatGPT, Gemini, Claude, and DeepSeek. ScaleUp provides both web-based and app versions, with free and paid options.

Three months earlier, AGCM had launched proceedings against two China-based companies (together, “DeepSeek”), which offer the large language model product known as DeepSeek.

Both investigations pertain to potentially unfair commercial practices in violation of the Italian Consumer Code. Specifically, the concerns relate to a lack of transparency regarding information essential for consumers to make informed transactional decisions. AGCM highlights insufficient or confusing information about:

  • potential hallucinations, i.e. situations in which, in response to a user’s input, the AI generates output that is inaccurate, misleading, or fabricated (relevant to both DeepSeek and ScaleUp); and
  • the type and characteristics of the product offered (relevant to ScaleUp).

The launch of two investigations into AI products in Italy may signal AGCM’s systematic commitment to ensuring that AI companies comply with their consumer protection obligations. Below we outline five takeaways from the two notices.

1. Information on hallucinations is insufficient: generic, English-only, and not easily accessible

According to the notice opening the investigation against DeepSeek, the company may have engaged in unfair commercial practices by failing to inform users in a clear, immediate, and intelligible manner that its AI model could produce hallucinations. Specifically, the notice identified the following potential transparency issues:

  • generic, English-only warning in dialog windows: In DeepSeek’s dialog windows, no explicit warning about hallucinations is provided. The only notice, “AI-generated, for reference only”, is overly generic and appears exclusively in English, even when users interact in Italian;
  • warning in the Terms of Use is inaccessible and English-only: While the Terms of Use mention that outputs generated by the service may contain errors or omissions, this information is not easily accessible. Users must scroll to the bottom of the homepage and click the “Terms of Use” link under the “Legal&Safety” section. The document is not displayed on the first screen when users begin using the service and, in any event, is presented only in English; and
  • no warning elsewhere on the website: The omission is not remedied on other key webpages. Specifically, the risk of hallucinations is not disclosed on the service homepage, the registration/sign-up page, or the login page.

In the notice concerning ScaleUp, AGCM notes that warnings about the risk of hallucinations are entirely absent. AGCM adds that the possibility of hallucinations could be inferred from the fact that the underlying platforms used by ScaleUp’s chatbot inform users of potential inaccuracies, but does not elaborate on the implications of this observation.

2. Information on hallucinations is essential for consumers to make informed transactional decisions

In the notices concerning both DeepSeek and ScaleUp, AGCM emphasises that information on the risk of hallucinations is essential for users to make an informed choice to use these AI products – as opposed to those offered by competitors – a choice that constitutes a transactional decision under the Italian Consumer Code. Without adequate information, consumers may mistakenly believe that they can fully rely on the accuracy and reliability of the outputs generated by these companies’ AI products.

3. A transactional decision exists even when no monetary payment is involved

In the case of ScaleUp, which offers both free and paid options within its AI chatbot, AGCM notes that the choice to use ScaleUp’s AI – rather than competitors’ or the AI technologies on which ScaleUp’s chatbot itself is based – constitutes a transactional decision under the Consumer Code, even when no monetary payment is involved. AGCM cites several Italian court decisions establishing that the free nature of goods or services does not preclude the application of protections provided by the Consumer Code.

4. Insufficient information on hallucinations affects both the consumer’s transactional decision to use the AI service and potential downstream decisions

In both notices, AGCM emphasises that the lack of information about the risk of hallucinations is especially important because DeepSeek’s and ScaleUp’s AI products can be used across a wide range of areas, including those of particular consumer concern such as health, finance, and law. This omission may therefore affect not only the transactional decision to use the AI products, but also downstream decisions made under the mistaken belief that the outputs are fully reliable.

5. Information on the type and characteristics of the product is essential for informed transactional decisions

In addition, regarding ScaleUp’s AI chatbot, AGCM considers information on the type and characteristics of the product to be insufficient and confusing. AGCM notes that such information appears to be essential for consumers to make informed transactional decisions regarding the use of the AI chatbot. Specifically, AGCM identifies issues regarding the following details:

  • specific AI technologies accessible to users: during free use of the chatbot and in the stages preceding the purchase of a paid subscription, the information about which AI technologies users will actually be able to access is confusing. ScaleUp at times refers only to ChatGPT and Gemini, while in other instances it also mentions Claude and DeepSeek; and
  • added value of ScaleUp’s AI compared to the free version and base AI technologies: when offering paid subscriptions that provide access to additional features, ScaleUp does not disclose detailed information about the specific versions of the included base AI technologies. This lack of transparency prevents users from adequately assessing the added value of a paid subscription compared to the free version of ScaleUp’s AI chatbot or compared to the AI technologies on which ScaleUp’s chatbot is based.

Comment

AGCM’s notices regarding DeepSeek and ScaleUp are a reminder that providers of AI systems may face significant legal challenges based on laws other than the AI Act or similar dedicated legislation. In this instance, neither DeepSeek nor ScaleUp would be obliged under the AI Act itself to inform users about hallucinations or about the type and characteristics of the product; AGCM instead relied on consumer protection law to open the investigations. Earlier this year, the Italian data protection regulator (Garante) relied solely on data protection law as the basis for ordering DeepSeek to block access to its chatbot for individuals located in Italy (decision of 30 January 2025).