Artificial Intelligence (AI)

What is the AI Act?

The AI Act is a directly applicable European regulation governing the development, deployment, and use of AI within the EU internal market. It categorizes AI systems according to the level of risk they pose to fundamental rights and sets obligations for both providers and deployers.

Who are providers and deployers?

  • Providers: those who develop an AI system, or have one developed, and place it on the market under their own name. For example, OpenAI is the provider of ChatGPT.
  • Deployers: individuals or entities using an AI system in a professional capacity. For example, a public authority or a company using ChatGPT to create summaries as part of its business or activity.

What is generative AI?

Generative AI is a type of AI that creates audio, image, text, or video content. Examples include ChatGPT, Copilot, Grok, and Meta AI.

Why regulate generative AI?

The aim is to ensure that AI-generated content is recognizable as such, through an obligation to disclose the artificial nature of the created content. Two reasons stand out:

  • Because while generative AI can serve positive purposes, such as illustration, advertising, entertainment, or educational tools, it can also be misused to spread misinformation, harm individuals, or mislead the public.
  • Because AI-generated content, whether used positively or negatively, is increasingly easy to produce and disseminate, yet harder to detect and identify.

What is a deepfake?

Under the AI Act, a deepfake is AI-generated or manipulated audio, video, or image content that resembles real persons, places, or events and could falsely appear to be authentic or truthful. It typically takes one of three forms:

  • Voice cloning: mimicking a person’s voice.
  • Visual likeness: mimicking a person’s physical appearance (face, gait, etc.).
  • A combination of both.

What is the Luxembourg legal framework for AI (Bill No. 8476)?

Bill No. 8476 primarily aims to designate the competent authorities that will supervise and control AI systems in Luxembourg. Eight authorities have been designated, including ALIA, which receives new competences in the field of generative AI.

What are ALIA’s responsibilities regarding AI?

  • Under Bill No. 8476, ALIA is set to become a “market surveillance authority” responsible for enforcing the transparency obligations under Articles 50(2) and 50(4) of the AI Act. This includes ensuring that providers and deployers implement techniques that allow individuals to clearly distinguish AI-generated content. Example: the transparency requirement is met when a publication featuring a deepfake carries the label “AI-generated content” (one possible labelling technique is sketched after this list).
  • ALIA is also responsible for monitoring prohibited practices as defined in Article 5 of the AI Act (see ALIA’s opinion on Bill No. 8476).
  • ALIA ensures that both providers and deployers falling under Articles 50(2) and 50(4) maintain control over their AI systems.
  • ALIA ensures that providers and deployers under its oversight comply with Article 4 of the AI Act regarding AI literacy (see ALIA’s opinion on Bill No. 8476).
  • ALIA only performs this oversight within the context of professional activities. These may include advertising campaigns, internal corporate uses, or any use of generative AI directly linked to a company’s service delivery.
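
The AI Act does not prescribe any particular labelling technique. Purely as an illustration, the minimal Python sketch below (using the Pillow imaging library) shows one way a deployer might stamp a visible disclosure onto an image before publication; the file names, banner layout, and label text are assumptions made for the example, not requirements of the Act.

    # Illustrative sketch only, not a legal standard: stamp a visible
    # "AI-generated content" banner onto an image using Pillow.
    from PIL import Image, ImageDraw

    def label_ai_content(path_in, path_out, label="AI-generated content"):
        img = Image.open(path_in).convert("RGB")
        draw = ImageDraw.Draw(img)
        # Reserve a dark banner along the bottom edge so the disclosure
        # stays readable regardless of what the image shows.
        banner = max(24, img.height // 20)
        draw.rectangle((0, img.height - banner, img.width, img.height), fill="black")
        draw.text((10, img.height - banner + 5), label, fill="white")
        img.save(path_out)

    # Hypothetical file names, purely for illustration.
    label_ai_content("deepfake.png", "deepfake_labelled.png")

A visible label of this kind only illustrates the human-facing disclosure; Article 50(2) separately expects providers to mark outputs in a machine-readable format, for example through embedded metadata or watermarking.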

What is a prohibited practice?

The AI Act prohibits certain AI practices (“prohibited practices”) because they are contrary to EU values, human rights, and the rule of law.

In the context of generative AI, the relevant prohibited practices are those that manipulate or deceive individuals, impair their ability to make informed decisions, and are likely to cause harm. Examples include:

  • Deepfakes used during elections to mislead voters (videos, images, or audio).
  • Deepfakes generated from child sexual abuse material.
  • AI companions that foster unhealthy emotional attachment in users (e.g. the generative AI platform Replika, or an ongoing case involving Character.AI in Florida).
  • AI-generated content containing subliminal messages used in advertising.

What are the risks associated with deepfakes? (Non-exhaustive list)

  • Deception: AI-generated content can fabricate entirely false situations.
  • Disinformation: Deepfakes can be used in disinformation campaigns to discredit individuals, political parties, or movements.
  • Sexual exploitation: Deepfakes can hyper-realistically superimpose someone’s face onto a nude body, fuelling “revenge porn” scenarios.
  • Child sexual abuse material: Deepfakes can be created from images of children, producing illegal content that is very hard to distinguish from real imagery.
  • Harassment: Deepfakes can be used to harass individuals, especially in cases involving falsehoods or sexual exploitation.
  • Fraud: Deepfakes can impersonate individuals to obtain benefits or deceive others.

Obligations related to prohibited practices

  • Providers must ensure that their AI systems cannot be used to engage in prohibited practices. To this end, they are subject to a range of obligations throughout the development of the AI system.
  • Deployers must refrain from using AI systems for such purposes.

What are the penalties for non-compliance?

For both providers and deployers:

  • Warning
  • Reprimand
  • Up to €7.5 million or 1% of global annual turnover for providing false, misleading, or incomplete information.
  • Up to €15 million or 3% of global annual turnover for breaches of the transparency obligations (Articles 50(2) and 50(4)).
  • Up to €35 million or 7% of global annual turnover for prohibited practices (Article 5).

ALIA does not have jurisdiction over:

  • Private uses of generative AI. For example, an individual who creates and publishes a deepfake on social media outside any professional activity does not fall under ALIA’s supervision. However, such content may still be punishable under Luxembourg criminal law if it constitutes a criminal offence.
  • Uses of generative AI in criminal investigations or prosecutions.
  • The content of deepfakes as such: this content only falls under ALIA’s jurisdiction if it constitutes a prohibited practice under the AI Act.