AI Act and related legal framework

Luca Santalucia, Data Protection Officer at Namirial

The European Union has taken a significant step in regulating artificial intelligence with the introduction of the AI Act. This landmark legislation, first proposed by the European Commission in April 2021 and formally adopted in March 2024, seeks to ensure that AI technologies are developed and deployed in a manner that safeguards fundamental rights and promotes ethical practices.

The AI Act adopts a broad definition of AI systems, covering any machine-based system designed to operate with varying levels of autonomy, capable of adapting after deployment, and able to influence physical or virtual environments. The Act applies to all entities, whether established within or outside the EU, that provide, deploy, or use AI systems intended for the EU market, giving it a comprehensive regulatory scope.

At its core, the AI Act is guided by six fundamental principles:  

  • human agency and oversight, ensuring AI systems empower humans and allow for intervention;  
  • technical robustness and safety, requiring systems to be reliable and secure;  
  • privacy and data governance, mandating respect for privacy rights and data protection laws; 
  • transparency, necessitating that AI operations are explainable to users;  
  • diversity, non-discrimination, and fairness, to avoid biases and promote equal treatment;  
  • social and environmental well-being, encouraging AI to contribute positively to society and the environment. 

One of the Act’s key features is its risk-based classification of AI systems, designed to regulate technologies according to their potential impact. AI systems are categorized into four risk levels:

  • unacceptable-risk systems, including those used for social scoring and certain forms of biometric surveillance, are prohibited;
  • high-risk systems, encompassing applications such as credit scoring and employment screening, face stringent regulatory requirements;
  • limited-risk systems, like those designed for consumer interaction, must meet specific transparency obligations;
  • minimal-risk systems, such as spam filters, are subject to minimal oversight.
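To make the tiered structure concrete, here is a minimal, purely illustrative Python sketch of how a compliance team might triage systems against the Act's four tiers. The `RiskTier` and `triage` names and the use-case-to-tier mapping are our own assumptions for illustration, loosely based on the examples above; a real classification requires legal analysis of the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (illustrative model, not legal advice)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent requirements (risk management, conformity assessment)"
    LIMITED = "specific transparency obligations"
    MINIMAL = "little to no additional oversight"

# Hypothetical mapping of example use cases to tiers, mirroring the
# examples cited in this article; real systems need case-by-case review.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "employment screening": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known example use case."""
    try:
        return EXAMPLE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unmapped use case: {use_case!r}; consult counsel.")

for case, tier in EXAMPLE_TIERS.items():
    print(f"{case}: {tier.name} -> {tier.value}")
```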

This approach allows the AI Act to tailor regulatory obligations proportionately, much as the GDPR does through its own risk-based methodology for protecting fundamental rights. For example, providers of high-risk AI systems must conduct thorough risk assessments, similar to the data protection impact assessments required by the GDPR. The AI Act also intersects with copyright law, requiring AI developers to respect intellectual property rights when using copyrighted works to train models and to establish policies ensuring compliance with EU copyright rules.

Non-compliance with the AI Act can result in substantial penalties, scaled to the severity of the infraction. Fines for engaging in prohibited AI practices can reach 35 million euros or 7% of a company's total worldwide annual turnover, whichever is higher, while other violations can incur fines of up to 15 million euros or 3% of global turnover, again whichever is higher. Additionally, the Act empowers authorities to enforce further measures, such as product recalls and market bans, to ensure adherence to its standards.
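As a rough arithmetic illustration of the "whichever is higher" rule, the short sketch below computes the applicable fine ceiling for an undertaking. The function name and the example turnover figure are our own assumptions; actual penalties are set case by case by the competent authorities.

```python
def max_fine_eur(turnover_eur: float, prohibited_practice: bool) -> float:
    """Illustrative ceiling on AI Act fines for an undertaking.

    Prohibited practices: up to EUR 35M or 7% of total worldwide annual
    turnover, whichever is higher; most other violations: up to EUR 15M
    or 3%, whichever is higher. Sketch only, not legal advice.
    """
    if prohibited_practice:
        return max(35_000_000, 0.07 * turnover_eur)
    return max(15_000_000, 0.03 * turnover_eur)

# Hypothetical company with EUR 1 billion in global annual turnover:
# 7% of 1B = 70M, which exceeds the 35M floor for prohibited practices.
print(max_fine_eur(1_000_000_000, prohibited_practice=True))   # 70000000.0
print(max_fine_eur(1_000_000_000, prohibited_practice=False))  # 30000000.0
```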

By setting a comprehensive and forward-looking regulatory framework, the AI Act is poised to balance the innovation potential of AI with the need for ethical oversight and protection of rights. As its provisions become fully applicable by 2026, the AI Act aims to set a global standard for AI regulation, guiding the responsible development of AI technologies and ensuring they benefit society in a safe, transparent, and fair manner.

Credits to:

Luca Santalucia, Data Protection Officer

Giovanna Pagnoni, Head of Legal
