






The Artificial Intelligence Act of the European Union, also known as the EU AI Act or the AI Act, is a law that governs the development and use of artificial intelligence (AI) in the European Union (EU). The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose.
Considered the world’s first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others.
The act also creates rules for general-purpose artificial intelligence models, such as OpenAI’s GPT and DALL-E models, Midjourney, and Meta’s openly released Llama models.
The EU AI Act applies to multiple operators in the AI value chain, such as providers, deployers, importers, distributors, product manufacturers and authorized representatives.
The EU AI Act covers AI-driven software and services broadly, from general-purpose AI and SaaS platforms to sector-specific applications (finance, healthcare, HR, education, infrastructure) and AI embedded in physical products, with obligations varying by risk level. The definitions of providers, deployers and importers under the act are worth noting:
A. Providers: Providers are people or organizations that develop an AI system or general-purpose AI (GPAI) model, or have it developed on their behalf, and who place it on the market or put the AI system into service under their name or trademark.
B. Deployers: Deployers are people or organizations that use AI systems. For example, an organization that uses a third-party AI chatbot to handle customer service inquiries would be a deployer.
C. Importers: Importers are people and organizations located or established in the EU that place on the EU market AI systems bearing the name or trademark of a person or company established outside the EU.
D. Application outside the EU: The EU AI Act also applies to providers and deployers outside of the EU if their AI, or the outputs of the AI, are used in the EU. For example, suppose a company in the EU sends data to an AI provider outside the EU, who uses AI to process the data, and then sends the output back to the company in the EU for use. Because the output of the provider’s AI system is used in the EU, the provider is bound by the EU AI Act.
Providers outside the EU that offer AI services in the EU must designate authorized representatives in the EU to coordinate compliance efforts on their behalf.
The EU AI Act regulates AI systems based on risk level, where risk refers to the likelihood and severity of the potential harm. Some of the most important provisions of this risk-based approach are outlined below.
The EU AI Act explicitly lists certain prohibited AI practices that are deemed to pose an unacceptable level of risk. For example, developing or using an AI system that intentionally manipulates people into making harmful choices they otherwise wouldn’t make is deemed by the act to pose unacceptable risk to users, and is a prohibited AI practice.
The European Commission can amend the list of prohibited practices in the act, so more AI practices may be prohibited in the future. Other practices prohibited at the time this article was published include social scoring, untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, and emotion recognition in the workplace and in educational institutions.
AI systems are considered high-risk under the EU AI Act if they are a product, or a safety component of a product, regulated under specific EU laws referenced by the act, such as toy safety and in vitro diagnostic medical device laws. The act also lists specific uses that are generally considered high-risk, including AI systems used in areas such as biometrics, critical infrastructure, education, employment, access to essential public and private services, law enforcement, migration and border control, and the administration of justice.
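As a rough illustration of how an organization might triage its AI portfolio against these high-risk categories, the Python sketch below flags use cases that fall into one of the listed areas. The area labels and the preliminary_risk_flag helper are illustrative assumptions paraphrased from the act's annexes, not a legal classification tool.

```python
# Illustrative sketch only: a simplified triage helper for flagging whether an
# AI use case may fall into a high-risk category under the EU AI Act.
# The area labels below are paraphrased from the act's annexes; a real
# assessment depends on the full legal text and expert review.

HIGH_RISK_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def preliminary_risk_flag(use_case_area: str, is_regulated_safety_component: bool) -> str:
    """Return a rough, non-authoritative risk flag for an AI use case."""
    if is_regulated_safety_component or use_case_area in HIGH_RISK_AREAS:
        return "potentially high-risk: trigger a full conformity assessment review"
    return "review against prohibited practices and transparency obligations"

print(preliminary_risk_flag("employment", is_regulated_safety_component=False))
```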
The EU AI Act creates separate rules for general-purpose AI (GPAI) models. Providers of GPAI models have obligations including maintaining up-to-date technical documentation, providing information and documentation to downstream providers that integrate the model, putting in place a policy to comply with EU copyright law, and publishing a summary of the content used to train the model. Providers of GPAI models that pose systemic risk face additional obligations, such as model evaluations, adversarial testing, serious-incident reporting and cybersecurity measures.
A broad consensus exists among ethicists and technologists on the core principles of Responsible AI, including fairness, accountability, transparency, safety, robustness, and privacy. The EU AI Act operationalizes these abstract principles by translating them into a set of legally binding, technical, and organizational requirements for high-risk AI systems.
The Act moves the conversation from the theoretical to the tangible, compelling organizations to embed responsible practices directly into their engineering and governance frameworks. The core obligations for high-risk AI systems are detailed across several articles of the Act.
These requirements demonstrate a fundamental shift in the AI governance landscape. The Act transforms the abstract concepts of ethical AI into tangible engineering reality. Organizations must now embed processes like continuous risk management, bias testing, and automated documentation directly into their AI development lifecycle. This necessitates a complete re-engineering of workflows, making governance a core engineering function rather than a separate compliance afterthought.
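As one small example of what embedded bias testing can look like in practice, the sketch below computes a selection-rate ratio across demographic groups. The metric choice and the 0.8 review threshold are common fairness heuristics assumed here for illustration; they are not figures taken from the act.

```python
# Hedged sketch of a simple bias check that could run in the development lifecycle.
from collections import defaultdict

def selection_rate_ratio(records: list[tuple[str, bool]]) -> float:
    """records: (group, was_selected) pairs. Returns min rate / max rate across groups."""
    counts = defaultdict(lambda: [0, 0])        # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = [sel / total for sel, total in counts.values() if total]
    return min(rates) / max(rates) if rates and max(rates) > 0 else 1.0

sample = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
ratio = selection_rate_ratio(sample)
print(f"selection-rate ratio: {ratio:.2f}")     # flag for human review if below ~0.8
```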
The law mandates that high-risk systems be designed with appropriate interfaces and that human overseers are adequately trained and empowered to override decisions. This highlights the need for a new discipline of “human-in-the-loop engineering,” which focuses on building systems that actively counteract human cognitive biases and facilitate meaningful human intervention, thereby ensuring that the oversight is truly effective.
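A minimal sketch of one such human-in-the-loop intervention point is shown below, assuming a hypothetical Decision record, confidence threshold, and review queue; a real design would need to satisfy the act's human-oversight requirements for the specific system.

```python
# Minimal, hypothetical sketch of a human-in-the-loop override point.
# The dataclass, threshold, and review queue are illustrative assumptions,
# not constructs defined by the EU AI Act.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    score: float          # model output, e.g. a credit or hiring score
    explanation: str      # human-readable rationale shown to the reviewer

REVIEW_THRESHOLD = 0.6    # low-confidence decisions are routed to a person

def route_decision(decision: Decision, review_queue: list) -> str:
    """Route low-confidence decisions to a human reviewer instead of applying
    them automatically; the reviewer can confirm, amend, or reject."""
    if decision.score < REVIEW_THRESHOLD:
        review_queue.append(decision)
        return "pending_human_review"
    return "auto_approved"
```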
High-Risk AI System Requirements (Articles 9-15): Obligations and Practical Ambiguities
To effectively navigate the EU AI Act, organizations must move beyond a static, checklist-based approach to compliance. A more robust and sustainable strategy integrates governance directly into the AI development lifecycle. This is often conceptualized as a four-step, iterative governance framework: aligning organizational AI principles with the act's requirements, assessing each AI system's risk classification, translating legal obligations into technical and organizational controls, and mitigating identified gaps on an ongoing basis.
The EU AI Act’s ongoing and complex obligations make manual governance unscalable, driving a shift to Compliance-as-Code (CaC), which embeds automated compliance checks within CI/CD pipelines. AI agents act as compliance guardians, ensuring continuous monitoring, auditing, and enforcement. The vision is a self-updating policy engine that automatically integrates regulatory updates, turning compliance into a built-in, tech-driven process and creating demand for AI governance platforms that make compliance an operational capability rather than a bureaucratic task.
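A minimal sketch of such an automated check is shown below: a gate that fails a CI pipeline when a model's documentation record is incomplete. The model_card.json file name, the required fields, and the exit-code policy are assumptions for illustration, not artifacts defined by the act or by any particular CI system.

```python
# Hypothetical Compliance-as-Code check that could run as a CI pipeline step.
import json
import sys

REQUIRED_FIELDS = [
    "intended_purpose",            # documentation items inspired by the act's
    "risk_classification",         # technical-documentation requirements
    "training_data_summary",
    "human_oversight_measures",
    "last_bias_test_report",
]

def check_model_card(path: str) -> list:
    """Return the list of missing documentation fields for one model card."""
    with open(path, encoding="utf-8") as f:
        card = json.load(f)
    return [field for field in REQUIRED_FIELDS if not card.get(field)]

if __name__ == "__main__":
    missing = check_model_card("model_card.json")
    if missing:
        print(f"Compliance gate failed, missing fields: {missing}")
        sys.exit(1)    # fail the pipeline so the gap is closed before release
    print("Compliance gate passed")
```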
For noncompliance with the prohibitions on certain AI practices, organizations can be fined up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher.
For most other violations, including noncompliance with the requirements for high-risk AI systems, organizations can be fined up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher.
The supply of incorrect, incomplete or misleading information to authorities can result in organizations being fined up to EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher.
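The “whichever is higher” rule means the applicable ceiling scales with company size; the small sketch below illustrates the arithmetic with an assumed turnover figure.

```python
# Simple illustration of the "whichever is higher" rule for administrative fines.
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the applicable ceiling: the fixed amount or the percentage of
    worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Example with an assumed EUR 2 billion worldwide annual turnover and the
# prohibited-practice tier: max(35M, 7% of 2B) = EUR 140M.
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```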
The law entered into force on 1 August 2024, with different provisions going into effect in stages. Some of the most notable dates include 2 February 2025, when the prohibitions on unacceptable-risk AI practices and the AI literacy obligations began to apply; 2 August 2025, when the rules for general-purpose AI models and the governance and penalty provisions took effect; and 2 August 2026, when most remaining provisions, including the requirements for many high-risk AI systems, become applicable, with some high-risk obligations extending to 2 August 2027.
The EU AI Act is establishing a global standard for Responsible AI, pushing organizations to embed safety, transparency, and accountability into their core practices. It marks a shift from reactive compliance to proactive, built-in governance, making responsible AI a continuous process and a driver of operational excellence, trust, and competitive advantage.