What is the AI Act of the European Union (EU AI Act)?

Part I: The EU AI Act as a Foundational Framework

What is the EU AI Act?

The Artificial Intelligence Act of the European Union, also known as the EU AI Act or the AI Act, is a law that governs the development and use of artificial intelligence (AI) in the European Union (EU). The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose.

Considered the world’s first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others.

The act also creates rules for general-purpose artificial intelligence models, such as OpenAI’s GPT and DALL-E models, Midjourney, and Meta’s open-source Llama 3 foundation model.

Who does the EU AI Act apply to?

The EU AI Act applies to multiple operators in the AI value chain, such as providers, deployers, importers, distributors, product manufacturers and authorized representatives.

Worthy of mention are the definitions of providers, deployers and importers under the EU AI Act. The act covers all AI-driven software and services, from general-purpose AI and SaaS platforms to sector-specific applications (finance, healthcare, HR, education, infrastructure) and AI embedded in products, with obligations varying by risk level.

A. Providers: Providers are people or organizations that develop an AI system or general-purpose AI (GPAI) model, or have it developed on their behalf, and who place it on the market or put the AI system into service under their name or trademark.

B. Deployers: Deployers are people or organizations that use AI systems. For example, an organization that uses a third-party AI chatbot to handle customer service inquiries would be a deployer.

C. Importers: Importers are people or organizations located or established in the EU that bring to the EU market AI systems bearing the name or trademark of a person or company established outside the EU.

D. Application outside the EU: The EU AI Act also applies to providers and deployers outside of the EU if their AI, or the outputs of the AI, are used in the EU. For example, suppose a company in the EU sends data to an AI provider outside the EU, who uses AI to process the data, and then sends the output back to the company in the EU for use. Because the output of the provider’s AI system is used in the EU, the provider is bound by the EU AI Act.

Providers outside the EU that offer AI services in the EU must designate authorized representatives in the EU to coordinate compliance efforts on their behalf.

What requirements does the EU AI Act impose?

The EU AI Act regulates AI systems based on risk level. Risk here refers to the likelihood and severity of the potential harm. Some of the most important provisions include:

  • a prohibition on certain AI practices that are deemed to pose unacceptable risk,
  • standards for developing and deploying certain high-risk AI systems,
  • rules for general-purpose AI (GPAI) models.

Scope by Risk Categories

The Act is risk-based:

  • Prohibited AI: Certain uses banned outright (e.g. social scoring, manipulative subliminal techniques, untargeted facial scraping).
  • High-Risk AI: AI in critical services (hiring, credit scoring, medical diagnosis, essential infrastructure). These require strict obligations: risk assessments, conformity checks, data governance, human oversight, transparency.
  • Limited Risk AI: Must meet transparency rules (e.g. chatbots must disclose they’re AI).
  • Minimal Risk AI: (e.g. AI in video games, spam filters) have no significant obligations.
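
As a rough illustration of how an organization might triage an AI inventory against these tiers, the sketch below maps internal use-case labels to a risk tier. The categories and labels are simplified assumptions for illustration only; they are not the Act’s legal definitions, and a real classification has to follow the Act and its annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical use-case buckets for a first-pass inventory triage only.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation", "untargeted_face_scraping"}
HIGH_RISK_USES = {"hiring", "credit_scoring", "medical_diagnosis", "critical_infrastructure"}
LIMITED_RISK_USES = {"customer_chatbot", "content_generation"}

def triage(use_case: str) -> RiskTier:
    """First-pass mapping of an internal use-case label to an EU AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("hiring"))       # RiskTier.HIGH
print(triage("spam_filter"))  # RiskTier.MINIMAL
```
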
Prohibited AI practices

The EU AI Act explicitly lists certain prohibited AI practices that are deemed to pose an unacceptable level of risk. For example, developing or using an AI system that intentionally manipulates people into making harmful choices they otherwise wouldn’t make is deemed by the act to pose unacceptable risk to users, and is a prohibited AI practice.

The EU Commission can amend the list of prohibited practices in the act, so it is possible that more AI practices may be prohibited in the future. A partial list of prohibited AI practices at the time this article was published includes:

  • Social scoring systems: Systems that evaluate or classify individuals based on their social behavior, leading to detrimental or unfavorable treatment in social contexts unrelated to the original data collection, or treatment that is unjustified or disproportionate to the gravity of the behavior
  • Emotion recognition systems at work and in educational institutions, except where these tools are used for medical or safety purposes
  • AI used to exploit people’s vulnerabilities (for example vulnerabilities due to age or disability)
  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
  • Biometric identification systems that identify individuals based on sensitive characteristics
  • Specific predictive policing applications
  • Law enforcement use of real-time remote biometric identification systems in public (unless an exception applies, and pre-authorization by a judicial or independent administrative authority is generally required).

Standards for high-risk AI

AI systems are considered high-risk under the EU AI Act if they are a product, or safety component of a product, regulated under specific EU laws referenced by the act, such as toy safety and in vitro diagnostic medical device laws. The act also lists specific uses that are generally considered high-risk, including AI systems used:

  • in employment contexts, such as those used to recruit candidates, evaluate applicants and make promotion decisions
  • in certain medical devices
  • in certain education and vocational training contexts
  • in judicial and democratic processes, such as systems intended to influence the outcome of elections
  • to determine access to essential private or public services, including systems that assess eligibility for public benefits and evaluate credit scores.
  • in critical infrastructure management (for example, water, gas and electricity supplies and so on)
  • in any biometric identification systems that are not prohibited, except for systems whose sole purpose is to verify a person’s identity (for example, using a fingerprint scanner to grant someone access to a banking app).

Rules for general-purpose AI (GPAI) models

The EU AI Act creates separate rules for general-purpose AI models (GPAI). Providers of GPAI models will have obligations including the following:

  • Establishing policies to respect EU copyright laws.
  • Writing and making publicly available detailed summaries of training data sets.

Part II: Operationalizing Compliance From Principles to Practice

A broad consensus exists among ethicists and technologists on the core principles of Responsible AI, including fairness, accountability, transparency, safety, robustness, and privacy. The EU AI Act operationalizes these abstract principles by translating them into a set of legally binding, technical, and organizational requirements for high-risk AI systems.

The Act moves the conversation from the theoretical to the tangible, compelling organizations to embed responsible practices directly into their engineering and governance frameworks. The core obligations for high-risk AI systems are detailed across several articles of the Act.

  1. Risk Management System (Article 9): A documented, continuous risk management process is required throughout the entire AI lifecycle. This is not a one-time exercise but an ongoing duty to identify, evaluate, and mitigate both known and foreseeable risks to health, safety, and fundamental rights.
  2. Data and Data Governance (Article 10): The Act mandates that AI systems must be trained, validated, and tested on datasets that are relevant, representative, and, to the extent possible, free of errors and biases. This is a direct measure to prevent discriminatory outcomes, as biased training data can lead to unfair or harmful outputs.
  3. Technical Documentation & Record-Keeping (Articles 11 & 12): Providers must maintain detailed technical documentation to demonstrate compliance. This includes information on the system’s design, intended purpose, training data sources, and risk controls. Additionally, high-risk systems must automatically log events in a tamper-resistant manner to ensure traceability, accountability, and post-market monitoring (a minimal logging sketch follows this list).
  4. Transparency and Information (Article 13): Providers are obligated to supply clear and adequate information to the system’s deployer. This includes detailing the system’s intended purpose, its limitations, and its performance characteristics to enable the deployer to use it appropriately and responsibly.
  5. Human Oversight (Article 14): A key pillar of the Act, this provision requires that high-risk systems be designed to allow for effective human oversight. This means a human operator must be able to properly understand the system’s capabilities and limitations, monitor its operation, and have the ability to disregard, override, or even stop its output.
  6. Accuracy, Robustness, and Cybersecurity (Article 15): High-risk systems must maintain a high level of accuracy, robustness, and cybersecurity throughout their lifecycle to resist errors, misuse, and adversarial attacks.
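
To make the record-keeping duty in item 3 concrete, here is a minimal sketch of tamper-evident event logging: each record embeds a hash of the previous record, so any later alteration breaks the chain and is detectable. The field names and chaining scheme are illustrative assumptions, not a format prescribed by the Act.

```python
import hashlib
import json
import time

class AuditLog:
    """Hash-chained event log: each record embeds the hash of the previous record."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first record

    def record(self, event: str, details: dict) -> dict:
        entry = {
            "timestamp": time.time(),
            "event": event,
            "details": details,
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered or removed."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

# Hypothetical events for a credit-scoring system.
log = AuditLog()
log.record("prediction", {"model": "credit-scoring-v3", "applicant": "A-102", "score": 0.71})
log.record("human_override", {"operator": "analyst-7", "decision": "approve"})
print(log.verify())  # True
```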

These requirements demonstrate a fundamental shift in the AI governance landscape. The Act transforms the abstract concepts of ethical AI into tangible engineering reality. Organizations must now embed processes like continuous risk management, bias testing, and automated documentation directly into their AI development lifecycle. This necessitates a complete re-engineering of workflows, making governance a core engineering function rather than a separate compliance afterthought.


The law mandates that high-risk systems be designed with appropriate interfaces and that human overseers are adequately trained and empowered to override decisions. This highlights the need for a new discipline of “human-in-the-loop engineering,” which focuses on building systems that actively combat human cognitive biases and facilitate meaningful human intervention, thereby ensuring that the oversight is truly effective.
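
One way to make such oversight operational is to route low-confidence or high-impact decisions to a human reviewer instead of acting on them automatically. The pattern below is an illustrative sketch; the confidence threshold, field names, and reviewer callback are assumptions, not requirements taken from the Act.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: str       # the system's proposed outcome
    confidence: float  # model confidence in [0, 1]
    rationale: str     # explanation surfaced to the human reviewer

def decide_with_oversight(
    proposal: Decision,
    human_review: Callable[[Decision], Optional[str]],
    confidence_threshold: float = 0.9,
) -> str:
    """Apply the model's outcome only when confidence is high; otherwise defer to a human,
    who may confirm the proposal or substitute their own decision."""
    if proposal.confidence >= confidence_threshold:
        return proposal.outcome
    override = human_review(proposal)
    return override if override is not None else proposal.outcome

def reviewer(decision: Decision) -> Optional[str]:
    # In a real system this would open a review task in a case-management UI.
    print(f"Review needed: {decision.outcome} ({decision.confidence:.2f}) - {decision.rationale}")
    return "manual_review"

result = decide_with_oversight(
    Decision(outcome="reject_application", confidence=0.62, rationale="thin credit file"),
    human_review=reviewer,
)
print(result)  # manual_review
```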

High-Risk AI System Requirements (Articles 9-15): Obligations and Practical Ambiguities

Integrating Compliance into the AI Lifecycle

To effectively navigate the EU AI Act, organizations must move beyond a static, checklist-based approach to compliance. A more robust and sustainable strategy involves integrating governance directly into the AI development lifecycle. This is often conceptualized as a four-step, iterative governance framework: Alignment, Assessment, Translation, and Mitigation. The process begins with:

  1. Alignment, where the goals of the AI system are identified and articulated, and high-level principles are translated into specific, measurable technical and process requirements. This requires collaboration between technical and non-technical stakeholders, as a deep understanding of the AI’s purpose and its potential societal impact is essential.
  2. Assessment involves evaluating the system against the requirements defined during the alignment phase. This includes both technical evaluations for bias, drift, and explainability, as well as non-technical activities like reviewing harms analyses (a toy fairness check follows this list).
  3. Translation is the process of turning the raw assessment data into meaningful and understandable insights for all stakeholders, including business leaders and legal teams.
  4. Mitigation involves taking proactive action, both technical (e.g., retraining a model) and non-technical (e.g., updating user policies), to prevent failures and address risks identified in the earlier steps.
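
As one concrete example of the Assessment step above, a simple fairness check compares favorable-outcome rates across groups (a demographic parity difference). The data, group labels, and tolerance below are made up for illustration; real assessments would use the metrics and thresholds agreed during Alignment.

```python
def demographic_parity_difference(outcomes: list[int], groups: list[str]) -> float:
    """Largest gap in favorable-outcome rate between any two groups (0 = perfectly even)."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

# Illustrative data: 1 = favorable outcome (e.g., loan approved), 0 = unfavorable.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Parity gap: {gap:.2f}")  # 0.20 in this toy sample
assert gap <= 0.25, "Gap exceeds the tolerance agreed during Alignment"
```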

The EU AI Act’s ongoing and complex obligations make manual governance unscalable, driving a shift to Compliance-as-Code (CaC): embedding automated compliance checks within CI/CD pipelines. AI agents can act as compliance guardians, supporting continuous monitoring, auditing, and enforcement. The vision is a self-updating policy engine that automatically integrates regulatory updates, turning compliance into a built-in, tech-driven process and creating demand for AI governance platforms that make compliance an operational capability rather than a bureaucratic task.
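
A minimal illustration of this idea is a script run as a CI step that fails the build when required governance artifacts are missing. The file name, fields, and repository convention below are assumptions made for the sketch, not a standard defined by the Act or by any particular tool.

```python
# compliance_gate.py - a minimal Compliance-as-Code sketch for a CI pipeline step.
# Assumed convention: each model repository ships a model_card.json file.
import json
import sys

REQUIRED_FIELDS = [
    "intended_purpose",       # usage information for deployers (Article 13)
    "training_data_summary",  # data governance documentation (Article 10)
    "risk_assessment_ref",    # link to the risk management file (Article 9)
    "human_oversight_plan",   # oversight measures (Article 14)
]

def check_model_card(path: str) -> list[str]:
    """Return the required fields that are missing or empty in the model card."""
    with open(path) as f:
        card = json.load(f)
    return [field for field in REQUIRED_FIELDS if not card.get(field)]

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "model_card.json"
    missing = check_model_card(path)
    if missing:
        print(f"Compliance gate FAILED - missing: {', '.join(missing)}")
        sys.exit(1)  # a non-zero exit code fails the CI job
    print("Compliance gate passed")
```

In a pipeline this would run as an ordinary build step (for example, `python compliance_gate.py model_card.json`), so a release cannot ship without its governance documentation.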

EU AI Act fines

For noncompliance with the prohibitions on certain AI practices, organizations can be fined up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher.

For most other violations, including noncompliance with the requirements for high-risk AI systems, organizations can be fined up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher.

The supply of incorrect, incomplete or misleading information to authorities can result in organizations being fined up to EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher.
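
Because each cap is the higher of a fixed amount and a share of worldwide annual turnover, the applicable ceiling is a simple maximum. The turnover figure below is hypothetical, purely to show the arithmetic:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """EU AI Act fine ceiling: the higher of the fixed cap and a share of turnover."""
    return max(fixed_cap_eur, turnover_eur * turnover_share)

turnover = 2_000_000_000  # hypothetical worldwide annual turnover of EUR 2 billion

print(fine_ceiling(turnover, 35_000_000, 0.07))  # prohibited practices: 140,000,000.0
print(fine_ceiling(turnover, 15_000_000, 0.03))  # most other violations: 60,000,000.0
print(fine_ceiling(turnover, 7_500_000, 0.01))   # misleading information: 20,000,000.0
```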

When does the EU AI Act take effect?

The law entered into force on 1 August 2024, with different provisions of the law going into effect in stages. Some of the most notable dates include:

  • From 2 February 2025, the prohibitions on certain AI practices will take effect.
  • From 2 August 2025, the rules for general-purpose AI will take effect for new GPAI models. Providers of GPAI models that were placed on the market before 2 August 2025 will have until 2 August 2027 to comply.
  • From 2 August 2026, the rules for high-risk AI systems will take effect.
  • From 2 August 2027, the rules for AI systems that are products or safety components of products regulated under specific EU laws will apply.
  • By 31 December 2030, AI systems that are components of certain large-scale EU IT systems and that were placed on the market before 2 August 2027 must be brought into compliance.

Conclusion: The Future of AI Governance

The EU AI Act is establishing a global standard for Responsible AI, pushing organizations to embed safety, transparency, and accountability into their core practices. It marks a shift from reactive compliance to proactive, built-in governance, making responsible AI a continuous process and a driver of operational excellence, trust, and competitive advantage.
