Ethical AI Law Institute

Empowering Responsible AI Usage for Practicing Attorneys


Understanding the EU AI Act and When It Comes Into Force


On August 1, 2024, the European Union's Artificial Intelligence Act officially entered into force, marking the most ambitious and comprehensive regulatory framework for artificial intelligence in the world. Entry into force starts a two-year runway toward full application, culminating in August 2026, but the first obligations arrive much sooner: the Act's bans on prohibited AI practices become enforceable in February 2025. For anyone involved in AI governance, development, compliance, or legal advisory, the clock is now ticking.

We’ve covered the EU AI Act before on the Ethical AI Law Institute, but today represents a legal turning point. This is no longer a policy proposal, Commission draft, or trilogue compromise; it is law. The Act was formally adopted in mid-2024 following final approval by the Council of the European Union, and its publication in the EU’s Official Journal in July set today’s entry-into-force date.

Let’s walk through what’s changing now, what’s coming, and what it means for the legal and AI communities.

A Risk-Based Framework with Teeth

At the heart of the EU AI Act is a risk-based classification system that treats AI like the potentially disruptive force it is. Rather than regulate AI by function or sector, the Act targets potential risks: harm to health, safety, fundamental rights, or democratic institutions.

The Act defines four categories:

  1. Unacceptable Risk: Systems that pose a clear threat to individuals or democratic society—such as manipulative behavioral algorithms, biometric categorization based on sensitive traits (e.g., race or religion), or real-time facial recognition in public spaces—are outright banned. These prohibitions are the first to take effect, becoming enforceable in February 2025.
  2. High Risk: AI deployed in critical sectors like employment, law enforcement, education, or infrastructure must meet stringent requirements, including risk assessment, human oversight, data governance, accuracy testing, and transparency. These provisions will be phased in over the next two years.
  3. Limited Risk: These systems must adhere to transparency requirements—such as disclosing that an interaction is with an AI system (e.g., chatbots or synthetic media).
  4. Minimal Risk: Most everyday AI, such as spam filters or video game engines, falls into this category and is not subject to regulation under the Act.

The Act also introduces new oversight bodies, requires conformity assessments for many high-risk systems, and lays a foundation for auditing and market surveillance.

What Takes Effect First?

Today’s entry into force activates the Act’s timeline, even though full application isn’t scheduled until 2026. The first milestone arrives in February 2025, when the following practices become illegal across the EU:

  • AI systems that manipulate human behavior using subliminal techniques or exploit user vulnerabilities (e.g., age, disability, or economic status)
  • Social scoring by public authorities
  • Predictive policing based on profiling and historical data
  • Unconsented scraping of facial images for biometric identification

These bans carry financial penalties of up to €35 million or 7% of global annual turnover, whichever is higher. This makes the Act not just the most detailed AI legislation to date, but also one of the most enforceable.
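The penalty ceiling is simply the greater of the two figures. As an illustrative sketch of that arithmetic (the turnover amounts below are hypothetical, chosen only to show how the cap is computed, not drawn from any enforcement action):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for prohibited-practice violations under
    the EU AI Act: the greater of EUR 35 million or 7% of global
    annual turnover."""
    # Multiply before dividing so round turnover figures stay exact.
    return max(35_000_000, global_annual_turnover_eur * 7 / 100)

# A firm with EUR 1 billion in turnover: 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the turnover figure governs.
print(max_fine_eur(1_000_000_000))   # 70000000.0

# A firm with EUR 100 million in turnover: 7% is only EUR 7 million,
# so the EUR 35 million floor applies instead.
print(max_fine_eur(100_000_000))
```

In practice, of course, regulators set fines within this ceiling based on the gravity of the violation; the formula defines only the maximum exposure.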

A Phased Rollout: What’s Next?

According to the European Commission’s implementation roadmap:

  • February 2025: Enforcement of the bans on prohibited practices formally begins.
  • August 2025: New requirements apply to general-purpose AI models, including disclosure obligations and model documentation.
  • August 2026: Most of the obligations for high-risk AI systems—including the risk management framework and conformity assessments—will take effect.
  • August 2027: Integration obligations apply to high-risk AI embedded in products regulated by EU safety legislation. Examples include medical devices and industrial machinery.

The delayed rollout provides companies, legal teams, and developers with time to prepare, but that time is now finite.

From a legal standpoint, the EU AI Act functions as a hybrid: part consumer protection statute, part digital rights charter, and part regulatory compliance regime. It creates a new category of legal risk that goes beyond its own enforcement provisions. Failure to comply doesn’t just expose companies to administrative fines under the Act itself. It can also trigger cascading liability under existing EU frameworks like the General Data Protection Regulation (GDPR), product safety laws, and anti-discrimination directives. For companies deploying AI across sectors, this overlapping exposure significantly raises the stakes.

The Act also redefines the role of legal counsel. In-house attorneys and compliance officers can no longer treat AI as a peripheral concern: legal teams must now integrate AI risk assessments into routine compliance reviews, especially at companies that operate within the EU or intend to export AI-enabled products or services into the European market. Questions about training data provenance, explainability, or model behavior were once the domain of data scientists; now they are quickly becoming core legal considerations.

Finally, the EU AI Act is already producing ripple effects beyond European borders. Although its provisions technically apply only within the European Union, the global implications are clear. Companies with international footprints will find it easier—and safer—to harmonize with EU requirements rather than maintain separate regulatory postures. Meanwhile, jurisdictions such as Canada, Brazil, and even the United States are looking to the EU’s risk-based approach as a template for their own evolving frameworks. As a result, the AI Act is not just shaping compliance obligations in Europe. It is driving a broader transformation in how AI is governed worldwide.

A Template for Global Regulation?

In many ways, the EU AI Act is to artificial intelligence what the GDPR was to data privacy. By setting the bar high, the EU hopes to shape global norms. Whether that ambition succeeds depends in part on what the U.S., China, and other major players do next.

As TechRepublic pointed out in its overview of the Act, this regulation could usher in a new global compliance race, with AI companies tailoring products to EU standards by default to ensure market access. That article rightly calls the Act “a landmark” and notes how it is forcing business leaders to engage with AI ethics at the design stage, rather than as an afterthought.

Final Thoughts

For developers and tech companies, the EU AI Act will force design decisions. It will reshape compliance strategies and shift legal liability frameworks. For legal professionals, it demands fluency not only in law, but in algorithmic systems, model transparency, and digital rights.

We’ll be following this closely and updating our coverage as enforcement deadlines near.


