Ethical AI Law Institute

Empowering Responsible AI Usage for Practicing Attorneys


[Image: An EU flag representing the EU AI Act Code of Practice]

EU’s AI Code: A New Era of Compliance

The Code That Isn’t Law—Yet

The European Union’s new Code of Practice for General-Purpose AI arrived with little fanfare this July, but it marks a quiet turning point in the global effort to regulate artificial intelligence. The Code is not binding, at least not formally. For lawyers watching the architecture of AI regulation evolve, however, it offers a preview of where the enforcement winds are blowing.

The Code is structured as a voluntary instrument that aims to translate the high-level principles of the EU’s AI Act into something closer to day-to-day operational requirements. It sets expectations for transparency, copyright compliance, risk assessments, and incident reporting, particularly for providers of powerful foundation models like GPT-4, Gemini, and Meta’s LLaMA. For companies with global operations and regulatory exposure, this isn’t just guidance. It’s a test: comply now and shape the future, or resist and accept the risk of scrutiny.

Signing On: A Strategic Play by Microsoft and OpenAI

Microsoft and OpenAI are signing on, not because they lack concerns, but because compliance offers predictability, and predictability is good for business. Microsoft’s enterprise-focused AI tools like Copilot for Microsoft 365 depend on trust: trust with regulators, with procurement officers, and with corporate legal departments across Europe. Signing onto the Code signals alignment with EU values and reduces the threat of disruptive enforcement down the line.

OpenAI’s reasoning follows a similar path. Even as it experiments with closed research cycles and evolving licensing models, OpenAI seems to understand the value of engagement. By signing the Code, it becomes a participant in the rulemaking process, not just a target of it.

These companies are embracing the Code out of a lawyer’s sense of prudence. Their focus is on risk management and reputation shielding, but they also see a chance to help define the contours of what “compliance” will eventually mean under the EU AI Act.

Meta’s Refusal: Control Over Cooperation

Meta took a different route. The company declined to sign, arguing that the Code exceeds what the AI Act actually requires. Joel Kaplan, Meta’s head of global affairs, criticized the framework as imposing unnecessary legal uncertainty and operational burden. More pointedly, Meta warned that adoption of the Code could “throttle the development and deployment” of frontier AI in Europe.

That decision makes strategic sense when viewed through Meta’s open-source lens. With its LLaMA model family, Meta is betting on a more decentralized future for AI, one that preserves its flexibility, rapid iteration, and minimal legal entanglement. Signing the Code would invite oversight Meta doesn’t want, particularly over how it shares and scales its models.

Rather than soft signaling to regulators, Meta is choosing to preserve operational freedom. But that freedom comes at a cost: reputational exposure, potential legal risk, and an adversarial stance toward a regulator that has already shown it is willing to adjust the law in response to industry pressure.

A French Pressure Point

We’ve seen that pressure before. During final negotiations over the AI Act itself, France successfully pushed to soften the law’s impact on open-source and foundation model providers. The push wasn’t altruistic. It was driven by a desire to protect companies like Mistral AI, France’s crown jewel in the emerging European AI market.

In the end, France got much of what it wanted. Language in the AI Act was revised to reduce the burdens on open-source models and to narrow the scope of what counts as a high-risk system. That episode made clear that national interests, particularly in AI-heavy economies, can shift the shape of “European” regulation.

France’s earlier resistance to overregulation now echoes in Meta’s refusal to play ball. But unlike France, Meta lacks a sovereign seat at the negotiating table. It’s trying to hold the line alone.

For legal practitioners advising AI developers or enterprise adopters, the Code matters even if your clients aren’t legally bound by it. It reflects where the European Commission wants the AI Act to go: how enforcement will likely unfold, what expectations will be placed on documentation, and how compliance will be judged in practice.

The real legal risk is not simply failing to sign; it’s falling out of step with emerging norms. Clients may find themselves answering to procurement offices, regulators, or industry partners who begin to treat the Code as the default, even if no one says so explicitly.

And then there’s the broader lesson: in AI regulation, the law is only part of the story. The rest is being written in frameworks like this one—where companies signal, shape, and negotiate the rules that come next.


