Ethical AI Law Institute

Empowering Responsible AI Usage for Practicing Attorneys


The Challenge of AI: Can Adaptive Regulation Keep Up?


The pace of artificial intelligence is staggering. In just the past year, generative AI has moved from the margins of public awareness to the center of global conversations about business, education, and national security. But while AI is moving fast, the law is, by design, slow. This raises an urgent question: can our democratic systems keep up with the rapid evolution of AI?

As governments scramble to regulate AI, Europe has taken the lead with the world’s most ambitious attempt so far: the EU Artificial Intelligence Act. I recently listened to some firm criticism of the EU AI Act, along with a novel approach to AI regulation through the tax code proposed by legal scholar Reuven S. Avi-Yonah. Like many scholars, Professor Avi-Yonah cautions that AI’s breathtaking speed makes traditional regulatory approaches unworkable. I agree. To succeed, AI regulation must break with tradition and adopt a more flexible, adaptive model, one that keeps pace with the technology without undermining the deliberate, representative nature of lawmaking.

Legislatures Are Meant to Be Slow—and That’s a Good Thing

Democracies don’t move fast, nor should they. Legislative bodies are intentionally deliberative: they are built to give voice to competing interests, prevent rash decisions, and ensure that laws reflect the will of the people over time. This slowness is often frustrating, but it is a fundamental safeguard of freedom and stability.

History offers plenty of examples where deliberate slowness served society well. Major civil rights legislation, environmental protections, and financial reforms have all taken years, sometimes decades, to craft and implement. The result, ideally, is law that is resilient, balanced, and protective of individual rights.

A challenge arises, however, when the object of regulation moves faster than the process of lawmaking. Technologies can now transform entire economies and societies in months rather than years, and regulation’s inherent slowness becomes a vulnerability.

The EU AI Act: Ambitious but Already Under Pressure

The European Union’s Artificial Intelligence Act is the first comprehensive attempt by a major government to regulate AI technologies. It classifies AI systems by risk level—unacceptable, high, limited, and minimal—with corresponding regulatory obligations for each tier.

The problem? By the time the Act fully comes into force in August 2025, the AI landscape may have already shifted dramatically. AI capabilities are improving exponentially, with new models, uses, and risks emerging faster than lawmakers can react. This is not just speculation. Business leaders, including executives from Airbus, Philips, Meta, and OpenAI, are urging the EU to delay the Act, citing complexity, legal uncertainty, and fears of stifling innovation. These concerns weigh particularly heavily on smaller firms (Financial Times).

This pushback illustrates a deeper problem: static regulation is ill-suited to dynamic technologies. Ambitious and carefully constructed as it is, the EU AI Act exposes the inherent limitations of traditional legislative approaches when applied to rapidly evolving technologies. Its fixed categories and predetermined timelines assume a stability that simply doesn’t exist in the AI landscape. Given this mismatch between regulatory design and technological reality, if conventional lawmaking cannot keep pace with AI development, what alternative approaches might succeed?

The Case for Flexible, Adaptive AI Regulation

The answer may lie in abandoning the presumption that effective regulation must be comprehensive and fixed from the outset. Instead, we need regulatory frameworks designed for continuous evolution—systems that can adapt as quickly as the technologies they govern.

Professor Avi-Yonah argues that tax policy is the answer: using the power of taxation to incentivize desirable behaviors and to underwrite the consequences of undesirable ones. This approach aligns with the adaptive regulation models that legal scholars and policy experts increasingly advocate, in which rules evolve over time in response to changing knowledge, risks, and technologies (Harvard Kennedy School). Adaptive regulation has been used with some success in fields like:

  • Financial regulation (through stress testing and agile supervisory responses)
  • Environmental law (through cap-and-trade systems and iterative rule-making)
  • Medical device approvals (with rolling submissions and accelerated pathways)

In AI, adaptive regulation might include:

  • Sunset clauses: built-in expiration dates requiring laws to be revisited or renewed.
  • Specialized agencies: empowered to issue rules, guidance, and enforcement actions more nimbly than legislatures.
  • Periodic review mandates: requiring formal reassessment at fixed intervals.

These tools allow regulators to move in step with technology without abandoning legal predictability entirely.

Balancing Predictability and Agility

Critics of flexible regulation warn that too much adaptability could create legal uncertainty, undermine the rule of law, and empower unelected bureaucrats to make consequential decisions. Even the novel tax policy approach requires policy-setting decisions by policymakers. These concerns are valid, and any adaptive system must balance agility with transparency, accountability, and stability.

One solution lies in embedding flexibility into the law itself. The EU AI Act, for example, includes mechanisms for delegated acts, allowing some technical rules to be updated without full legislative amendments. Whether this flexibility is sufficient remains to be seen (Lund University).

Rethinking the Role of Law in the Age of AI

At its heart, the challenge of AI regulation involves governing something that is both fast-moving and deeply consequential. We cannot afford either paralysis or overreach.

This is not a new problem. Societies have faced similar challenges with the rise of the internet, biotechnology, and financial engineering. What’s different now is the speed, scale, and potential impact of AI on everything from employment to national security.

Governments must act, but they must act wisely. Adaptive regulation offers a path forward: it respects the need for democratic deliberation while remaining responsive in the face of rapid change.

Conclusion: Building Regulatory Systems for the Long Term

The challenge of governing AI is not merely technical; it is fundamentally about designing institutions that can evolve alongside the technologies they oversee. This requires more than faster decision-making. It demands a reconceptualization of what effective regulation looks like in an age of exponential technological change.

Successful adaptive regulation can create frameworks that respond to emerging risks before they become systemic, encourage beneficial innovation while preventing harmful applications, and maintain democratic legitimacy while operating with greater speed and flexibility than traditional legislative processes allow.

This vision is not utopian—it is practical. We have seen adaptive approaches work in other domains, and we have the institutional knowledge to design them thoughtfully. What we need now is the political will to implement them and the wisdom to learn from early experiments.

The stakes could not be higher. AI technologies will likely reshape labor markets, transform military capabilities, and alter the basic dynamics of human communication and decision-making. These changes will unfold over years, not decades, and they will not wait for perfect regulatory solutions.

But neither should we rush toward imperfect ones. The goal is not to regulate AI quickly but to regulate it wisely, with systems that grow more sophisticated as our understanding deepens and evolve as the technology itself evolves. This means investing in regulatory capacity, fostering expertise within government agencies, and creating mechanisms for ongoing stakeholder engagement.

In the end, the question is not whether AI regulation should be fast or slow, rigid or flexible, but whether we can build regulatory systems capable of learning, adapting, and evolving in service of enduring human values. The technology will continue to advance regardless of our regulatory choices. Our task is to ensure that when it does, we have the institutional capacity to guide that advancement toward beneficial outcomes.

AI may be fast, but human values endure. Lawmakers face the challenge of building a bridge between the two, one strong enough to carry us safely into an uncertain future.
