I want to be clear about what I am not arguing. I am not arguing that artificial intelligence systems should be unregulated. I am not arguing that the risks associated with powerful AI models are imaginary. I am not arguing for the kind of libertarian technology policy that treats regulatory oversight as inherently hostile to progress. What I am arguing is that the regulatory frameworks now being drafted in Washington, Brussels, and Beijing share a set of structural assumptions that will systematically advantage incumbents, entrench existing distributions of power, and make it harder, not easier, to build AI systems that are genuinely safe and beneficial.
The compliance burden problem is the most obvious and the least discussed. Every major AI regulation proposal currently in circulation requires extensive pre-deployment documentation, third-party auditing, incident reporting infrastructure, and ongoing monitoring systems. For a company with thousands of engineers and a multi-billion-dollar legal and compliance operation, this is a manageable overhead cost. For a research lab, a university spin-out, or a startup in a lower-income country trying to build AI tools for local agricultural or medical needs, these requirements are not manageable at all. The effect, intended or not, is to reserve participation in the AI economy for organizations large enough to absorb the compliance overhead, which is to say, for the organizations that are already dominant.
The audit problem is subtler but more consequential. Current proposals envision third-party audits of AI systems to assess safety and compliance. But auditing an AI model is not like auditing a bridge or a drug. The behavior of a large language model or a reinforcement learning system depends on the interaction of billions of parameters, on the distribution of its training data, on the specific contexts in which it is deployed, and on emergent properties that no pre-deployment audit can fully characterize. A regulatory framework built on the premise that AI systems can be certified as safe before deployment misunderstands the technology in a way that will produce compliance theater rather than actual safety improvements.
"The goal of AI regulation should be to make it easier to build AI that is genuinely safe and beneficial, not to make it expensive to compete with the companies that already exist."
— Marcus T. Webb, TWT Opinion Contributor
What would better regulation look like? It would focus on outcomes rather than processes. Instead of mandating how AI systems must be built and documented, it would mandate what they must not do, and it would create liability frameworks that give companies strong incentives to avoid those outcomes. It would create safe harbors for open-source and research applications, making it practical for non-commercial actors to participate in safety research. It would invest in shared public infrastructure (datasets, evaluation tools, red-teaming frameworks) that reduces the cost of responsible development rather than merely raising the cost of irresponsible deployment.
The most important thing regulators can do right now is resist the lobbying of companies that have every incentive to support regulations they helped design. The major AI labs are not opposed to regulation; they are opposed to regulations that would disadvantage them relative to competitors. When incumbents enthusiastically support a regulatory framework, that is not evidence that the framework is good for the public — it is evidence that the framework is good for the incumbents. Regulators who want to do right by the public should treat that enthusiasm as a warning sign, not as validation.
