Governing AI: A First Principles Approach
Effective technology policy starts by getting back to basics.
Welcome to the a16z AI Policy Brief. We’ll cover the most important AI public policy and legal issues—bridging Washington and Little Tech—and what they mean for the future of US innovation and competitiveness. Thanks for reading.
When a new technology emerges, the instinct to regulate it quickly is understandable. But effective governance starts not with instinct, nor with imitation of past frameworks—it starts with first principles. What, exactly, is the thing we are regulating? How does it differ from what came before? And what are we trying to achieve by governing it?
A New Computing Paradigm
Artificial intelligence is not just another product of the tech industry—it represents a new control layer of computing, altering the way we engage with machines. Traditional computing has been deterministic: when you input A, you reliably get output B. And we have typically interacted with computers through code. That paradigm has defined software for decades.
AI changes that. These systems operate on probabilistic models that approximate human reasoning: instead of programming every outcome, we train models to predict, infer, and respond. And we now interact with AI the way we do with people, speaking in natural language rather than code. This represents a profound shift.
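To make the contrast concrete, here is a minimal sketch in Python (our illustration, not anything from the original piece): the deterministic function below always maps the same input to the same output, while the toy `chat` function, a hypothetical stand-in for querying a language model, samples its answer from a probability distribution, so the same prompt can produce different responses on different runs.

```python
import random

# Traditional computing: deterministic. The same input
# always produces the same output, on every run.
def sales_tax(amount: float, rate: float = 0.08) -> float:
    return round(amount * rate, 2)

assert sales_tax(100.0) == 8.0  # holds on every execution

# AI-era computing: probabilistic. A model defines a distribution
# over possible outputs. This toy weighted distribution is a
# stand-in for a real language model, not an actual model call.
def chat(prompt: str) -> str:
    candidates = {
        "It depends on your jurisdiction.": 0.5,
        "Typically around 8%, but check local law.": 0.3,
        "I'd need more context to answer that.": 0.2,
    }
    responses, weights = zip(*candidates.items())
    return random.choices(responses, weights=weights)[0]

print(chat("What sales tax applies to this purchase?"))  # varies run to run
```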
Just as the internet introduced a new way of using computers to connect with one another, transforming our economy, society, and global interactions in profound ways, the current generation of AI represents a change of even greater magnitude. History tells us that when such shifts occur—especially with emerging technologies that are not fully understood—regulation must start with a return to first principles, grounding our policy approach in managing real, not potential, risks.
Regulating Fire, Not Its Invention
Every technology brings both promise and peril. Fire can be used to cook food and to commit arson. The internet democratized access to information, but at the same time continues to challenge long-standing frameworks for privacy, commerce, and speech. The answer is not to prohibit the invention of fire—or any other technology—but to manage the risks that arise from it effectively. What is the best way to achieve this?
Some advocate for adopting a precautionary principle—an approach that attempts to regulate potential harms rather than actual risks. This sounds appealing in theory, but it is impractical at best and undermines human progress at worst. It is the equivalent of saying: because fire can be used for arson, we should limit its use everywhere and allow fire only for cooking food. That proposal is absurd, and even if it could be implemented, it would have stifled human progress. A more pragmatic approach is simply to prohibit arson, not fire itself. Precaution as policy inherently deters innovation and undermines societal advancement.
A technology-neutral, use-based approach, by contrast, starts from the recognition that the purpose of regulation is to address risks that arise from the use of a technology, not to restrain its development as a precaution against every conceivable risk, no matter how remote. It asks where the risks are concrete, significant, and unmitigated, and directs regulation there. This is how the law works today: through regulatory frameworks and legal accountability for identifiable risks. And because laws of general applicability govern conduct regardless of the technology used, they neither favor older technologies over newer ones nor need revision as the technology inevitably changes. A well-calibrated risk management framework allows innovation to flourish, ensuring that society can benefit from new technologies while mitigating their harms.
Focus on Use, Not Development
Many of the most cited “AI risks” are already covered by existing law. Discrimination in lending is covered by fair lending laws. Deceptive business practices are covered by consumer protection statutes. The use of AI is not a shield from these existing obligations.
A true risk management approach builds on this foundation: mapping where AI introduces new risks, identifying real gaps in oversight, and addressing those gaps proportionately. A precautionary approach, by contrast, inevitably ends up regulating the math and code used to build AI models, restricting development without preventing misuse.
The Role of the States and the Need for Federal Coherence
States have long used their police powers to protect citizens from harmful uses of technology, from privacy protections to product safety. That is appropriate and necessary.
But some state proposals now aim to define and restrict AI model development itself. That crosses into federal territory. Creating a national market for AI is a federal responsibility, both constitutionally and practically. Fragmenting our national technology infrastructure through a patchwork of state laws would not only invite legal challenges; it would also undermine our ability to manage AI risk coherently and weaken US competitiveness at a moment when the international rivalry for AI leadership is intense and the geopolitical stakes are enormous.
Open Source and US Competitiveness
America’s leadership in open source has long been a cornerstone of our technological advantage. But that edge is narrowing. Open source AI models developed in China are beginning to match, and sometimes exceed, the capabilities of US models.
Regulatory frameworks that hinder the development of open source models further undermine a key component of both AI safety and US leadership. Open source models can be studied and assessed for concrete risks in a way that furthers the responsible deployment and use of AI. And whether future AI applications are built on US or Chinese technology will be a key driver of each nation’s ability to shape global norms around this new control layer of computing. The internet developed as an open network that democratized access to information largely because it was built on technology that reflected those values. The impact of AI on global norms promises to exceed that of the internet.
From Precaution to Pragmatism
The first principles of governing AI are simple, but powerful:
Avoid precautionary approaches that inhibit innovation.
Adopt a regulatory framework that focuses on concrete and significant risks, deterring misuse without impeding innovation.
Ensure regulations are technology-neutral to avoid status quo bias and premature obsolescence.
Monitor emerging risks for gaps in existing regulatory frameworks as technology evolves.
Maintain national standards for technology development to avoid balkanization.
Promote open source innovation as a competitive and democratic strength.
This is not a call for a “light touch” or “deregulation.” It is a call for effective regulation grounded in technological reality, constitutional boundaries, and practical governance. AI is too important for reactive measures that miss the mark and hinder innovation instead. Getting this right will determine not just the future of AI, but the future trajectory of progress in America and other free societies around the world.