
Who Regulates AI?

The balance of power between Washington and the states could define America’s AI future

Welcome to the first in a series of conversations from the a16z AI Policy Brief featuring policy experts, researchers, and builders at the forefront of AI development. Each discussion explores ideas and debates shaping AI public policy today.

This question has been at the center of policy debates since AI adoption began to soar in the United States and around the world. States have introduced more than 1,000 AI bills this year alone, spanning Alaska to Florida.

Many of these state laws would impose paperwork-heavy compliance regimes aimed at how AI models are built, rather than targeting the harmful uses of the technology. As we’ve written previously, a patchwork of state AI laws could fracture the national market, hit startups with burdensome compliance costs that make it harder for them to keep pace with larger companies, and undermine US competitiveness in the global AI race.

The Constitution divides power between the federal and state governments for exactly this reason. States have authority to police harmful conduct within their borders. In AI policy, this means that states should regulate harmful in-state uses of AI, like fraud and consumer protection. But Congress governs the national market and interstate commerce. When states attempt to set rules for how AI models are built, they risk exceeding their constitutional authority and setting national standards that bind the entire industry. A state like California, New York, Texas, or Florida could set the rules for the nation.

In this conversation, Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, joins Jai Ramaswamy, chief legal and policy officer, and Matt Perault, head of AI policy at a16z, to discuss why the question of who regulates AI is just as important as how we regulate it.

They explore the constitutional guardrails that keep states and the federal government in their respective roles, the real-world risks of a fragmented AI market, and why keeping rules for AI development consistent across 50 states is critical to America’s ability to lead in AI. It’s a dynamic conversation that ranges from today’s AI bills to the Framers’ debates about federalism. If you’ve ever found yourself in an animated dinner party discussion about the Articles of Confederation, this one’s for you.

Key takeaways from the conversation:

Good intentions, wrong target

“...all of this amounts to state legislators who pride themselves on being very close to their constituents and who really want to be responsive to public policy concerns feeling as though this is a moment where regulating first and asking questions later is a wise strategy, which isn’t necessarily the way good public policy gets made…”

State lawmakers may want to be responsive to concerns from their constituents about the impacts of AI. These valid questions from voters deserve attention. But sweeping bills that aim to set national standards for how AI is built rather than how it’s used within a state’s borders cross the line of state authority and may not actually make residents any safer. The group discusses recommendations for how states can target harmful local uses of AI.

Regulating how models are built reshapes the national market

“But the issue arises when states start to go up the tech stack, when they’re no longer dealing just with AI deployers and instead are trying to regulate the developers themselves, when they’re trying to interfere with the actual technology, how AI is being trained, what thresholds have to be crossed before it’s deployed, all of these questions that I think a lot of folks would agree amount to national questions.”

States have long had the power to govern the use of any technology within their borders—for example, deciding how technologies show up in schools, hospitals, or public spaces. But when they start setting rules for how AI models are built, they’re effectively defining the underlying technology. Particularly considering the way AI models are developed today using open-source and remixed models, that becomes a national question. Regulating use protects people; regulating the technology itself reshapes the national market.

National technologies need national rules

“We are not going to survive as a nation if we persist under our current fragmented economic approach…This needed to be reserved to the national government because to have a fragmented market system was going to undermine the viability of the country itself.”

The group traces today’s AI debate back to the founding era. Under the Articles of Confederation, fragmented state policies threatened to split the nation apart. The Constitution’s Commerce Clause solved that by giving Congress—not individual states—the power to regulate interstate commerce and preserve a unified national market. That same principle applies today: AI models move across state lines, and regulating these systems as a piecemeal patchwork risks repeating the very division the founders warned against.

Local stakes, national benefits

“These are questions of profound significance to voters right now. If you go and ask a voter, what is the most important thing you’re looking for in 2026? It’s affordability. And what is the thing that could drive down the cost of healthcare? AI. What is the thing that can drive down the cost of education? AI. What’s the thing that can start to improve our transportation systems, our energy systems? I can keep going…”

Kevin reminds us that decisions about how we regulate AI will affect how we live: they will have significant implications for the cost of living, productivity, and societal progress. If we put AI to use in the right ways, it can lower prices and improve accessibility for everyday services, from healthcare to education to transportation. Overly broad state laws that slow AI development ultimately make these benefits harder to reach. Getting this balance right impacts real economic outcomes for the people lawmakers serve.

Global competition raises the stakes

“…you’re destroying any hope of competition in this realm, of creating a national market where small and big players can compete on a level playing field.”

While US developers navigate conflicting state laws, competitors abroad, including China, are moving fast to standardize and scale. As states make it harder for startups to keep pace with their deep-pocketed competitors, China is fostering a dynamic innovation ecosystem that enables new companies to emerge. Fragmentation at home means losing ground abroad. To stay competitive, the US needs a consistent national framework that lets builders innovate at speed and scale.

This transcript has been edited lightly for readability.

Kevin Frazier (00:00)

Montanans don’t like Californians, Californians don’t like Floridians, Texans don’t like anyone. I’ve lived in all of these places and no one wants to be under the thumb of any other state.

Kevin Frazier (00:12)

If we let California redesign the fundamental engine of all of our cars, for example, that’s gonna lead to nationwide chaos.

Jai Ramaswamy (00:19)

And this gets us to the question of who regulates AI, which becomes as important as how we regulate AI, because in some senses, the who determines the appropriate how.

Kevin Frazier (00:29)

As soon as you are admitting to trying to take the role of Congress, whether or not you think Congress should or should not be doing something, you are exceeding the authority of the state.

If you go and ask a voter, what is the most important thing you’re looking for in 2026? It’s affordability. And what is the thing that could drive down the cost of healthcare? AI. What is the thing that can drive down the cost of education? AI. What’s the thing that can start to improve our transportation systems, our energy systems? I can keep going, but we only have so much time.

Matt Perault (01:05)

Kevin, Jai, thanks for joining this conversation today.

Kevin Frazier (01:08)

Thanks for having us, Matt.

Jai Ramaswamy (01:08)

It’s great to be here.

Matt Perault (01:10)

So the focus of our conversation is state AI policy. And I think that’s the focus of the conversation because states are really at the forefront of AI regulation. We have seen very few AI bills at the federal level, but lots and lots at the state level. So Kevin, can we start with you? Can you give us an overview of what we’re seeing, both in terms of volume and kind?

Kevin Frazier (01:31)

Yeah, so if we just look at the sheer volume of state legislation, you’re going to need to clear off your bookshelf because there’s approximately 1,100 bills pending before state legislatures in 2025 alone, which is just insane. And we could debate the definition of AI-related bills for a long time. But the fact of the matter is we’re seeing folks from Alaska all the way to Florida debating how to regulate AI. And I think a lot of this just has to do with the immense uncertainty that people perceive in the AI space, right? They read headlines about job displacement occurring in tech, occurring in the creative industries, occurring for just about any profession that has to do with technology. And they’re fearful about, what am I going to do for my constituents in this regard? They’re hearing from constituents about the energy usage of data centers and the water usage of data centers.

And there are obviously very coordinated constituencies that have strong environmentalist interests. And then of course you’re hearing from folks about some of the child safety issues that are arising as we see AI companions become more and more ubiquitous, depending on who you ask, which I’m sure we may get into. But all of this amounts to state legislators who pride themselves on being very close to their constituents and who really want to be responsive to public policy concerns feeling as though this is a moment where regulating first and asking questions later is a wise strategy, which isn’t necessarily the way good public policy gets made, but in terms of talking points when you’re running for reelection in 2026, saying you did something about AI is a pretty good message to share.

And so I think what we’re seeing is just a natural response of a group of state legislators who want to be known as the AI person in their community who are trying to stake out some sort of territory for taking an affirmative response in opposition to what a lot of people I think would say we did with respect to social media. So I often say that this is sort of the social media hangover phase where everyone’s saying, all right, we will not get tech wrong again. So the best thing to do is to just jump on it and hope for the best. And that’s playing out right now.

Matt Perault (03:49)

So obviously 1,100 bills is a large number and there are a range of different types of bills included in that number. Can you give a sense on one side of the spectrum, like what are the bills that you see as positive or benign and then what are the kinds of bills that you’re tracking that you think raise more concerns?

Kevin Frazier (04:05)

So I think the legislation that errs on the side of, let’s do it, that I see as positive, that is responsive to the proper role states are supposed to play, is in these sensitive use cases. So when we’re talking about, for example, how should a doctor use AI in a medical setting in a specific community? That is something that is very much a state question of asking what sort of medical services do we want provided? How do we want to make sure people are receiving care?

That’s something that I think is naturally within the ambit of states. Same goes for a lot of these educational questions. When do you want to see an AI tool being deployed in a K through 12 education setting? There’s no right answer to that question. There’s still a lot of unsettled debates about when and how to introduce kids to AI in an educational context. So states trying to figure that out and mapping requirements onto school districts, for example, makes a heck of a lot of sense. And we’re seeing that these bills are popping up across the country.

But the issue arises when states start to go up the tech stack, when they’re no longer dealing just with AI deployers and instead are trying to regulate the developers themselves, when they’re trying to interfere with the actual technology, how AI is being trained, what thresholds have to be crossed before it’s deployed, all of these questions that I think a lot of folks would agree amount to national questions. Because what we’re fundamentally tinkering with here, with respect to bills like SB 205 in Colorado and, depending on who you ask, SB 53 in California and some of these more onerous pieces of state legislation, is changing the fundamental way that AI is going to develop. And that’s a national question in my opinion, because it’s somewhat akin to saying that one state has the authority to redesign the engine.

If we let California redesign the fundamental engine of all of our cars, for example, that’s gonna lead to nationwide chaos. If you want to instead change the speed limits in California or change the threshold before you can have a driver’s license in California, that’s fine. You’re not changing the underlying technology itself.

When states start to regulate how the technology itself is developed, then I think we see states interfering with what ultimately is a question that should be left for Congress.

Jai Ramaswamy (06:35)

Yeah, I think that’s a really good way of framing the question. I think the real issue is to what extent the activity that we’re seeing reflects a genuine concern about protecting the citizens and residents of a particular state versus something more, which is like, look, the federal government hasn’t acted, and so we are going to step into the shoes of the federal government to push them along, maybe be the first in the marketplace of ideas, whatever the motivation is. The former bucket seems to me to be exactly the type of thing that states should be doing. And the latter really starts getting into impermissible areas.

I guess, Kevin, the only thing that I would add to what you said, which I thought was right on point, is that, stepping back for a second, I always put on my historian’s hat. In a previous vocation, I was a historian of political thought, and now I do that as an avocation, not as a vocation. But I think people forget sometimes that the federal government is a government of enumerated powers. And what that means is that the powers that it can exercise are enumerated in the Constitution or are direct implications of what’s enumerated in the Constitution. Whereas states have historically been seen as exercising plenary police powers, meaning that they have a broader range of things that they can legislate and regulate in terms of human conduct within their borders. And I think this is where the debate gets super interesting, because when you map that historical division onto today, we should realize that it was put in place for a very specific reason, which is that in the early days of the Republic, when we had the Articles of Confederation, you had states that could have gone to war with each other because some were blockading other states, and there were actual commercial restrictions being put on each other that could have led to the dissolution of the union. And that’s what gave rise to the Constitution itself. So this is actually a foundational issue, I think, within the larger structure of our government. And I think that the real question is to what extent the issues that the states are concerned about are really about their own citizens’ concerns and to what extent they’re impinging on national questions. Do you have a sense?
I guess it’s kind of an unfair question, but if you had to game it out: of the 1,100 bills that we see, how many of them are really motivated by genuine concerns of exercising plenary state power under the 10th Amendment versus really enumerated powers under the… Constitution? What would you say? Probably an unfair question for you, but...

Kevin Frazier (09:23)

Oof. You know, yeah,

I don’t know if I’ll be able to be as precise. I tend to avoid the P-Doom-like just guesstimates of where my thoughts land on certain things. I do think that the vast majority of those bills are narrow, are directed towards police power type matters, are trying to be responsive to what are regarded as local concerns. And I think as you pointed out, Jai, that’s within the ambit of the state to make sure that they are responsive to truly local concerns.

But Jai, as you teed up, not enough people are paying attention to the fact that some of the sponsors of these bills are explicitly stating, we think we need to act on behalf of the American people and pass this legislation to protect them from X, Y, or Z. And that is fundamentally not the authority of the states, right? As soon as you are admitting to trying to take the role of Congress, whether or not you think Congress should or should not be doing something, you are exceeding the authority of the state. And to admit so blatantly that you think your state should be the one that acts as this sort of national protector is just wild, because I’ve lived in at least seven states. I actually lose count of how many states I’ve lived in. And I can tell you that each one of those states would hate to adhere to the laws of another state. Montanans don’t like Californians, Californians don’t like Floridians, Texans don’t like anyone. I’ve lived in all of these places and no one wants to be under the thumb of any other state. And yet we’re seeing these apparently wise and beneficent state legislators saying, don’t worry, rest of the country, we’ll do it on your behalf.

And Jai, I appreciate you bringing up the Articles of Confederation because I’m explicitly banned from being the first to bring it up. My wife says I just have to stop talking about the Articles of Confederation. But you are so right that we have lost sight of what actually motivated the founders to move away from the Articles of Confederation. Now, I’m going to raise you one. You mentioned the Articles. I want to raise you the Annapolis Convention, right? This took place before the Constitutional Convention. We had just a handful of states get together, and these were the biggest states, mind you. Virginia was there, right? Pennsylvania was there, New York was there. And they realized, our economic state is deplorable. We are not going to survive as a nation if we persist under our current fragmented economic approach. And that’s what teed up the Constitutional Convention. And so for folks not to realize that when we’re talking about the Commerce Clause, as I know we’re going to dive into, they were very, very intentional that this needed to be a congressional power. This needed to be reserved to the national government because to have a fragmented market system was going to undermine the viability of the country itself. So having that context is important.

But I also just want to add one other thing, which is a quick quote from a North Carolina legislator, just to frame the popular understanding of how the founders regarded state governments themselves. This is from James Iredell of North Carolina, who said that the North Carolina state legislature of which he was a part passed, quote, the vilest collection of trash ever formed by a legislative body. Yeah, you think we got trash.

Jai Ramaswamy (13:00)

And we think it’s bad now, right? We think our politics are bad now.

Kevin Frazier (13:05)

They got bags full of trash, streets littered with it. They were not saying, you know, state governments are super on the up and up, let’s defer to their wisdom on national matters of economic concern. And I’m not saying that state legislators aren’t hardworking, that they’re not attentive to important considerations, but there’s a reason why the founders said we need to cabin off and clearly designate who’s responsible for what.

Jai Ramaswamy (13:31)

The only thing I would add there, and I think this is lost, is that it’s fair to say the founders believed that local and state governments were, in a sense, the place where the people would be represented the most. Their interests would be more closely represented, and the federal government would, in a sense, have a harder time being close to the will and wishes of the people. But there is a place where the federal government steps in, and this is key, because we’re going to be talking about the global implications of this in a minute.

Back then, the issue was if we started to separate on commercial terms, it may very well be that some of the smaller states would gravitate to Great Britain and the sort of commercial empire that Britain had built and destroy the promise of a unified national state. And so the reason for the federal government having this kind of lock on interstate commerce was as much an international issue as it was a domestic issue. It was that to present ourselves as a united set of colonies, and now independent states, in a single national entity, the national government had to be able to set policies that would, in a sense, drive international commerce. That was sort of the key. But Matt, I think it’s fair to say that that’s kind of the concern here as well, right? I mean, that we’re worried about global implications of falling behind in the race for AI.

Matt Perault (15:00)

Yeah, I think that seems exactly right. I mean, I’m trying, and it’s not a role I take on easily, to put on my state lawmaker hat. And it seems like there are two senses of duty that a state lawmaker might have when they’re thinking about AI policy. The first, which I really understand, is: I was elected to office. I had a policy agenda or set of values that I wanted to project once I got into office. And I want to legislate, and then I want to see those laws enforced. And I think when we saw the moratorium debate over the summer, and we supported the moratorium, I also understood state lawmakers hearing the term moratorium and feeling like it undermines that duty that they have, that, like, we were elected and I want to do something good on the policy side. The second duty I think state lawmakers are feeling in AI now is one that Kevin alluded to, which is: Congress hasn’t acted, and so therefore we need to act in place of Congress. And I think that’s what we’re talking about now, the sort of muddled terrain between what is the role of a state lawmaker and what really should be the domain of Congress. And as we’ll discuss, the fact that Congress has or hasn’t acted doesn’t really change the remit of a state.

Jai Ramaswamy (16:09)

Yeah, I think that’s right. And this gets us to the question of who regulates AI, which becomes as important as how we regulate AI, because in some senses, the who determines the appropriate how. I think it’s fair to say that states in this realm, in the realm of technology, in the realm of commerce, have historically regulated misuses of various technologies, of various instrumentalities. Whereas the federal government really has a bigger role to play when we’re talking about setting national standards. To your point, Kevin, about cars: regulating an underlying technology would be very difficult for 50 states to do consistently while creating a national market, as opposed to the federal government doing it. So I think that is part of it. And this may be a good way to segue into all the nerdy terms, you know, the preemption debates and the dormant Commerce Clause. Because I think, as Matt pointed out, the issue that raised a lot of hackles with respect to the moratorium was that it was called a moratorium, which seemed new and novel and something that we don’t do in the United States. We have preemption, but, you know, what’s a moratorium?

On the other hand, I think the other thing was a bit of, I don’t want to call it misinformation, because I don’t know if it was intentional, but a misunderstanding of even that legislation. Because I think there was, even under that legislation, a role for states, and it didn’t purport to put a moratorium on all forms of regulation, only on certain forms of state regulation.

So Kevin, it would be great if you could give the audience maybe a, I don’t know, 101 on preemption and the dormant Commerce Clause. That’s what we’re talking about here, for the legal nerds. But it would be great for you to do that.

Kevin Frazier (18:13)

Yeah, I’d happily do that. And I welcome you all to fill in the blanks as well, because it is a really complex topic. And part of what adds to the complexity is that I think of our interpretation of the Commerce Clause like the worst game of telephone that’s ever been played. One justice whispered to the next, like, this is how you interpret it. And then, you know, the next justice and so on and so on. And so our Commerce Clause jurisprudence is so muddied and so murky that folks just aren’t sure how to actually interpret it. Under the Commerce Clause, Congress has the ability to regulate interstate commerce. And that has been interpreted myriad ways since the founding. We’ve seen, for example, some formalistic tests saying, okay, what qualifies as commerce? And there are whole debates about how to define commerce and what actually qualifies as a commercial activity.

We’ve also had debates about what it means to regulate commerce among the several states. You can find whole law review articles, and I wish I was kidding, just analyzing what the word among means and why among is different from between. And you can lose your mind in that debate. Then there’s the really tricky issue of, if we afford Congress the ability to regulate interstate commerce, what does that leave for states with respect to interstate commerce?

And there are two main ways of thinking about this. One way of interpreting the Commerce Clause is to say that Congress alone has the authority to regulate interstate commerce. So if something qualifies as interstate commerce, that is exclusively the domain of Congress. Another interpretation would say that so long as Congress has not affirmatively acted to regulate in some way, states may regulate interstate commerce as long as they are not violating any other constitutional principle. This has also been reinforced over time in various Supreme Court opinions that have allowed Congress to basically bless state intervention into interstate commerce. So basically saying, look, we may not want to affirmatively act on a nationwide scale on some matter of interstate commerce, but we’re going to grant, essentially extend, our authority to a state to do so. And so over time, having this blend of, is the Commerce Clause power exclusive to Congress or is it concurrent, is a very tricky question. Now, the dormant Commerce Clause is the understanding that even if Congress has not acted, there may be judicial authority to strike down state laws that would interfere with that realm of what we think should be left to Congress alone. So for example, early on after the founding we saw laws passed by state legislatures that courts regarded as regulating interstate commerce, and courts struck those laws down because they interfered with a domain that they interpreted as being left to Congress itself.

Now, traditionally, we think of a couple key categories of what sort of laws may violate the dormant Commerce Clause. The first is all about protectionism, right? We very much don’t want to see states exclusively favor in-state interests over out-of-state interests. If we see that sort of protectionist legislation, the Supreme Court has been very clear of saying that’s going to interfere with the sort of national market we were talking about earlier.

Jai Ramaswamy (21:57)

And that would be sort of a clear case, I assume, of, like, I don’t know, Kentucky saying, we’re just not gonna allow out-of-state pork in Kentucky. Like, that’s just not gonna be allowed.

Kevin Frazier (22:11)

So it depends, right? So if we see that in-state and out-of-state producers are placed on uneven ground, right, or are treated differently, that’s really where I’d say we start to see the difference that raises constitutional flags under the dormant Commerce Clause.

Jai Ramaswamy (22:27)

In other words, it doesn’t have to be explicit discrimination against out-of-state producers. It can also be discrimination in effect.

Kevin Frazier (22:33)

So we can have, if it is facial discrimination, that’s almost certainly going to be struck down under the dormant Commerce Clause. For those facially neutral laws that just tend to have the effect of favoring in-state interests over out-of-state interests, then we run into a very tricky question of whether that burden imposed on interstate commerce is tolerable with respect to the local gains we’re seeing as a result of that burden.

And we can dive more into that inquiry in a second. So just to pause for a second, we’ve got our Commerce Clause, we’ve got our theories of, okay, is this exclusive or is this concurrent? We’ve got our dormant Commerce Clause where we’re concerned about favoring in-state interests over out-of-state interests and therefore disrupting interstate commerce and the national economy.

And then we have this murky kind of third category that’s hanging out there, which is extraterritoriality. And this is where we see one state explicitly try to regulate commerce that occurs entirely in another jurisdiction. And this has also been declared unconstitutional. Some people would cabin that under the dormant commerce clause theory.

Other folks would say it exists both under the dormant Commerce Clause and as an amalgamation of the Full Faith and Credit Clause and the Due Process Clause. You can even say the Guarantee Clause, right? It’s just protected by a whole hodgepodge of things. So across all of those domains, we are left with a very muddied picture that, unfortunately, the court has, if anything, made even less clear over time. And we are in a period of significant debate about when and how states can regulate with respect to interstate commerce.

Jai Ramaswamy (24:26)

That’s interesting. Matt, I know you and I have talked a lot about these different prongs. Where do you think, when we see AI legislation, where do you think it falls in this hodgepodge of different theories?

Matt Perault (24:40)

So in our piece, we focused on Pike balancing, which is the excessive burden prong. That’s the idea that if the out-of-state costs substantially outweigh the local, in-state benefits, then a law is unconstitutional. I think it’s interesting that Kevin, who’s written extensively about this issue, has focused on the extraterritoriality prong. As you said, that’s either part of the dormant Commerce Clause analysis or separate. So I’m curious, Kevin, how do you think about that working in practice? Because, like, economies are connected now, so it’s likely that anything a state does is going to affect out-of-state commerce in some way. How do you think about the extraterritoriality prong being actionable in practice?

Kevin Frazier (25:16)

Yeah. So for me, I base the extraterritoriality analysis less on the economic questions of being able to parse out exactly when a state is wholly regulating commerce in another jurisdiction. Because as you pointed out, Matt, if we just look at how data is stored and transferred and used across basically every state today, the argument could be made that any law that deals with data is extraterritorial, because at some point that data is going to live in a server in another jurisdiction, and state A is regulating how that data is processed or stored or managed in state B. Now we can see that across a lot of questions, whether it’s pollution, for example, and some of the environmental laws, or workplace specifications that are often really hard to cabin to one jurisdiction.

Extraterritoriality can get really murky if you focus mainly on the economic question of trying to segment finely: is this in one jurisdiction, or is this in another? For me, extraterritoriality is most strongly based in the Guarantee Clause, the Due Process Clause, and the Full Faith and Credit Clause. But I want to focus mainly on the Guarantee Clause. Now, for folks who are not steeped in constitutional history and as nerdy as the three of us are, the Guarantee Clause basically says that the U.S. government, and it's really important to specify this, the government, not the judiciary, not Congress, not the executive, which is very bizarre, right? Anyone who's a constitutional scholar, or anyone who's just answered a multiple-choice question on the U.S. Constitution, generally thinks of us cabining each power to a specific branch.

But the Guarantee Clause says the government will ensure to every resident a Republican form of government. Now again, to go back to how the founders thought about government and about the proper relationship between an individual and their government: they weren't big on the kind of virtual representation of, don't worry, colonists, we'll represent your interests here in the UK. Trust us. Taxes? I know the Stamp Act sounds really bad, but it's in your best interest. We'll represent you over here. Just trust us. We fundamentally broke up with that concept. And if you do not appreciate the transition from our status as a colony, to the Articles of Confederation, to the Constitution as being fundamentally grounded in Republican governance, then I think you may have missed the first class of AP U.S. History.

And maybe you need to go back and take that one, because the core of our document is to say that we, the people, can hold the people who have power over us accountable. And yet, Jai, you alone among us three get to vote on what California is doing. Meanwhile, Matt and I are just like, all right, if Gavin Newsom wakes up one day and wants to be really aggressive on AI policy, great, we're going to have to live with those consequences. And if he happens to veto a couple of bills, great, we didn't have any say on that either. And now we're seeing something really kind of radical, and we may get there or we may not, but I would encourage everyone to check it out: the California AI ballot initiative, where Californians themselves may soon opt to pass pretty expansive and onerous AI regulations that will again impact Matt and me. And we have no authority, no ability to go into that state and try to be a part of that process. So that, to me, is the greatest focus for extraterritoriality and the strongest argument. It becomes more concerning the greater the ramifications for the technology itself. And so that's where I think state laws that impact the fundamental technology itself are depriving people around the country of a voice in the direction of what I think is going to be an incredibly transformative technology.

Matt Perault (29:44)

I do think it's really interesting how the text of the legislation interplays with the technology. Several of the California bills haven't had any jurisdictional limitation. So if you ask, how do we know that California lawmakers are aiming for a national standard? There's nothing in the text that says it's limited to development in California, or residents of California, or companies headquartered in California. Nothing related to deployment in California or specific effects on California users. The text is just wide open and, at least on its terms, is not limited to anything related to California. That's on the text-of-the-legislation side. And then the nature of the technology, and this has been something I've been learning from our technical teams, is moving in the direction of more and more remixed models. A developer is building off of open source software that might be developed in another state. They're taking little bits of that. They're combining it with other models that might be developed in one place and deployed in another. A deployer might pick up a developer's model built elsewhere. So there are all these cross-border dynamics around AI construction, the development process itself, and then the deployment process as well. And those two things, the legislative text and the nature of how the technology is developed, seem to combine in a way that almost, in bright lights, invites a discussion of this doctrine.

Kevin Frazier (31:01)

Well, and I have to reference one other point, because I'm sure there are some listeners saying right now: all right, they're fascinating, they're good looking, they're having a really interesting conversation, whatever, that's great. I still have no clarity around what extraterritoriality actually means, what the Commerce Clause means, what the dormant Commerce Clause means. What the heck? This is clearly not justiciable. There's no way a court can sit down and actually try to parse these out.

And I'm gonna push back on that by going again into full history mode. There's a forgotten thing that a lot of people don't pay attention to about James Madison: he proposed giving Congress an absolute veto over state laws. This was an actual proposal to say that any state law that gets passed has to then go through Congress, which would determine not whether it's constitutional, but just whether it aligns with the national interest. This was a real thing that James Madison, one of the founders we all talk about, actually proposed. The pushback, and this is one of the most hilarious things in the AI debate, came from Jefferson, who retorts: hey, bro, this seems a little extreme. States don't pass laws that implicate the national interest. This just doesn't happen. He guessed that only about one out of every 100 bills passed out of state legislatures would actually implicate the national interest. So they said, look, this isn't gonna be a huge issue; we're going to leave it to the courts to strike down those sorts of laws. And so when we hear justices say, this doesn't seem like the sort of issue the Supreme Court should answer, this seems like something that should be left to the political branches...

Go back and read your Madison, go back and read your Jefferson. The courts were explicitly designed to fill this function and can’t punt on these issues.

Jai Ramaswamy (33:04)

Yeah, that’s a super interesting observation. And I think it raises this question of how do you think of the appropriate scope of state and federal legislation? To your point, Kevin, we’ve now been talking in generalities. We have the history. I believe we’ve convinced all of our listeners about this structure and that it’s the right thing to do. But now the question becomes, you know, brass tacks.

It's great that you guys are talking about all this history, but at the end of the day, we've got a powerful technology. It's got uses, it's got misuses. It has great opportunities and great risks. And so it will be regulated in some way, shape, or form. So how do you think about the real distinction between what the federal government has exclusive authority and power over and what the state governments have purview over and can legitimately regulate?

Kevin Frazier (34:03)

…The distinction I'll add is a focus on alternative mechanisms; I'll get there in a second. First, I think all three of us agree that the distinguishing factor is whether you are governing the development of the technology or its use. Going back to our car analogy, are you trying to regulate the engine, or are you instead regulating use?

You know, how does a driver get their license? What speed can you go on certain roads? Even designing the roads is all left to the states. And so if states want to regulate that end-user deployment: how are you going to use AI, and in what context? What sort of transparency requirements do you want to provide to your customers? What sort of training do you want to provide to doctors, to educators, to lawyers? None of those things is a blatant and obvious attempt to regulate what is a national question, and we can very much see clear limitations on the likelihood of such a law exceeding the jurisdiction of that state. Now, I get that there are some use cases where it's harder to map on. As Matt was noting, there's mixing going on now, there are open source models being poured into other open source models, there's fine-tuning, there are all these things that blur that picture. Which is why I think it's also important that we start to hold legislators accountable for exploring alternative mechanisms that are not grounded in burdening interstate commerce.

So I just want to focus explicitly on the mental health issues. I've been outspoken about the fact that I experienced mental health issues as a child. I think that AI companions and AI therapists, if properly trained, have an incredible role to play in making sure that kids who otherwise would not receive care can get it. And that's important to point out: we have a therapist shortage. It's not as if, were we to wake up tomorrow and say we want all kids to have therapists, they could suddenly find their therapist. There's a huge shortage. So why aren't state legislators making it easier for more students, for more children, to access human therapists? That, to me, is an obvious less burdensome alternative to trying to tinker with technology that's still developing. And so that's the two-part framework I would encourage state legislators to think through. First, use versus development, or deployment versus development. And second, what is the actual problem we're trying to solve? Because I don't think enough state legislators are honest in saying: okay, if the problem is children experiencing mental health issues, then I could list out for you 50 other things you could do besides banning AI companions that would be more efficacious for actually helping children with those mental health issues.

Matt Perault (37:08)

That delineation between use and development is something that we've long focused on, for a variety of reasons. One is the benefit, as you're describing, of regulating harmful use: if you really are concerned about harmful use, you should target it directly. And second is the implications of regulating development, not just for innovation broadly, but for the part of the innovation pipeline that we're most focused on, which is startups, Little Tech companies. If you're regulating development, those costs tend to be disproportionately borne by startups. They have smaller legal teams or nonexistent legal teams, smaller policy teams or nonexistent policy teams. And so the administrative and regulatory burdens tend to end up making it harder for them to compete with larger platforms. I think the funny thing about that policy framework, and what we're focused on today, is that the state patchwork is also really hard for Little Tech, right? So when we're talking about state versus federal rules, we're not just talking about an abstract constitutional principle from our standpoint. The reason this is something we care so much about is that a startup without a legal team, trying to figure out how to offer one tool in California and a different tool in New York and a different tool in Florida, is going to be really burdened, and it's going to be harder for them to compete with large tech companies.

Jai Ramaswamy (38:18)

Yeah. And I think the other thing we're sort of mentioning, but that's worth being explicit about, is that the thing we're most worried about at the state level is simply model-level regulation. And the reason for that is pretty simple: at their core, these models (and when I say these models, I'm talking about the newest version of generative AI, the LLMs that everybody's focused on now, but other models as well) are really just math and statistics at the end of the day. It's taking a bunch of data and using statistical methods, using vector math, to slice and dice the data into different categories and then make sense of that data. That's really what these AI models are doing at their core. And the process of doing that is what we call training: the models are trained on a bunch of data, and then more data passes through them that continually makes those models better. And I think our big concern is this. Maybe there's a world in which even a big company could train to 50 different standards, right? I train my model in Kentucky, and then I train it in California to their standards. But really, that isn't feasible. The training takes months and months. It takes, as we are seeing, enormous costs in terms of computing and in terms of money. And so it's not feasible to have models trained to different standards in different states. And so, de facto, it ends up being a national standard when a state declares that it's going to regulate at the model level and effectively regulate how these models are trained. And that's our biggest concern: there isn't a good way to do state-level model regulation.

That feels like the kind of thing that is so inherently federal in nature. And yes, it will definitely harm startups, because even if somebody with adequate resources could do this, which I'm actually skeptical of, a startup certainly can't. And so you're destroying any hope of competition in this realm, of creating a national market where small and big players can compete on a level playing field.

Kevin Frazier (40:42)

And I'll note that Representative Lieu, a Democrat from California, made that exact point; everyone can go find it on C-SPAN. He directly asked a witness: hey, tell me, do you think a lab can comply with even two different state training requirements? And the witness kind of shrugged and looked the other way. But clearly, if Representative Lieu, a very AI-savvy person, is aware of this, then this is not necessarily a matter of politics, or it shouldn't be, to your point, Jai. This is a technical question. And I'll raise also that, outside of the sheer economic questions, to me this is truly a matter of national consequence when we think about pushing the frontier of AI. This isn't just for fun, in terms of I can't wait to see if the US beats China on the next benchmark. These are questions of profound significance to voters right now. If you go and ask a voter, what is the most important thing you're looking for in 2026? It's affordability. And what is the thing that could drive down the cost of healthcare? AI. What is the thing that can drive down the cost of education? AI. What's the thing that can start to improve our transportation systems, our energy systems? I could keep going, but we only have so much time. AI really can be the biggest driver of human flourishing across so many domains. So for any state to interfere with our progression along that effort to lead on the AI frontier is fundamentally at odds with the sort of national unity the founders aspired to.

Matt Perault (42:19)

Kevin, you were in DC a couple of months ago testifying at a hearing called AI at a Crossroads: A Nationwide Strategy or Californication? I'm curious what you heard from lawmakers in DC about how they're thinking about this issue and how the state dynamics are affecting that thinking.

Kevin Frazier (42:35)

Yeah, so the committee was really engaged in a way that I think the panelists, myself included, didn't necessarily anticipate. Lawmakers really were hungry for strong, reliable information on this very question of what is the proper role of the states, what's the proper role of the federal government, and how do we map that onto the technology itself. One of the biggest takeaways for me was that so much of the AI policy discourse right now is grounded in vibes: what people heard at some point, at some time, from some person, and they've anchored on that position. So for example, the ranking member of that committee was very worried that the moratorium, and anyone loosely affiliated with being supportive of the moratorium, wanted to eliminate the ability of states to continue to enforce the common law and to allow courts to adjudicate things like torts, negligence, just basic liability frameworks, which was never on the table. But to hear those sorts of talking points emerge months later during a committee meeting on AI was indicative of the fact that a lot of this discourse is just being swayed by whatever podcast you listen to, whoever was in your office last, and perhaps how you felt about the technology way back in 2023.

Now, I'll also say that I was impressed by the rigor of the questions committee members were asking. To me, it really showed they were hungry for information. And that suggests we need more even-keeled, thorough, objective analysis of AI and of the relevant laws so that lawmakers can make informed decisions. Because I think the failure to appreciate, for example, what it means to interfere with development, and the ramifications that can have on national interests, just hasn't been shared thoroughly enough with a lot of lawmakers who may otherwise be skeptical of reserving that power to the federal government. Now, one of the biggest takeaways from that conversation was that even among folks who may be somewhat willing to consider leaving this issue to Congress and the national government, there is a feeling that there has to be an affirmative policy response to deal with this AI question. So one of the biggest pushbacks on the moratorium was that it was banning state AI legislation entirely, which again, as we discussed, wasn't the case, and replacing it with nothing.

And there is a real sense, ask any behavioral economist, that we all suffer from action bias. We just want to see something get done. Something happening, even if it's bad, is often perceived as better than doing nothing, just because we feel like we have to respond in some fashion. And lawmakers are hungry for what that should be. I would love to talk with you all about that ad nauseam on a future podcast down the road, but minimally, I think there's so much we can be doing to increase AI adoption. I would love to see a 21st-century version of the Rural Electrification Administration, which went county by county teaching people how to use electricity. Why don't we have an office of AI adoption? Right? Let's send our startups, our researchers into communities and help people learn how to use AI.

We also need to be attentive to the very real concerns about economic displacement. If people feel like they're going to live in a future in which they don't have a job, in which they don't know how they're going to provide for their family, it's hard to say, yeah, I'm really pro-AI, I can't wait to lose my job. No one has that button; that bumper sticker will not be seen on any car. So the federal government has a chance to rethink: how can we make benefits, for example, more flexible? How can we help people transition to new opportunities and start those new AI companies? Engine.org is doing fantastic work in this space, recommending, for example, how we can redefine the definition of an investor to make sure that more people can access startup funds, and things like that. It was a great experience, and I would of course welcome the call again, although I think after the fact they realized they invited the wrong Kevin Frazier. If you Google me, you'll find the more popular Kevin Frazier, from Entertainment Tonight. He would have been a way better witness, way more exciting.

Matt Perault (47:17)

Kevin, you mentioned ramifications that are on the minds of lawmakers in DC. Jai, that’s something that we’ve talked about a lot at our firm, the ramifications, not just in terms of California versus Florida or New York versus Texas, but ramifications beyond the borders of the United States. So what is important about getting this issue right for what happens outside the United States?

Jai Ramaswamy (47:38)

Yeah, I'll turn to that in a second, but something Kevin said really resonated with me, and then I'll go to the question you posed, Matt. I don't know if rural electrification is the right policy analogue, but the notion that this is really game-changing for any society is, I think, underappreciated. The way I think about it is: if you think about what AI really is, it's commodified intelligence. It's commodified and commoditized inference. And if you look around the world at the lack of opportunity people have, the issue is not that there's too much inference and intelligence; it's the lack of access to the tools that allow you to develop that intelligence. So I don't see a world in which commoditized inference and intelligence is a bad thing. I struggle to understand what that world would be. It's not that it won't be misused, et cetera, but at core, I think it's really about getting people to understand that this is what it is: a tool for them. It's Steve Jobs' bicycle for the mind, on steroids. That's what this is, and that's really the potential that's here. And that morphs into the broader conversation, Matt, that you were asking about, which is this.

Technology really is, and I think this is the way we talk to lawmakers about it, a shift. It's more than an industry, it's more than a new technology; it's a change in the way that the control layer of computing operates. It operates through natural language, so we're talking to computers now rather than coding them. There is coding as well, obviously, but with the rise of LLMs, it's very clear that the way most people will now interact with computers is by using their own native language. And the way that computers respond to us will not be deterministic and robotic, where you put in A and you always get out B. Like human beings, their responses are contextual. They depend on the way the questions are asked, and putting in input A doesn't always get you output B; it gets you output B, C, or D. So on many levels, you're talking about a new control layer of computing that makes us interact with computers in a profoundly different way than we've been used to. The implications of this are huge, because these systems do have values of openness or closedness, for lack of a better word, built into them, in the same way that the internet could have developed very differently if it were a closed system rather than an open one, as many in other countries advocated for and continue to advocate. And I think that's the geopolitical thing that the US and other countries will struggle with: the countries that have companies that produce the AI that others adopt, and that perform that control function of computing, will have a massive, outsized benefit relative to others. It's why China is investing a ton of money and effort in computing, in software, and also in hardware, to try to be that control layer. So I don't think we should diminish at all what the stakes are here, which is: what type of computing do we have in the future?
And a computing platform based on technology that's informed by the values of an authoritarian government is going to be very, very different from one informed by, in a sense, the software of open societies. And I say societies, because it's the US, but also others within that framework that are developing these technologies. And I think we have to be prepared to have the conversation that, at this point, one of the most popular models being developed on is Qwen, the Alibaba model.

You know, we're in a neck-and-neck race, whether we like it or not, between technology being produced by companies in the US and its allies and technology coming out of China, and the implications from a geopolitical point of view are huge. In the same way the internet impacted cultural exchange, soft power as well as hard power, that same dynamic is going to play out on steroids here. And I think that's the underlying thing, Matt, that you were asking about, that is really at the forefront of everybody's minds in DC as they think about this. And the way this debate plays into that is that it's going to be very hard for the US to compete if the companies founded here have to comply with 50 state laws for model development. It's just hard to imagine that that ecosystem is going to produce something that can compete with the likes of the massive efforts going on in China. So it has to be a national effort. There isn't really a choice, I think, once you bring geopolitics into it.

Matt Perault (52:48)

Jai, thanks for bringing it full circle. Kevin, Jai, thanks. This was fun.

Kevin Frazier (52:53)

Thanks for having us.

Jai Ramaswamy (52:55)

Thanks, Kevin. It’s been a pleasure.


This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at a16z.com/disclosures. You’re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately.
