One of the core pillars of our roadmap for federal AI legislation makes clear AI should not excuse wrongdoing. When people or companies use AI to break the law, existing criminal, civil rights, consumer protection, and antitrust frameworks should still apply. Enforcement agencies should have the resources they need to enforce the law. If existing bodies of law fall short in accounting for certain AI use cases, any new laws should be evidence-based, clearly defining marginal risks and the optimal approach to target harms directly.
In this conversation, we go deeper on what that principle means in practice with Martin Casado, general partner at a16z, where he leads the firm’s infrastructure practice and invests in advanced AI systems and foundational compute. Martin has lived through multiple platform shifts: as a researcher, he worked on large-scale simulations for the Department of Defense and later with the intelligence community on networking and cybersecurity; he was a pioneer of software-defined networking at Stanford; and he was the cofounder and CTO of Nicira, which was acquired by VMware. That history gives him a rare perspective on how breakthrough technologies are governed as they develop and scale.
Martin joins Jai Ramaswamy and Matt Perault to discuss how decades of technology policy can inform how we address harmful uses of AI, how to define marginal risk in AI, the importance of open source for long-term competitiveness, and more.
Topics covered:
01:55: A brief history of recent debates about how to regulate AI
12:30: Regulating use vs. development: lessons from software and cybersecurity
15:47: An open question in AI policy today: defining marginal risk
18:33: Why social media is often the wrong analogy for AI regulation
20:50: Enforcement tools available for holding bad actors to account
24:11: Balancing many trade-offs in tech policy
27:33: The role open source models play in soft power, the future of AI, and global competitiveness
38:06: Implications of regulatory uncertainty
41:32: Lawmakers want to act; what can they do now to enact effective policy?
This transcript has been edited lightly for readability.
Martin Casado (00:00)
Open source is always a critical part of the innovation ecosystem because while it’s not the number one business driver like the proprietary models are, it’s what’s used by hobbyists, it’s what’s used by academics, it’s what’s used by startups. And that tends to be the future. And so this uncertainty in the regulatory environment is keeping U.S. companies from releasing open source models that are strong. And as a result, the next generation, the hobbyists and the academics are using Chinese models. And I think that’s actually a very dangerous situation for the United States to be in.
Jai Ramaswamy (00:29)
To claim that at the outset of the internet, you could have foreseen how social media would develop, be used and misused is kind of a fairy tale. Like that couldn’t have happened back then. It can only happen once the risks emerge and are known. And then you can figure out what the bad things are that you want to regulate.
Martin Casado (00:47)
If we focus on development and we don’t focus on use, you end up introducing tremendous loopholes because it requires you to describe the system that’s being developed. And right now there actually is no single definition for AI. And every one we’ve used now looks totally silly because it’s evolving so quickly. So if the lawmakers actually want to have effective policy, the only area that you can actually specify is the use of these things.
Matt Perault (01:16)
This is a fun conversation for me because I get to ask Martin and Jai some questions about how you guys were thinking about AI policy before I joined the firm.
So a couple of years ago, the scene was really different than it is today. Sam Altman’s testifying in Congress, Brad Smith at Microsoft is talking about things like licensing regimes for AI, an international regulatory agency that would regulate AI just like international nuclear regulatory agencies do. Jai, can you just start by telling us a little bit about how the firm reacted to that? Like, how did we put that in context in terms of what AI policy might look like and what we were concerned about?
Jai Ramaswamy (01:55)
I think that for us, the big eye-opener was the Biden executive order that came out at the tail end of the Biden administration. And that order did two things that I think seemed very different to us than what had come before in the regulation of software, of computing.
The first thing, it made a nod in the direction of wanting to regulate fundamental math and computing power through restrictions on the types of models that would use certain amounts of computing power, right? FLOP thresholds, I think they came to be called.
And the second one was, for the first time, a questioning of the value of open source software, or as they called it, models with open weights. And the reason that was such a shock to, I think, many people who had been involved in earlier debates around the regulation of the internet, regulation of software, regulation of encryption — you know, Martin is amongst them, but I think Marc as well.
And what I think we saw was a skepticism, or at least a perceived skepticism, toward the way that we had regulated software before, which was really to focus on regulating use cases as opposed to regulating underlying software development. In the case of AI, that’s regulating model-layer development. And that’s one of the reasons we became so actively involved, because that distinction has served the country so well, and regulating uses has a long history in regulatory law. We have typically regulated behaviors, human behaviors, and bad behaviors typically, as opposed to regulating invention, creation, and the development of things. And I think to go down that path raises some real problems in terms of trying to develop new innovative industries without really solving the fundamental problem of bad actors using technologies to do bad things, which is, I think, the thing that people worry about.
And the way that I would analogize this is, if you think about one of the biggest risks that we face in computing today, it’s malware, it’s cyber criminals, it’s bad actors. And think about the way that we address that. Today, the creation of malware itself isn’t in fact a crime. What’s a crime is the transmission of software to compromise other computers. And the reason is very simple: a lot of the techniques, a lot of the coding that goes into creating malware can be used for pen testing, right? It hardens our systems. It can be used for other things. And so it’s very hard to distinguish at the programming layer, at the model layer, good uses from bad uses. But we can attack bad uses themselves. And that’s what the Computer Fraud and Abuse Act does. And it’s what we’ve historically done in this space. And it’s served us well. There are still bad actors that get through, but we’ve been able to address those concerns by focusing on bad activity.
Martin will get into this more because he’s our guru on most of these things. But it’s even harder here, where my understanding is that the coding involved is actually relatively simple. Relatively — not for me, but for folks like Martin. The math here is relatively simple. Again, not for a lawyer, but for people who are in math. It’s vectors and linear algebra. And so to try to regulate that in the United States is kind of a fool’s errand, because it will be developed elsewhere, to the detriment of the ability of our industry to innovate. And it won’t fundamentally alter the ability of bad people to use these things. And so I think this focus is a really, really important one. And we shouldn’t lose sight of the fact that regulating models is not a particularly effective way of achieving what we want to achieve. And at the same time, it really hampers innovation. And so you want to do something that’s effective, that achieves the ends you want to achieve. And I think that’s where we should be heading. But I’ll let Martin speak a little bit more to why that’s the case with this particular technology, because I think there are some really interesting things that people would love to learn.
Matt Perault (07:12)
Martin, I mean, this is a way to approach regulation that’s not just a little appealing to policymakers, it’s really appealing to policymakers. Like, the laws that we are weighing in on often regulate development. They very rarely regulate use. And so if I can try to channel my inner policymaker and give voice to what I think they have in mind, I think the idea is: we regulate cars at the state and federal level. We regulate the airline industry at the federal level. You understand software and software regulation really deeply. Why is that model of regulation not the right one when it comes to AI?
Martin Casado (07:51)
So I think that there are two considerations I want to put out there, given the last couple of years. The first one is, when we got involved in policy efforts, the conversation was so lopsided. So this isn’t directly answering your question. I’ll get to your question in a second. But it’s really important to point out that software regulation discourse goes back to the beginning of computers. We’ve dealt with some really heavy stuff, like can you use compute to make nuclear weapons?
What is the implication of something like the internet, right? We’ve dealt with some really heavy stuff, and we’ve got a lot of policy as a result of that, and a lot of doctrines and principles around that. And the weird thing about the conversation, at least in the last two years, is that normally you’ve got this kind of robust discourse with many voices represented. For example, academia has historically been pro-innovation and pro-research. Venture capital has been pro-innovation and supporting Little Tech. And then of course you have policymakers, you have big companies. They have this very robust conversation. And then as a result of that, you kind of figure out what makes sense for everybody. And the strange thing about this conversation is, when we entered it, one side just was not represented at all. Like, academia was basically silent.
VCs were doing this very strange thing where they were kind of anti-innovation, which I’ve actually never seen before. And so just to begin with, I want us all to realize that where we are right now has not been the result of a robust conversation. It’s been very one-sided. We can talk about why that was. And so a lot of what we’re going to say here is just: we need all the voices in the room. We need to get back to equilibrium. This is how we’ve done everything in the past, right?
So I’m gonna talk directly to your question. I just wanna make sure, like, you know, everybody understands a lot of the reason we got involved is to be like, hey, wait, maybe academia should be part of this conversation. Maybe Little Tech should be part of this conversation. VCs have historically been pro-innovation; maybe more of us should go ahead and talk.
So the second consideration, getting directly to your question, is: how have we handled these conversations in the past?
Well, first, the underlying doctrine has been: we’ve been regulating software for decades, and we have these platform shifts, and to build effective policy, we have to understand the marginal differences. That’s been the way that we’ve done it in the past. I’ll give you an example. When the internet came up, we actually had a number of big, huge shifts that happened. We had attacks that we’d never seen before on some of the underlying critical infrastructure. We even had, at a nation-state level, this notion of asymmetry, which means that the more you rely on it, the more vulnerable you are, right? And so we had this entire discourse. The discourse was not, stop working on the internet. It was, let’s understand the marginal risk. And then based on that marginal risk, let’s go ahead and come up with policy. And the reason you want to do it that way is because you trust the policy work that you’ve done to date. You trust that it still applies. You trust that these are still computer systems.
And if you don’t understand the marginal risk, you actually can’t come up with effective policy. You’re like, it may not work. It may be for the wrong things. It may be counterproductive.
And so there’s been this very strange, almost doctrine shift in AI, where you go to experts like Dawn Song at Berkeley and you’re like, what is the marginal risk? Like, what has changed here? And Dawn Song would say, we don’t know, that’s a research problem. Well, if you don’t know that and you do regulation, how do you know it’s even catching the bad stuff when you haven’t defined it? Or how do you know it’s not enabling even worse stuff? And certainly regulation could potentially put a chill on innovation. So you don’t even understand where that trade-off goes. And so, you know, I would say we have a long-standing approach to dealing with software regulation. Historically, we’ve used that. And then as new things happen, we build policy on that.
What you’re asking is a little bit different, because I would say actually we do regulate compute a lot. You put a computer in an airplane, clearly there are some regulations. So you do add regulations depending on the deployment environment. But when it comes to net new regulations dealing with very specific risks, you have to understand the risks first. And this was part of the conversation that was missing.
Matt Perault (12:31)
I’m curious how we draw the line, and this is something, Martin, that you’re sort of getting at here: how we think about whether certain things fall on the use side of the spectrum or on the development side of the spectrum. So, what about developers that offer applications? OpenAI builds a model and then has ChatGPT. Can you walk through how you think about “regulate use, don’t regulate development” when it’s mapped onto companies that do both?
Martin Casado (13:05)
Yeah, we’ve got to be a little bit clear in this entire conversation: there’s development, there’s use, and then there’s risks, right? So clearly, if someone came up with a methodology that was damaging, and you could show that it was damaging, and you could say that the actual methodology was part of the development…you’d say, don’t do that.
Matt Perault (13:36)
Yeah, FraudGPT.
Martin Casado (13:39)
Even that would be use. Let’s imagine that you could develop something that you know you couldn’t contain. Let’s just say that in this case, you could actually show that there is a risk that doesn’t exist with today’s computer systems. We have not shown that at all, not at all. But if you did that, then you may have the conversation of saying, hey, listen, this is an entirely new thing.
But today, these are just computer systems, and we have a very robust set of regulations on top of those, right? So until that happens, there’s no point in regulating the development. Now, on the other hand, to your point, we apply these to all sorts of stuff, and people use computers for all sorts of stuff. And if those uses happen to be illegal, then it’s very important that they obey the laws for those.
Jai Ramaswamy (14:35)
Yeah. Martin, is what you’re getting at that, and we should just cut to the chase here, colloquially things have been sort of divided into the doomers, who think that we’re on the cusp of artificial general intelligence that’s going to create such a radically new type of computing persona that it raises existential risks. And then there are the people who are engineering these systems, many of whom are saying that what we have today not only doesn’t approach AGI, but isn’t actually going to get to AGI. And there seems to be a divide in the conversation amongst people who believe those two things, or at least profess to believe those two things. And what I hear you saying is that if we were creating, and we knew we were creating, Skynet, that would be one set of conversations. But where the engineering is today, actually the emerging consensus is that that’s not what we’re creating. There are enormous engineering challenges to get there. The current versions of what we have aren’t going to get there. I think Yann LeCun mentioned that recently.
Martin Casado (15:47)
So let’s tie this entire conversation together. Maybe this is the synopsis. We can all agree you should not use computers to do illegal things. So it’s totally sensible to regulate the use of computers.
That said, the building of systems, maybe you do want to regulate. Maybe you do. But in order to know even what to regulate, you would have to understand the marginal differences. Otherwise, you couldn’t even come up with a sensible policy, right? Like, what would you even describe? And that is still an open research question, right? So, this kind of summarizes the last two points that we’ve been making. It kind of unifies these two things.
And so what have we done historically as an industry? As an industry, what we do is we develop, we study the marginal risk. We’ve got a whole deep discipline in cybersecurity. If we come up with marginal risks that are different, then we implement policies around those. But right now, we don’t have that.
In fact, SB 1047 in California, which we got very involved in, was trying to regulate large models, and the conclusion of that was to create an independent body. And actually Dawn Song does such a great job, because she’s someone you’d even put in the maybe “Doomer-Curious” faction. So the Doomer-Curious Dawn Song, a professor at Berkeley and longtime security researcher, says: we need evidence-based policies. Otherwise we don’t know what to regulate. The question of marginal risk is still an open research question. Again, this is a world expert, so let’s focus on the marginal risk. And that’s really where we are, I think, in the consensus discourse amongst many of the experts. But that’s not what percolates up to the policy level.
Matt Perault (17:54)
I really like your line about trusting the policy process to date, because I actually think that’s where a lot of the disagreement is: people who have comfort that existing law will at least provide some ways to address risk and marginal risk, and then people who don’t. And then also, I think there’s some part of this community who have said, we tried the wait-and-see, let’s-look-at-the-research approach with social media and that didn’t work. And so I’m curious how you think about both of those components.
What gives you confidence about the policy process to date? And Jai, I’d love your thoughts on that as well. And then also, when people say, let’s not repeat the mistakes of social media, what are your thoughts on that?
Martin Casado (18:33)
I just want to make sure that we pose the question correctly because this one is going to be very easy to conflate, which is, there was no innovation for social media.
I mean, this is literally the internet, and then there was a use, and that use was social media. So in the context of this question: would you give up the entire internet rather than regulate social media, if it turns out social media is bad? I think the broad consensus is social media may be bad for minors. Maybe we need protections there. Social media may be bad for different political systems. Maybe we need protections there. But that to me is very much a use of the internet.
I don’t know, at least I don’t know anybody sensible who says we should never have created the internet because of social media. I think that would be very much a minority opinion, so I just wanna make sure that we’re not answering the wrong question here.
Jai Ramaswamy (19:25)
And Martin, sort of riffing off what you said before, I think that’s a really good example, Matt, where you bring yourself back to 1998 or even 2000, or even to when Facebook, the college version, launched. What would you regulate? Because everything that’s happened since then was just a glint in the eye of all of these people. Nobody had any conception of what social media was going to become.
I think to Martin’s point, one, to claim that at the outset of the internet, you could have foreseen how social media would develop, be used and misused is kind of a fairy tale. Like that couldn’t have happened back then. It can only happen once the risks emerge and are known, and then you can figure out what the bad things are that you wanna regulate. And then you can have a robust conversation amongst different stakeholders.
Martin Casado (20:23)
And you could even say, like, listen, we were not aggressive enough. But again, that is about the use, right? Nobody is being like, you should have not done internet research. Nobody’s saying that. But that’s what we’re saying now. We’re like, you should not do the core research. They’re not talking about, like, the actual use and application of these things. And so I think anybody that thinks we got it wrong in social networking is drawing the wrong parallel.
Matt Perault (20:50)
So Jai, what about the enforcement tools side? You amongst the three of us are the only one who’s prosecuted cases, who’s put together cases, who’s looked to the legal arsenal that you might be able to utilize to try to address harms. What gives you, to use Martin’s phrasing, what gives you confidence and trust in the policy process that we’ve had to date?
Jai Ramaswamy (21:12)
I think that it’s just the fact that we become smarter about these technologies as we understand how they’re used. Going back to the early days when the Justice Department was trying to fight cyber crime, you know, there was a particular unit within the FBI through which all kinds of forensic data and forensic tools funneled when you seized a hard drive.
And these units were few and far between, but as cyber crime became prolific in all types of crime, the tools that investigators had, and this was on the criminal side, we can talk about regulatory separately, but on the criminal side, the tools that every investigator had, had to expand. And so today you’re probably not a white collar investigator or a white collar agent of any sort, unless you’ve got forensic tools and some sort of cyber experience in your background.
And we know now a lot better how to investigate these cases, how to find perpetrators. On the regulatory side, similarly, we know that in the early days there was a lot of fear of encryption, right? And Martin, I think, lived through these debates at the time.
Law enforcement was implacably opposed to encryption and wanted back doors built into it. And all the technologists said, that’s a mistake, because backdoors aren’t just used by good guys. They’re used by bad guys. Not to mention that the internet is fairly porous. You’re going to need encryption if you want to do things like e-commerce or send private information over the internet. And that has borne out: e-commerce on the internet would be impossible without robust encryption today.
And what solved the debate in some senses was PGP, to a certain extent, once it became prevalent and useful and people could actually understand it. Has the encryption problem gone away? No, there are still conflicts between law enforcement and tech companies, but we figured out ways of navigating through this. Not perfect, but ways that don’t hamper innovation, that don’t throw the internet out just because a lot of bad stuff happens on it. And we’ve learned to coexist with these tools through that.
So that experience gives me some confidence that we can work our way through this. Is it going to be messy? Are there going to be risks associated with this? There are, and there always are risks, and I don’t mean to minimize the risks. But you have to deal with those risks as they actualize themselves, as they emerge, because it’s very difficult for the human mind to conceive of those risks in a vacuum and to know what you would concretely do about them. It’s hard to know how you would actually manage those things without understanding what their implications are. And I think we’ve done that in various ways.
Martin Casado (24:11)
Yeah, I think this is exactly right. And I think a good way to metaframe what Jai just said, which I totally agree with, is: we’ve hit this equilibrium, where we’re balancing a lot of things, like what good guys can do versus what bad guys can do, right? Like, this is an equilibrium state, innovation versus safety. This is an equilibrium state. And we’ve developed a number of policies to maintain this equilibrium state. And within that equilibrium state there’s a lot of work we can do. Like, we may decide certain applications of AI are bad. There’s a lot of stuff we can do in the equilibrium state.
The question is, do we know enough to change that equilibrium state? Do we know enough to handicap the good guys when the bad people have access to technology? Do we have enough knowledge to handicap medical innovation for some precautionary concern?
I think that the reason that we’re banging the drum and we have been is not that we have some ideological bent one way or the other. I think the reason is, there’s this equilibrium that balances a lot of trade-offs and you should consider all of these trade-offs. And if you’re going to change that equilibrium that could impact innovation, that could impact medicine, that could impact our ability to defend ourselves, you need really good justification.
And a good justification is not, well, maybe it’s dangerous, because this harkens back to the precautionary principle, which has just not worked for innovation, certainly in modern times. It really is an ideological or methodological concern, not an issue-specific concern that we have.
Jai Ramaswamy (25:46)
Yes. And there are two data points that are really important. One is, look, who was the first out of the box with AI regulations? It was Europe, right? They put something in place that has had a hugely detrimental impact. And they’ve had to walk a lot of it back.
The EU just recently came out with a recognition that the AI framework is flawed and they need to walk it back. Now, I don’t think they’re going to repeal every single regulation on the books, but it shows the dangers of moving into this world. And now the thing that they’re most concerned about is that the US and China will be producing all the models and Europe won’t and their people won’t even have access to some of those things.
Martin Casado (26:52)
I think that’s exactly right. You look at where the equilibrium is: we have played with the equilibrium. We have chilled development. We’ve had a different approach. And I think the net of it, in retrospect, is we’ve given China a head start. And now if you look at the use of open source AI models, just as one sector, I’m not saying open source is the whole picture, just look at that as one indicative sector: they are currently dominant. And so I just feel like we have lost the ability to consider all the implications of this. And that’s why we’re just trying to get a robust conversation back in.
Jai Ramaswamy (27:33)
Yeah, Martin, can I ask you a question on that? I’d love to follow up on that question, the open source question, because it seems to me that, you know, if you wanted to take a skeptical view of this, you could be like, well, you know, China was just better at this stuff. It doesn’t really have to do with regulation. But I think you’ve been seeing what happens on the ground with researchers when they’re faced with this kind of uncertain regulatory environment and what it does.
So could you kind of put some concreteness behind that? As we’ve spoken about on other podcasts, we have, you know, 1,200 bills in the states. We had a Biden EO that existed and no longer exists, but we now have federal legislation that may come to bear, and there’s a bunch of uncertainty. So how does that play into people who are developing this software, and how does it lead to a chilling effect? Because I just think that some sort of concreteness around that would be helpful for people to understand.
Martin Casado (28:31)
Yeah, and listen, I don’t want to trivialize the situation. These are incredibly complex issues. They’re multifarious. They include a lot of stuff, right? And I think to try and reduce it to “it’s just regulation” is incorrect, right? But I do think it matters.
So let me start by saying the leader in AI is the United States. The United States’ closed-source models from OpenAI and Anthropic and Google and the rest are the best in the world, and nobody’s even close, right? But, you know, closed-source or proprietary systems tend to be the primary capital drivers as far as business, but they’re not the primary innovation drivers for hobbyists, researchers, academics. A lot of the industry tends to get pushed forward by open source, and that ends up becoming the future, right? We saw this with operating systems, Linux very famously. So it’s a very, very important part of the ecosystem.
Matt Perault (29:24)
So can you give us a sense of like, what’s the impact of regulation on open source development? Open source seemed to be under threat a couple years ago, then there’s this DeepSeek moment and everyone sort of backed off.
Martin Casado (29:36)
It’s really weird because the people talking against open source were not the people you’d expect. It’s like researchers, VCs…these have always been the champions of open source. And so it’s been a very strange time.
So here’s the thing. The United States is the leader in AI by far. Like, we have the best models, we have the best proprietary models, but they tend to be proprietary, right? And China has done a great job with open source, which you can actually expect from a number two, right? This is the classic case in technology: the leader does the proprietary version. You can run faster that way. You can verticalize better that way. And then the number two says, well, we’ll do open source as an approach.
The question is, why have we not caught up, or why don’t we have anything right now? China’s running away with open source models. I think a lot of it has to do with regulations and legal action due to things like copyright. So for example, when you do see these big labs release open source models, and they’re the ones that would release the best ones, a lot of the time you could tell they were generated with synthetic data. And why would they do that? Well, I would surmise that the reason you do that is because you don’t want somebody, and we have a lot of spurious lawsuits like this, just basically trying to pull out proprietary images and then suing the company. So there’s actually a lot of risk to putting out open source models if you’re a US company. And so I think we’re already seeing a bit of a chilling effect based on the uncertainty.
And I would say more than any given rule, the uncertainty of where the landscape is going to go has had a chilling effect, keeping our big labs and our big companies, which are the leading organizations in the world, from putting out open source models. So I think that’s very, very clear at this point. So instead they stay proprietary. So we’re still in the lead. But the problem is that open source is always a critical part of the innovation ecosystem. Because while it’s not the number one business driver, like the proprietary models are, it’s what’s used by hobbyists, it’s what’s used by academics, it’s what’s used by startups. And that tends to be the future. And so this uncertainty in the regulatory environment is keeping US companies from releasing open source models that are strong, which is very, very clear. And as a result, the next generation, the hobbyists and the academics, are using Chinese models. And I think that’s actually a very dangerous situation for the United States to be in.
Matt Perault (32:06)
Can you talk a little bit about the timeline for when you see the effects of this? One thing that I feel like we often hear in tech policy is people will say, such and such happened and the sky didn’t fall.
Martin Casado (32:20)
So my job is to sit there as companies come in and pitch us the next generation of companies. And of those, one in 10 is going to do a great job, one in 20 is going to be a very successful company, and maybe one in 50 is going to be a public company. So this is a view into the future. And if they’re using an open source model, I would say 80% of the time it is a Chinese model. And listen, we can opine on why that’s dangerous. I mean, here’s a very simplistic description, without going into sci-fi or any sort of, you know, scary scenario. If I’m a Chinese company that’s providing open source models, and I want to have an advantage, I just keep the largest model and give myself a six-month advantage with it before I release it again. And everybody’s dependent now on you. And you have a built-in head start for the most powerful models as they come out. It’s just so trivial to see why this is a bad idea for the United States and a bad idea for the ecosystem.
And so I will say today it is absolutely the case that they are dominant for open source models, just open source models, but that’s such an important piece. The advantage, again, without going into scariness about backdoors or any of the other stuff, is literally just release cadence. They can put United States startups behind.
Jai Ramaswamy (33:34)
There’s a political component too that we just have to be aware of, which is that it matters very much, not just for the dominance of a particular country or a particular set of industries, that the U.S. leads in this area. If you think about the implications that open source had more broadly for soft power, for open societies, I think of AI as that on steroids, because it operates as sort of a control layer to computing and will be the interface through which human beings interact with computers going forward. It can shape the way that information is produced and the way that information is perceived. And so I do think that it matters very much whether the advantage goes to China and models that are imbued with its values, versus a much more open view of information and the way information should be shared and disseminated. And look, it’s a trivial example, but I think an important one, in terms of Tiananmen Square and how that’s represented in Chinese models versus elsewhere. And I think that you can see that playing its way out over the decades as AI becomes more important. So there’s a geopolitical angle here that the US really should be taking very, very seriously. And in some ways, China’s playing, it feels to me at least, a different game, and a game that they played with Huawei and others, which is they understand that adoption is the thing that matters, right? If you want to create network effects, if you want to create dominance, adoption matters, and open source is one way of achieving that adoption.
And while we’re bickering about, you know, what states should regulate, whether the federal government should do this or that, they’re off achieving a level of penetration and a level of adoption that puts them ahead. And even if they’re not the best models, they’re the models that people want to use; it’s kind of like Betamax versus VHS, you know, it doesn’t matter in some senses. The best model is one that’s cheap enough that people can use it and can put it into their products. And that will have implications down the road. And I think that that’s a super important risk that’s sometimes underestimated as we talk about the other risks.
And so, to Martin’s point, policymaking is a trade-off, always. There are always puts and takes. And, you know, as somebody who sort of studied economics early in my academic career, that tells you there are no perfect solutions. There are just equilibria of trade-offs.
I think robust debate is the important thing. And I do think that there are large portions of this that aren’t being debated enough in furtherance of risks that feel at least more remote than the risks, the prosaic risks that we’re seeing happening every day and that we should be addressing more urgently. And this isn’t a call for no regulation. I think that’s the canard that’s often thrown around, but what’s going to be effective and what’s the right balance between different competing objectives?
Martin Casado (36:56)
Yeah, this is the perfect example of why the equilibrium is so hard. And for even the smartest people, it doesn’t follow intuition or reason, and therefore we need to focus more on precedents, right? So there’s this question: should we focus on precedents? This is a great example. Historically, we would not be anti-open source, at least for a large portion of the constituency.
And some very, very smart, knowledgeable people were anti-open source in this wave. And they would write articles and they’d say, hey, listen, if we do open source, we will enable China. So it’s a departure from historical precedent. The rationale, which appeals to intuition, is that we’re enabling China. And the result is counterproductive: China actually got ahead, right? And so I think when we have these new platform shifts with new technologies, we should follow what we’ve done in the past. And then if we’re going to have a departure, we should really understand why it’s different, because these equilibriums are so tough to understand otherwise.
Matt Perault (38:06)
So why does all this matter for Little Tech? That’s the focus for us. When we talk about regulating use versus regulating development, what is it about development-focused regulation, the kinds of stuff we’ve seen like FLOPs-based thresholds or impact assessments or audits or determining liability, that really affects people at the stage of company development that, Martin, you see so much of?
Martin Casado (38:33)
I mean, the bottom line is uncertainty really is death in startups. And we see it all the time. So for example, in the last two weeks, I actually had a very AI-forward VC pull a term sheet from a startup, which is probably going to kill the startup, because they’re just uncertain about the regulatory environment. And so I think it’s not even the regulation yet that’s problematic.
The problem is, because we don’t know what to regulate, because we don’t know what the marginal risk is, there are just a bunch of proposals out there. We don’t even know where they’re going to land. We don’t know how to think about it. We have no framework. And that has really chilled, certainly, the funding environment. But the same is true for the customer adoption environment. It’s true for the hiring environment, right? I will tell you, I hire engineers all the time into AI startups. The regulatory question comes up all of the time. There’s kind of no aspect of a startup that isn’t impacted by uncertainty in the regulatory environment. And that’s exactly the environment we’re providing.
Jai Ramaswamy (39:38)
I think that the important thing from a startup’s perspective is that they focus on building things, right? Like the thing that the startup is most worried about is not existing tomorrow. And obviously startups don’t want to break the law. So they’re going to follow whatever laws are on the books. The problem becomes what the opportunity cost of that is. The reality is that if you’re a large company and have resources and are well-funded and can deploy armies of lawyers, your innovation will slow down, but it’s not going to stop. And if you’re a startup and you’re just a couple of people, it becomes very, very hard to operate in an environment that has layers and layers of regulations, different regulations with different states layered on top of uncertainty at the federal and state level. You’re sort of left with, okay, what can I do and what can’t I do when all I want to do is build?
And that really does give an enormous advantage to incumbent players that are well-resourced, that have revenue streams, that have adjacent products where they already have product-market fit. If you’re Google, you’ve got your advertising cash cow. So you can do whatever you need to do to invest in this space. But if you don’t have anything and you’re trying to create an industry from scratch, to me, at least, that seems to speak for itself in terms of how those two types of companies would fare in a regulatory environment that’s uncertain and that’s prohibitive, or at least dissuasive, of innovation.
Matt Perault (41:13)
Jai, you’re articulating the things that we should stay away from: regulating development, regulating the underlying innovation. When you’re sitting across from a lawmaker who says, I’m hearing from constituents that they’re concerned about this technology, I feel an urgent need to do something to protect my constituents, what are the kinds of things that you say in response?
Jai Ramaswamy (41:32)
I think it’s a two-fold inquiry. I think the first thing to really explain is that there are a lot of regulations and laws that already apply to AI. There are general-purpose use restrictions on all kinds of bad activity, and you don’t get a pass from them just because you happen to be using AI now.
And candidly, if you were to create a very specific LLM-focused law, in about a generation of models, it’s going to be irrelevant, because the definition won’t apply to the world models that are now being produced, or whatever is going to come after that. And so I do think that the response is to say, look, let’s figure out what is prohibited today and what you’re afraid of. In Martin’s language, the marginal risk, right? What is not covered today by the laws that we have on the books? You can’t discriminate using models. You can’t do unfair lending using models. You can’t, you know, differentially hire. And we’ve seen examples where hiring was affected by these models and they were pulled back. All of those laws still apply, and you don’t get a pass just because you have an AI model that’s being used to implement your practice. So that’s the number one thing. And so where are the gaps in those laws? Let’s identify those gaps. And there may very well be some, but let’s see what they are. And then sure, you should pass laws.
And this is the key part of it. You should pass laws of general applicability that are technology-neutral, so they don’t become obsolete pretty quickly, especially with a technology that’s innovating so rapidly. You want to have general, tech-neutral types of laws. Let’s figure out what those gaps are and let’s pass those laws. And those will tend to be use-based laws for the most part.
And again, this kind of gets back into a question of federal-state relationships that I know you and I have written about. Some of those are gonna be passed by the states. States have plenary police powers to protect their citizens for consumer protection purposes, for all sorts of purposes. Some of that will be federal, and that balance between what the federal government should do and what the state governments should do will be sort of hashed out. But you’ve got to focus on where the gaps are in the law, as opposed to assuming that, you know, I’ve got to run, I’ve got to do something, I’ve got to pass something, which might end up confusing things more than anything else. I think that’s the key: what is not covered today that you want to be covered, and how do you put a framework in place that addresses the bad uses, the misuses, of these things?
Martin Casado (44:12)
I think the last thing that is really important to impress on lawmakers is, if we focus on development and we don’t focus on use, you end up introducing tremendous loopholes, because it requires you to describe the system that’s being developed. And right now there actually is no single definition for AI. And every one we’ve used in the past now looks totally silly because it’s evolving so quickly.
So if the lawmakers actually want to have effective policy, the only area that you can actually specify is the use of these things. And so it’s kind of a fool’s errand, even on the surface of it, to try and do it based on a set of artifacts, because it is just not a single system.
Matt Perault (44:57)
Jai, Martin, thanks so much.
Jai Ramaswamy (44:59)
It’s been a pleasure. Thanks, Matt.