<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[a16z AI Policy Brief: Conversations]]></title><description><![CDATA[Exploring the most important issues and questions in public policy for keeping the US competitive in AI.]]></description><link>https://a16zpolicy.substack.com/s/conversations</link><image><url>https://substackcdn.com/image/fetch/$s_!RVrB!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fad650-01f2-43c8-ace2-725a62e9e848_256x256.png</url><title>a16z AI Policy Brief: Conversations</title><link>https://a16zpolicy.substack.com/s/conversations</link></image><generator>Substack</generator><lastBuildDate>Tue, 21 Apr 2026 07:21:41 GMT</lastBuildDate><atom:link href="https://a16zpolicy.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[a16z]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[a16zpolicy@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[a16zpolicy@substack.com]]></itunes:email><itunes:name><![CDATA[a16z]]></itunes:name></itunes:owner><itunes:author><![CDATA[a16z]]></itunes:author><googleplay:owner><![CDATA[a16zpolicy@substack.com]]></googleplay:owner><googleplay:email><![CDATA[a16zpolicy@substack.com]]></googleplay:email><googleplay:author><![CDATA[a16z]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Open Models, Measurable Safeguards]]></title><description><![CDATA[A conversation with Black Forest Labs on building frontier visual AI, proving openness and responsibility can coexist, and why policy should support both.]]></description><link>https://a16zpolicy.substack.com/p/open-models-measurable-safeguards</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/open-models-measurable-safeguards</guid><dc:creator><![CDATA[Matt Perault]]></dc:creator><pubDate>Thu, 16 Apr 2026 13:40:45 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/194207294/0a3903ce26aca38655abb9164c8a970e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><a href="https://bfl.ai/">Black Forest Labs</a> has established itself as a pioneer in visual intelligence, with its open-weight FLUX models reaching over 50 million downloads on Hugging Face and rivaling models from Google, OpenAI, and DeepSeek in developer adoption. The company has distinguished itself not only through technical capability, but through a strong commitment to open research. Black Forest Labs <a href="https://bfl.ai/blog/capable-open-and-safe-combating-ai-misuse">recently published </a>comprehensive safety evaluations showing its latest FLUX.2 model family demonstrated more than 10 times fewer vulnerabilities for serious risks than other leading open-weight image models, and its post-training mitigations reduced vulnerabilities by up to 98%.</p><p>In this conversation, Black Forest Labs&#8217; Adam Chen and Ben Brooks, who lead the company&#8217;s legal and policy work, join Matt Perault to discuss what it means to build frontier visual AI openly. 
They explain the role of open models in advancing transparency, driving down the cost of innovation for developers, and strengthening security and sovereignty by reducing the world&#8217;s reliance on a handful of closed APIs. They also outline the unique policy challenges facing open-weight model developers.</p><p>For policymakers, their message is clear: supporting open innovation does not require abandoning oversight. It requires targeted rules, analysis of where harms arise, and a better understanding of how proposed regulations land on smaller frontier labs, not just the largest incumbents.</p><p>The conversation also offers a window into what it looks like to build a policy function at a startup. With a legal team of four and an even smaller policy operation, Black Forest Labs is navigating the same regulatory landscape as firms with thousands of employees on their legal and policy teams. Adam and Ben offer a candid view into how they enable their small team to have outsized impact, rather than trying to match Big Tech&#8217;s playbook.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;e0862b23-64c5-400c-b7c0-be0ccb145473&quot;,&quot;duration&quot;:null}"></div><p>Topics covered:</p><p>00:00: Intro</p><p>01:09: What is Black Forest Labs?</p><p>02:25: The makeup of a legal team at a frontier AI startup</p><p>06:26: The role of visual intelligence in the AI ecosystem</p><p>09:01: Core risks and baseline safeguards for visual models</p><p>09:46: Unique policy challenges of open-weight models</p><p>11:50: Restricting access to general-purpose technology should be a last resort</p><p>15:04: What&#8217;s at stake: open models as soft power and the China dynamic</p><p>19:19: BFL&#8217;s approach to being open and responsible</p><p>21:38: BFL&#8217;s model testing results</p><p>24:11: How a four-person legal team approaches disclosure and compliance</p><p>27:02: What works and what doesn&#8217;t in transparency proposals</p><p>30:19: Navigating the state, federal, and international patchwork as a startup</p><p>32:46: BFL&#8217;s advocacy goals </p><p>36:25: The Little Tech voice as a competitive advantage in the policy ecosystem</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><p><em>This transcript has been edited lightly for readability.</em></p><p>Ben (00:04)</p><p>I think a lot of developers and a lot of policymakers think you&#8217;re either open or you&#8217;re responsible, you can&#8217;t be both at the same time. And what this data, what these evaluations help to show is that you can be both at the same time.</p><p>It&#8217;s really important for policymakers to understand that open models are a vehicle for projecting soft power abroad. 
And the values and the choices and the vulnerabilities encoded and embedded in these models are going to determine behavior of all sorts of real world systems for the next few decades.</p><p>Adam Chen (00:34)</p><p>We believe that a voice that can articulate a Little Tech point of view is important to ensuring that policymakers have a more robust understanding of the entire market and what regulations they may be contemplating, and what impact that could have on innovation and competition.</p><p>Ben (00:47)</p><p>Restricting access to general purpose technology should be a measure of last resort, not a measure of first resort.</p><p>Matt (01:00)</p><p>Welcome to the a16z AI Policy Brief.</p><p>Adam Chen (01:03)</p><p>Thank you, it&#8217;s great to be here.</p><p>Matt (01:05)</p><p>Adam, can you tell us a little bit about Black Forest Labs? What is the company and what are you guys doing?</p><p>Adam Chen (01:09)</p><p>Yeah, it&#8217;s my pleasure to do so. So BFL is a frontier research lab building the standard for visual intelligence. It was founded in Freiburg, Germany, in the heart of the Black Forest, hence the name Black Forest Labs. And really the founders were in university studying visual intelligence and at a certain period in time decided to kind of continue their work by building their own startup from Germany.</p><p>And so our mission is to build foundational technology in visual intelligence. And what that means, essentially, is high-performance, high-quality, lean, open, and fast visual intelligence models. We do so, and we want to do this, by building openly, by investing heavily in open research. And we ultimately believe that this will result in better gains for the company and for society as a whole.</p><p>As for myself, I am heading up the legal team here at Black Forest Labs. I joined about a year ago and I&#8217;m very excited for all of the work that the team has been building towards.</p><p>Matt (02:18)</p><p>Adam, you are referencing the legal team at Black Forest. I think sometimes when people hear legal team, they&#8217;re thinking of dozens or hundreds or thousands of lawyers. Can you give us a little bit of a sense of like, what does the legal team look like at Black Forest Labs?</p><p>Adam Chen (02:25)</p><p>Very small. So in terms of people who are acting as lawyers, it&#8217;s about four people. And that&#8217;s two on the commercial side and myself and another lawyer on the product counseling side. And so it&#8217;s very minimal, I think, in terms of the scale as compared to some of our competitors out there.</p><p>Matt (02:45)</p><p>And you&#8217;ve recently grown the function to include policy as well. Ben, you&#8217;ve recently joined in a full-time role. So Ben, what are you doing at Black Forest?</p><p>Ben (02:57)</p><p>Yeah, I lead public policy and a lot of what we&#8217;re calling model assurance work. In other words, how do we validate the claims that we&#8217;re making about the risk in our models?</p><p>I&#8217;ve known the team for quite a while now in one form or another. I first reached out to the team when they were working on Stable Diffusion back in the day. We had a congresswoman who was writing to the White House saying we need export controls to stop the release of this technology. And that&#8217;s how I got involved.</p><p>And then most recently joined to lead our public policy work. Before that, I was a fellow at the Berkman Klein Center at Harvard.
And before that, a whole lot of regulatory advocacy in some really spicy, high-stakes, usually commission-based domains, right? From drones through to crypto, ride sharing, and a whole bunch of stuff in between.</p><p>So for me, this has always been an exercise in how we can regulate in the public interest, but do so in a way that is maximally compatible with open innovation. And this is more important in AI than ever before. And this team has been at the forefront of showing the world that there is an open and capable future out there. It doesn&#8217;t just have to be closed-source APIs. So I&#8217;ve been excited about their work for some time. And as Adam says, it&#8217;s a lean, agile team, but we&#8217;re punching above our weight in all the ways that matter.</p><p>Matt (04:20)</p><p>So you guys have both done a range of different things in tech policy and law. Ben, maybe starting with you and then Adam, I&#8217;d love to hear your thoughts. Like why BFL? What is compelling to you about this opportunity versus your work in academia or other companies large and small that you might be at?</p><p>Ben (04:35)</p><p>Yeah, I mean, I think it&#8217;s a combination of things. From the perspective of our team, having a team that is comprised of missionaries, not mercenaries, right? They deeply care about research. They care about moving the frontier in all the ways that matter to real world developers and deployers. And they care about releasing this technology and this research openly. And to me, that was a huge draw card. And it sets them apart from a lot of the other players in this space.</p><p>I think there are also some interesting, fairly unique challenges that a team like BFL faces. We&#8217;re doing frontier work, which means that we have some of the most capable models out there. We&#8217;re doing it openly, and open models have unique challenges and unique properties that we can maybe discuss today. And we&#8217;re a small team, right? So we&#8217;re up against the Googles of the world, but we don&#8217;t necessarily have the same ability to absorb risk.</p><p>And we need to be very careful about how we go about this work. We need to show the world that we develop this stuff thoughtfully. And so, you know, a really compelling mission, a really important mission, a great team coupled with some very unique headwinds, I think makes this a really attractive place to continue that mission with government and policymakers.</p><p>Adam Chen (05:49)</p><p>On my end, the team is exceptional here. Everybody is really, really talented at what they do, and they&#8217;re great people. And that was a really great draw for me. And I think, honestly, one of the most exciting things to do in law and policy right now is working on the frontier. And right now, AI is the frontier of law, policy, and technology. And so this team is looking to push that frontier to expand the capabilities of what AI models can do. And that makes it, professionally, a very exciting challenge.</p><p>Matt (06:26)</p><p>So I want to get into the specifics of the law and policy issues, but to get there, I think we need to understand a little bit more about the technology itself. So when you talk about building visual capability at the frontier, what does that actually mean in practice?</p><p>Ben (06:37)</p><p>Well, with visual intelligence, think about language models, right? Language is a really abstract and highly compressed representation of the world.
And so we have models out there, fantastic models that can learn to manipulate text for a whole range of applications, right? Coding, search, everything in between. But their understanding of the world is fundamentally limited in important ways. And our team&#8217;s belief has always been that pixels encode a much richer representation of the world. If you think about us as people, vision is primarily how we perceive and understand and interact with the world around us. And so, you know, the bet that we&#8217;re making as a team is that visual AI is going to transform the world, not just in these very creative, highly expressive ways that we see today, but in more functional ways as well.</p><p>If you think about today&#8217;s sort of current crop of visual models, we have relatively narrow capabilities, but we still have some high-impact applications: film, gaming, design, concepts in architecture. But where we&#8217;re moving towards is more of a unified visual intelligence architecture, where you have versatile models that can bring together perception of the visual world, simulation, visual reasoning across different modalities and do so in one scalable architecture. So in the short term, that means better simulation environments for agents, embodied systems that are using models to learn how to navigate complex environments. That&#8217;s really exciting.</p><p>I think long-term, this means potentially embedding these models in high-stakes systems themselves, right? The model isn&#8217;t just predicting or generating the next frame in a video sequence, but it&#8217;s actually outputting an action or a recommendation. And this is fundamentally how we&#8217;re going to unlock specialized robotics, web agents with computer use, embodied systems that can manipulate physical tools in really consequential environments. So for us, it&#8217;s not push a button, get an image, which is what the technology is historically associated with today. But long term, there are some really impactful, important functional applications of this stuff. And our team, as I say, is pushing the frontier there.</p><p>Matt (08:43)</p><p>Adam, when Ben was talking in his opening just about the nature of the work and he was talking about visual capability being compelling from a technology perspective, he was also saying it raises unique challenges in the regulatory context. What are those challenges and how are you guys responding to them?</p><p>Adam Chen (09:01)</p><p>I think at a base level, there is some visual content that is just outright illegal in many jurisdictions, right? And so you&#8217;ll see that with CSAM, child sexual abuse material, and non-consensual intimate imagery, NCII. And so one of the things we really try to do is to tackle those types of core risks that exist right now with visual models at a really baseline model layer and not just try to paper over it with some sort of inference moderation layer.</p><p>Matt (09:35)</p><p>And the other policy issue that you were flagging in your opening was how open you&#8217;ve decided to be. So what is the feedback that you hear from the policy community about your decisions around openness?</p><p>Ben (09:46)</p><p>I think open models, as opposed to closed models, pose some unique challenges. You can modify them for purposes that were not anticipated by the original developer. You can run inference on them independently without moderation layer filters.
And if there&#8217;s something wrong with the model, if there&#8217;s a vulnerability, it can be very difficult to withdraw that model from circulation once it&#8217;s out there on the internet on platforms like Hugging Face or GitHub. And so we&#8217;ve always been very sensitive to those challenges.</p><p>I think one of the difficulties is that when policymakers look at AI, they expect us to mitigate a lot of these risks in the model layer. And that&#8217;s everything from political deepfakes, sexual deepfakes, automated decision making, negligence, and defective product design, to the copyright issues. All of this is expected to be done at the model layer. And the challenge is that many of these risks are highly context dependent.</p><p>And it&#8217;s not necessarily possible to mitigate that in the model. The models are versatile. You can do many things with them. Most of it good, some of it bad. Those capabilities are often entangled in ways that can be difficult to tease apart. And then, if you embed a model with particular safeguards, you can, with enough data, compute and expertise, start to unwind some of those safeguards. So I think that mismatch is one of the biggest challenges we face with policymakers: the expectation that models are the silver bullet and we&#8217;re going to fix all of these challenges in the model, and the reality that actually the model is just one component in a complex integrated system and these risks are context dependent. And while there are many important safeguards we can put in place in the model, that isn&#8217;t the end of the story. And from a regulatory perspective, it&#8217;s really important that we capture some of that nuance in these future legislative and regulatory reforms.</p><p>Matt (11:37)</p><p>I assume when you raise that point with lawmakers, they say, okay, we understand that it&#8217;s difficult, but are you just expecting us to live with a world where all these harms are created? So what is your vision for what the right approach is from a policy perspective if you&#8217;re concerned about the harms, but you recognize some of the limitations of addressing that at the model layer?</p><p>Ben (11:29)</p><p>I think it&#8217;s a few things. I think one is just being very clear that restricting access to general purpose technology should be a measure of last resort, not a measure of first resort. And too often, I think policymakers are looking at those kinds of interventions as the first thing we should do and not the last thing we should do. You think about just the past couple of years: the number of bills we&#8217;ve seen around licensing model developers, around export controls for model weights, some really interesting, fairly exotic liability proposals that make life very difficult for open source researchers and developers. Those kinds of reforms, we need to be very clear, have a huge impact on open innovation, and they should be interventions of last resort. I think the second piece, though, is that I would love to see governments run a systematic gap analysis of where our existing regulatory systems fall short. And so often we&#8217;re kind of legislating before we do that gap analysis.</p><p>And the truth is there are gaps. It was only last year that the US government introduced federal criminal liability for non-consensual intimate imagery. NCII is a huge proportion of the risks that policymakers and real families out there in the real world care about. But it was only last year that we plugged that gap at a federal level.
I think the same is true for a range of other AI systems too.</p><p>So we&#8217;d love to see more of a systematic gap analysis there: figure out, across the supply chain from developers through to users and everyone in between, where do our existing systems fall short and where can we introduce some targeted reforms to level up the confidence in the AI supply chain? So, a lot of different things. We can double click on any of them, but generally speaking, it&#8217;s really important that we make sure that whatever we do for whatever risk, it&#8217;s compatible with the culture of experimentation and open innovation that brought us here in the first place and is going to help to make AI useful and accessible in the future.</p><p>Adam Chen (13:38)</p><p>It&#8217;s also really critical to remind ourselves why open innovation really matters in this space, right? AI advances have in the past critically been all done out in the open. Neural nets were reinvented out in the open. The transformer paper was published readily out in the open. And being able to take this research and these advances and these models and openly release them will allow broader innovation in society. It will allow us to advance science and allow many people to have broad access to critical technology. I think those are all positive things for society.</p><p>Ben (14:21)</p><p>If I could just add, I think a lot of folks forget that there&#8217;s a reason Linux is everywhere. There&#8217;s a reason Android powers our smartphones, right? Like we fundamentally understand the contours of the open source conversation. Like we know it&#8217;s important for transparency. You can inspect the technology. It&#8217;s important for competition. You can build on it without having to reinvent the wheel. And it&#8217;s important for security and privacy. You can fine tune AI models, run inference. You can build systems without all of that data going to an API in California. But I think there is a bit of a disconnect, as Adam says, between understanding the importance of open source in these previous tech cycles and understanding just how important open weights and open research have been to AI and will be in the future.</p><p>Matt (15:04)</p><p>Okay, I&#8217;d love to talk a little bit more about what&#8217;s at stake in thinking about open AI. Can you explain a little bit about what&#8217;s at stake here? What are other countries doing and why is it important that you guys are a strong competing force?</p><p>Ben (15:20)</p><p>Yes, the micro lens on this is, as with Linux, as with Android, developers care about having access to capable open alternatives to closed source technology. AI is going to be critical infrastructure across the economy. In the language model space, it&#8217;s transforming how we access and interact with information. In visual AI, it&#8217;s transforming how we perceive and shape the visual world. And we know that there is huge demand for capable open alternatives.</p><p>Our team has had 50 million downloads of our FLUX open-weight models on platforms like Hugging Face. The team collectively has released models with over 400 million downloads. These models rival Google, OpenAI, and we&#8217;re the only U.S. or European lab to rival DeepSeek by developer likes, to give you a sense for the size of this community. So there&#8217;s huge unmet demand out there.</p><p>And I think it reflects a few things. It reflects developers&#8217; need to be able to inspect the technology before they deploy it.
They want to be able to build exciting new applications that we can&#8217;t imagine today and do so without having to spend millions or hundreds of millions of dollars training their own models from scratch. And it&#8217;s important for security and privacy, and for sovereignty as a result. If folks want to fine tune, optimize, integrate these models and then deploy them into real world systems, they need to be able to do so without sharing all of that data with an API on the other side of the planet. So we know that this huge unmet demand is out there in the global developer community.</p><p>I think the challenge is that if all of these models are coming from one particular player or one particular country, well, they embed certain values. They embed certain design choices and certain vulnerabilities. You see this most clearly with the furor around DeepSeek, right? DeepSeek was actually, in many respects, a center-left model that would talk about most issues much as you&#8217;d expect from an OpenAI or an Anthropic model.</p><p>But in certain areas, it had been heavily censored under Beijing&#8217;s cybersecurity regulations. And so it wouldn&#8217;t talk about Uyghurs, wouldn&#8217;t talk about Hong Kong, Taiwan. So what I say to folks is, imagine a world where the next search platform can&#8217;t talk about Taiwan, or, in the visual space, where the next generative tools can&#8217;t parody Xi Jinping. These are some really serious, weighty issues.</p><p>And we&#8217;re seeing this too in our evaluations of models. If you take a look at our NCII and CSAM evaluations, our team has shown that there are sensible safeguards that can be put in place to drastically reduce the risk of widespread misuse. We know that certain models and certain teams based in China are not necessarily putting many of these mitigations and these safeguards in place. And so what does that then mean for the health and the integrity of the content ecosystem?</p><p>So it&#8217;s really important for policymakers to understand that open models are in some sense a vehicle for projecting soft power abroad. And the values and the choices and the vulnerabilities encoded and embedded in these models are going to determine behavior of all sorts of real world systems for the next few decades.</p><p>Adam Chen (18:18)</p><p>And open models also overlay on top of something else that&#8217;s critical for BFL, which is open research. And we believe really strongly that open research leads to more innovation within AI, leads to more scientific advances, and leads to that innovation and those advances happening in a very responsible, acceptable way.</p><p>And so we believe that if you don&#8217;t allow open models, you basically increase the cost of innovation for developers. And that has real implications on a strategic level, at a national level, at a geopolitical level, all of which needs to be considered before we decide whether open is bad and closed is good.</p><p>Matt (18:56)</p><p>And then how do you respond to the concerns that policymakers raise about openness? So you&#8217;re talking about all the benefits of openness for the good guys. But I think what we often hear when we&#8217;re talking about the benefits of open source is, people will express concerns about, well, if it&#8217;s open for the good guys, it&#8217;s also open for bad actors. And so it can be misused in different ways.
So Adam, when you get to that point in the conversation with lawmakers, what do you say?</p><p>Adam Chen (19:19)</p><p>I think there&#8217;s a lot that one can do to kind of make sure that models have an appropriate level of safety when they&#8217;re being released, right? A lot of the work that Ben has done at BFL has been setting the right levels and putting the right testing mechanisms in place so that robust evaluations are being done on whatever models are placed in the broader market.</p><p>And I think having a clear understanding of what we&#8217;re testing for, why we&#8217;re testing for it, and how we&#8217;re evaluating all of these results is very critical in making sure that we&#8217;re addressing these concerns that policymakers have, but it is also overall just a good thing to do.</p><p>Ben (20:01)</p><p>I think a big part of getting policymakers comfortable with open weights is also just explaining the reasonable safeguards that we can put in place today to help lower that risk. We can talk about what we do with our models in the visual space, but the bottom line is there are mitigations that we can embed in the models themselves to help reduce the risk of misuse and, we think, reduce the risk of malicious modification as well.</p><p>I think part and parcel of that is also explaining to policymakers that there is always going to be a residual risk. And we need to think about other ways to mitigate that residual risk across the supply chain. So if you think about deepfake content, for example, sexual deepfakes, NCII, synthetic CSAM, this isn&#8217;t just something that can only be fixed in the model. We also have to think about how content is distributed downstream, who&#8217;s using it, and what tools law enforcement has to respond to these forms of misuse. So I think when we&#8217;re very candid and we&#8217;re very thoughtful and open with government, whether it&#8217;s members of Congress or leadership in the EU Commission, about what we can do and the state of the possible, but also the fact that there is that residual risk and those residual vulnerabilities, I think generally the conversation is in a much better place. And it also means that any reforms are going to be much more precise, more targeted, and less likely to interfere with that spirit of innovation.</p><p>Matt (21:19)</p><p>One thing that&#8217;s really impressed me about your approach to these issues is you haven&#8217;t been just saying, we&#8217;re great at safety, and published pretty statements about how great you are on safety. You&#8217;ve actually published data. You&#8217;ve really tried, I think, to say, like, here&#8217;s how we perform on various metrics that are important. So can you talk a little bit about the research that you recently released?</p><p>Ben (21:38)</p><p>Yeah, there are certain risks that don&#8217;t require a lot of context. You know it when you see it. And it&#8217;s really important to mitigate this in the model itself, not just in APIs, applications, and content distribution platforms. And so two of those risks are synthetic non-consensual intimate imagery, or sexual deepfakes, and synthetic child sexual abuse material. It&#8217;s really important to us that we make it easier for developers and users to do the right thing with our models and at the same time make it harder for bad actors to potentially misuse or modify those models.
And so one of the things we&#8217;ve been working on is how we can better quantify these vulnerabilities in our models and use that data to make an informed pre-release decision.</p><p>And so working with one of our partners, Cinder, we built up a very comprehensive evaluation and benchmarking process to compare how our proposed checkpoints, our proposed release candidates, perform compared to models that we&#8217;ve released previously and compared to other powerful open-weight models that are out there in the market. And the findings were really promising. They showed that with our pre-training and our post-training mitigations in the model, there are more than 10 times fewer vulnerabilities than models released by big tech firms in China. We showed that the post-training mitigations alone can help to reduce those vulnerabilities by more than 90% in some cases.</p><p>And we showed that while there is that residual risk, that residual risk can be nearly eliminated through the application of some very industry-standard moderation practices at the API and the application layer. So it&#8217;s really exciting for a few reasons, but chief among them is that we can show that you can be both open and responsible at the same time, right? I think a lot of developers in this world and a lot of policymakers think of them as mutually exclusive: you&#8217;re either open or you&#8217;re responsible, you can&#8217;t be both at the same time. And what this data, what these evaluations help to show is that while there&#8217;s still a lot of work to be done and there&#8217;s still a lot of scope for improvement, you can be both at the same time. And that&#8217;s really important if we want to continue to release this technology openly long into the future.</p><p>Matt (23:44)</p><p>So there&#8217;s a lot of debate right now, I think, on how developers can publish information that&#8217;s actually useful for people, but not too burdensome for developers. And it seems like you think that you&#8217;ve found a nice balance here. Adam, you were describing a legal team of four people, obviously significantly smaller, more limited resources than larger companies. Why is this approach to performance evaluations and disclosures one that works for BFL?</p><p>Adam Chen (24:11)</p><p>I think a large part of it has to do with focus and making sure we&#8217;re focused on what we actually really need to solve. Just to pull back a bit: BFL is really about trying to build that kind of foundational layer for visual intelligence in the market. And so our customers can be quite wide-ranging and may utilize our models in very different contexts.</p><p>For example, we&#8217;ve had customers before who are trying to [build] bespoke children&#8217;s book stories using AI, for parents to create very interesting stories for their children through the use of AI images. On the other hand, you can also have video game developers who may want to create AI assets as inspiration for their final visual assets within a game, right? And so you can see there that what you may want to permit from a violence or gore perspective will vary widely.
And so from our perspective, given our focus on providing that broader infrastructure layer for all sorts of clients, we&#8217;re focused on just removing the stuff that is clearly illegal, and then providing guardrails and moderation standards for our developers, who will then be able to make it more bespoke for their customers.</p><p>Matt (25:30)</p><p>It seems like part of this is a way that you have developed a company culture, a company identity, around combating the idea that openness is unsafe. It seems like your way to flip that is to say, well, if we provide openness, but we also provide measurable mitigations, we evaluate ourselves. We try to encourage downstream norms. We can get to a better place where we raise the bar on these types of issues. Is that a fair characterization?</p><p>Ben (25:58)</p><p>Yeah, I&#8217;ll just say I think part of this exercise is also helping policymakers, civil society, and the wider public think about open technology in a different way. Again, it&#8217;s not access to open technology that is our primary choke point and our primary intervention. It is how do we mitigate these residual risks right across the ecosystem, knowing that that technology is out there. We&#8217;ve done this with open software. We&#8217;ve done this with the open internet.</p><p>And I think fundamentally the same is going to be true of AI. Now, when I say that, I don&#8217;t want to diminish the fact that there are some very acute, very concrete risks associated with this technology. The uplift with AI and with visual AI is meaningful. It&#8217;s significant over the baseline in some areas. But again, I think we need to think in a more joined up way about how these mitigations come together right across the tech stack. And to Adam&#8217;s point around moderation, if we can make available the tools and the resources that downstream developers need to implement this technology safely, then that&#8217;s an important way of mitigating some of these risks as well.</p><p>Matt (27:02)</p><p>For lots of people who do policy for a living, you spend a lot of your day reviewing various different policy proposals, lots of bills at the state level, lots of bills at the federal level. You guys are an international company with a strong European presence, so you&#8217;re reading bills internationally as well. There&#8217;s a lot of discussion about what the right disclosure model is now for companies, and particularly for small companies.</p><p>Just based on your behavior, you&#8217;re not saying no disclosure is the best course. What&#8217;s the delta between what you guys are doing now and what you then see when you&#8217;re reading through disclosure mandate proposals from various different governments?</p><p>Ben (27:44)</p><p>I&#8217;ll be clear, I think a lot of transparency proposals that I&#8217;ve seen are actually trending in the right direction. I think the devil&#8217;s in the details, right? There are a lot of challenges, and a lot of them can disproportionately affect startups compared to Big Tech. As Adam says, we have a small compliance team. You go to Google and there are these compliance teams that are much larger.</p><p>But I think the proposals that have been most challenging have been on the mitigation side. So proposals that try to draw a line in the sand about acceptable risk. They don&#8217;t just say disclose what evaluations you&#8217;ve performed, but they try to actually stipulate what acceptable risk looks like.
And the challenge with that approach is that you create a world where it&#8217;s relatively easy for a closed source developer to comply. You make it much more difficult for an open source or an open weight developer to comply. You&#8217;re not really comparing apples to apples, you&#8217;re comparing apples to oranges. And so most of the proposals that we&#8217;ve seen become challenging when they veer out of pure disclosure, pure transparency territory, and they start moving towards either overt or implicit restrictions on the release of the model itself.</p><p>A good example of this is actually California, right? I think if you look at where SB 1047 was back in the day, and you look at where SB 53 and some of these proposals are going in the future, the trajectory is the right one, which is don&#8217;t stipulate secondary liability for model developers in relation to some exotic and fuzzy, ill-defined harms. But there may be a well-designed, targeted intervention focused on disclosure of your risk assessments, disclosure of the evaluations that you&#8217;ve performed. And without endorsing one bill or the other, I think that is the right direction.</p><p>Matt (29:28)</p><p>That&#8217;s helpful in terms of understanding on substance where you come down. I&#8217;m curious as well just how you deal with the jurisdictional complexity. Ben, you mentioned California and I know you have written in the past about 1047 and various different transparency models there. Adam, again, I&#8217;m seized by this four-person team image in my head. That&#8217;s a small team for navigating a compliance framework in Europe, a compliance framework that&#8217;s being set for the United States in Washington, and then navigating various different state compliance frameworks, including California, which has passed a lot of AI legislation in the last couple of years. Even if you&#8217;re seeing a positive direction of travel, how do you guys navigate that state, federal, and international patchwork?</p><p>Adam Chen (30:19)</p><p>I think maybe just addressing this from a more operational level. Number one is to hire really great people. So I think we&#8217;re very fortunate to have people like Ben and others join us and agree to tackle this crazy hodgepodge of emerging AI regulations at the international, federal, European, and state levels that is quite complex and interlayered.</p><p>More specifically, though, I think it&#8217;s trying to figure out the trends in which you can take one set of actions, let&#8217;s say, disclose one certain way on the transparency side, and meet regulations across the board that oftentimes will have disproportionate requirements as well, which would be great and is something the team looks to do. I think, though, there&#8217;s probably a larger question of what happens if there are a lot of regulations that conflict with each other.</p><p>What do you actually do? And when it comes to that, we&#8217;ll take on those issues when they come up. But I think those are hard, hard questions, which the team is well-equipped to tackle, given the experience we have.</p><p>Ben (31:23)</p><p>I think as a small team, it&#8217;s really important to triage in a really disciplined way. We are not a Big Tech advocacy operation. We need to be very careful about which battles we pick and where we can have an impact.
From our perspective, you know, we look at the world through the lens of how we can best promote and protect open innovation and digital intelligence.</p><p>Fundamentally, it means researchers must be able to share the technology widely. It means developers must be able to build on the technology responsibly. And so when we boil the ocean on 1,800 state and federal bills just this year alone, and we look at 700 pages of the AI Act during the trilogue process, for example, and 100 pages of codes of practice, we look at it through that lens. We say, where is this going to set back that mission? We&#8217;ll talk with anyone, anywhere, in any jurisdiction, to be clear.</p><p>But we&#8217;re not interested in photo ops and roundtables and navel gazing just for the sake of it. Everything we do from an advocacy perspective on law reform has to serve that overriding mission. And when you&#8217;re that disciplined, you&#8217;ll find a lot of work kind of falls away. There&#8217;s a lot of busy work out there in the policy world. And we&#8217;re, as I say, ruthlessly focused on what really matters to researchers and developers.</p><p>Adam Chen (32:34)</p><p>Where can we have outsize impact? That&#8217;s really fundamentally one of the central questions we always ask ourselves, whether it&#8217;s the policy team, the legal team, the research team, the engineering team, and so on.</p><p>Matt (32:46)</p><p>A year ago, Adam, you didn&#8217;t have a policy team, and then you decided that you would start one and that you&#8217;d bring Ben in to lead it. Can you explain a little bit about that decision? Why did you decide that this was the time that you needed to have a policy function?</p><p>Adam Chen (32:59)</p><p>I think one of the central things is that when you find exceptional talent like Ben, you bring them on board no matter what they do. I think also, more practically, AI regulation is a central part of any foundation model lab. It will be something that any foundation model lab will have to tackle throughout its journey from a startup to a larger company, or if the AI lab sits within a Big Tech company.</p><p>And I think we recognized early on that this would be such a unique thing for BFL on its journey that tackling the problem ahead of time would give us some outsize returns. I think we&#8217;re seeing good results there too. Engaging with policymakers and regulators early on makes it so that we&#8217;re not just some strange, random lab operating out of the middle of Germany that no one knows about and is looked at more suspiciously and more fearfully by regulators. Instead, there&#8217;s somebody that regulators and others can interact with; we can engage on these issues and we can try to work through thoughtful solutions.</p><p>Matt (34:05)</p><p>Ben, as the founding member, the lead of this policy team, how are you thinking about the function and potential growth of your team?</p><p>Ben (34:13)</p><p>I just don&#8217;t think that the Big Tech playbook translates very well into startups, and potentially even into model development. We have to think differently about how we do advocacy. I think our growing team and I need to be fundamentally committed to the idea that meaningful public oversight is compatible with open innovation.
Our job is to explain the upside of this technology to policymakers, help to mitigate the downside in concrete ways, and help policymakers to reconcile the two through any further legislative or regulatory reforms.</p><p>From my perspective, with the number of jurisdictions we have and the outsized impact that we have as a business, I think it&#8217;s really important that we, as I say, relentlessly focus on meaningful, lasting impact in the regulatory system, and that we don&#8217;t just create noise for its own sake. I am interested in targeted reforms and helping policymakers to get there. I&#8217;m not interested in generating PDFs and having photo ops and staging roundtables just for the sake of it. And I think if you&#8217;re a small team, it&#8217;s really important to have that discipline. We don&#8217;t want to have the kind of hyper-specialization that I think you see in some of the larger organizations. We need to stay focused and we need to be very clear about what matters to us and our team and our community and what maybe doesn&#8217;t matter.</p><p>Adam Chen (35:25)</p><p>And the focus, as well as the point of view, that we bring as a small startup is somewhat unique, I think, compared to the Big Tech policy teams and the few points of view that they can bring. And I think we can see some outsize influence there, or some disproportionate impact, from making those voices heard.</p><p>We believe that a voice that can articulate a Little Tech point of view is important to ensuring that policymakers have a more robust understanding of the entire market and what sort of regulations they may be contemplating, and what impact that could have on innovation and competition.</p><p>Matt (35:58)</p><p>Lots of people speak about what that point of view is; we have a Little Tech policy agenda. We&#8217;ve talked about it extensively. How would each of you characterize what the Little Tech perspective is? And another way to say that is: what&#8217;s the competitive advantage that you bring as a Little Tech player to the policy ecosystem? You&#8217;re going up against these policy teams that have hundreds of employees, sometimes thousands of employees. What is the Little Tech voice that you&#8217;re bringing to the table?</p><p>Ben (36:25)</p><p>I think the value of a small team punching above its weight is that we have the agility to take some really bold positions on critical issues or major reforms in a way that might be more difficult for a larger team. We can sit down more as a thought partner with policymakers and work through the menu of options. I think one of the challenges with larger organizations, especially those that have come out of the kind of traditional internet platform space, is that everything is so colored by that Section 230 view of the world.</p><p>We need to maintain immunity on the one hand, but don&#8217;t worry, we&#8217;ve got these sort of privatized regulators internally, these trust and safety teams who will take care of it for us. And I think that&#8217;s important for all sorts of reasons, but it&#8217;s fundamentally a different model of advocacy and a different set of risks and options than you have in something like model development. So I think being able to take a bold view, sit down with policymakers, have a considered dialogue about the menu of options, that&#8217;s where small teams can have as much and sometimes more impact than large organizations.</p><p>Matt (37:26)</p><p>Of course there are downsides too.
Are there policy debates you&#8217;ve been in so far where you&#8217;ve felt significantly disadvantaged relative to larger players?</p><p>Ben (37:36)</p><p>I think larger players are sometimes perhaps blind to how these proposals impact small teams doing frontier work, right? Teams in our position. You have proposals out there that are very straightforward for a vertically integrated provider and very difficult for a provider who just forms one part of the tech stack. Those proposals are a relatively minor compliance overhead for a Big Tech firm and an absolutely massive compliance ordeal for a small business in our position. And as I&#8217;ve said, you&#8217;ve got proposals out there that are relatively straightforward for a closed source provider and very, very difficult for an open source or open-weight developer. And so I think sometimes larger organizations have so many competing interests and equities internally that they sometimes don&#8217;t understand the second and third order consequences of these proposals. And as a result, the effect of these proposals on small teams like ours gets lost in the noise.</p><p>So one thing I think has been promising is that a lot of policymakers, including many who we might disagree with on principle, are realizing that there is a more diverse community of developers out there than just the big FAANG or MANGA companies. And they&#8217;re willing to sit down and listen to these smaller teams and see how they can adjust these proposals in more targeted, more precise ways.</p><p>Adam Chen (38:54)</p><p>And overall, we&#8217;re very heartened by the fact that policymakers are aware that they often have a lot of policy discussions with Big Tech policy teams and have fewer of them with smaller tech and smaller companies, just because of a pure resourcing issue. And I think the fact that they&#8217;re willing to talk to us and willing to take our point of view into consideration means that these discussions do have an audience and do have an impact. Providing that point of view, especially as to how some sort of policy could have an onerous impact on much smaller teams, I think is quite critical too. We often saw and heard, when GDPR was put into place as well as when it was implemented, that GDPR could actually slow down Big Tech expansion within Europe, when really, after it was implemented, the exact opposite was the case because of how many lawyers the Big Tech companies could throw at it. Being able to articulate this point of view to policymakers, I think that&#8217;s really powerful.</p><p>Matt (39:52)</p><p>Adam and Ben, this is a great conversation. Thanks so much for coming on the AI Policy Brief.</p><p>Ben (39:56)</p><p>Thanks, Matt. Appreciate it.</p><p>Adam Chen (39:58)</p><p>Matt, it is such a pleasure. Thank you so much.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information.
If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.  Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately. </em></p>]]></content:encoded></item><item><title><![CDATA[Early Signals on AI, Hiring, and the Workforce]]></title><description><![CDATA[A conversation with Deel&#8217;s Nick Catino on what global hiring data shows, where new roles are emerging, and how policy can help people prepare.]]></description><link>https://a16zpolicy.substack.com/p/early-signals-on-ai-hiring-and-the</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/early-signals-on-ai-hiring-and-the</guid><dc:creator><![CDATA[Matt Perault]]></dc:creator><pubDate>Tue, 31 Mar 2026 13:40:09 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192628748/b3a038a1ff5bd5a733100ed426b7e3a2.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>There is broad agreement that AI will reshape the workforce. What remains less clear is how that shift will play out in practice for workers and employers alike&#8212;which roles are changing first, where new opportunities will emerge, and how best to navigate the transition.</p><p>Matt Perault sat down with Nick Catino, global head of policy at <a href="https://www.deel.com/partners/a16z.ecosystem?utm_source=podcast&amp;utm_medium=partner-sourced&amp;utm_content=a16z.ecosystem&amp;utm_place=organic-community">Deel</a>, to better understand what today&#8217;s data can already tell us. Through its HR and payroll platform, Deel works with 35,000 customers and 1.5 million workers across more than 150 countries, giving the company a broad view across employers, geographies, and job categories as AI begins to change hiring and work.</p><p>In this episode, Nick walks through what Deel is seeing firsthand. That includes a 40% increase in the share of companies opening new AI roles in 2025. Deel&#8217;s recent <a href="https://www.deel.com/global-hiring-report-2026/">global hiring report</a> also found more than 70,000 AI trainer roles across 600-plus organizations, with nearly 60% of those roles located in the U.S., and AI trainer positions emerging as the fastest-growing global role on Deel&#8217;s platform.</p><p>The conversation also explores what these shifts mean for policy. If AI is going to change how people work, Nick argues smart policy should focus on helping workers build AI fluency and new skills, supporting students as they prepare to enter the workforce, and giving startups the clarity they need as they hire and scale. </p><p>Nick brings a valuable Little Tech perspective, drawing on his experience building public policy functions at fast-growing startups. 
For founders thinking about why startups need a seat at the table, along with when and how to engage with policymakers, this conversation is especially worthwhile.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;18ffbf1e-e88d-4da9-8839-540cb01328e5&quot;,&quot;duration&quot;:null}"></div><p>Topics covered:</p><p>01:02: Deel&#8217;s global hiring view</p><p>02:15: Building a startup policy function</p><p>06:40: Data as a policy tool</p><p>10:08: Early signals on AI and the workforce</p><p>12:35: Job shifts and emerging roles</p><p>15:18: Policy levers to support workers</p><p>22:10: Why regulatory certainty matters for Little Tech</p><p>24:49: Scaling Deel&#8217;s data insights</p><p>27:08: The rise of AI trainer roles</p><p>29:24: Lessons from building public policy functions at fast-growing startups</p><p>32:53: Why policymakers want to hear from Little Tech</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><p><em>This transcript has been edited lightly for readability.</em></p><p>Nick Catino (00:00)</p><p>You see some jobs go away, some jobs emerge, most jobs end up changing in some capacity. That&#8217;s what&#8217;s happening now. We&#8217;re seeing an increase of 40% in the share of companies with AI-specific jobs. 70% of companies had moved beyond the pilot phase of AI integration. 91% of people told us roles have already changed within their companies.</p><p>Policymakers are hearing from big banks or even Big Tech non-stop; they want to hear from, say, Little Tech. They want to hear from the startups. Make sure you have a seat at the table. Regulatory uncertainty, or not knowing what&#8217;s coming out of Washington when it could have a material impact on your business? That&#8217;s a pretty big problem. You need to see that coming.</p><p>Matt Perault (00:48)</p><p>Nick Catino, Global Head of Public Policy at Deel. Thanks so much for coming on the a16z AI Policy Brief.</p><p>Nick Catino (00:54)</p><p>Hey Matt, thanks for having me. I&#8217;m excited to be here.</p><p>Matt Perault (00:56)</p><p>So first things first, it would be great to have a sense of what Deel is. So tell me a little bit about the company.</p><p>Nick Catino (01:02)</p><p>Yeah, Deel is a payroll and HR platform. We help companies hire, manage, and pay their workforce globally. We have 37,000 and counting companies on our platform and 1.5 million workers. Customers include tech and AI companies like Anthropic, Reddit, Shopify. We have enterprise customers like Coca-Cola and DuPont. But our real bread and butter are startups and SMBs, those that are expanding globally to new markets for the first time.</p><p>And then, as we&#8217;re sitting here in Washington, D.C., we help foreign companies that want to invest here and hire American workers. That&#8217;s really the value prop we offer.
And what makes us unique, and particularly why I&#8217;m excited to have this conversation, is the real-time employment data that it provides. We can see how companies are expanding to new markets, their hiring trends, lots of stuff very relevant right now as we sit in Washington and talk about AI and its impact on jobs and society.</p><p>Matt Perault (01:59)</p><p>I&#8217;m sure there&#8217;s a ton we can talk about on the business side, but this is a policy podcast. So, I&#8217;m really interested in understanding why Deel would hire a policy person. What is the value that you&#8217;re providing to the firm?</p><p>Nick Catino (02:15)</p><p>Well, we&#8217;re growing fast. So I joined two and a half years ago. We&#8217;re only six years old, going on seven now. As a company, we&#8217;ve already hit a billion in revenue and a lofty valuation, but really it&#8217;s because of the scale of the problem we&#8217;re solving. Historically, it was really hard for companies to expand to new markets and hire and manage their workforce on a single platform. They would have all these different systems all around the world. Now, because of Deel, they can use us for everything.</p><p>And ultimately, we&#8217;re helping them with compliance and how they hire, manage, and pay that workforce, and there&#8217;s a realization within the company that we have a responsibility not just to tell our customers what the law is in markets, but to actively partner with governments sometimes to shape it, and also to help answer questions for our customers and just be ever present. So really it&#8217;s a reflection of our growth and the problem we&#8217;re solving.</p><p>We recently expanded to having two people on the policy team. So someone in Europe, myself here in the U.S. I&#8217;ve had the opportunity during my time at Deel to travel the world and talk to governments and introduce them to the company, share with them what we&#8217;re hearing from customers, our employment data, trying to be a resource and a partner to them.</p><p>And then ultimately, I do joke internally sometimes that everyone at the company actually works in policy. Because everything we put out, our product, our services, and that&#8217;s true for any of your portfolio companies, ultimately puts you in front of those external stakeholders like the government. So everyone has a role when it comes to policy or government affairs within your respective company, because you&#8217;re creating what people are seeing and using, and that ultimately has an influence on policymaking.</p><p>Matt Perault (04:02)</p><p>I&#8217;m curious about how you decide how to allocate your time. Because I think when people hear like a billion dollars in annual revenue, that sounds like a large company. We&#8217;re focused at a16z on Little Tech companies. And I think obviously when you&#8217;ve had the success that you&#8217;ve had, you&#8217;re sort of growing out of the little phase.</p><p>But you&#8217;re still really different from lots of Big Tech companies, some of the ones that are typically on the front page of the newspaper. When you think about tackling policy issues, how are you deciding what you have bandwidth to do? One example is there&#8217;s a lot of activity now at the state level, like more than 1,000 AI bills.
Is that something that you&#8217;re able to track with a lot of depth and closeness, or is that really hard when you&#8217;re a team of the size that you are?</p><p>Nick Catino (04:48)</p><p>Prioritization is the right question. I often talk about ruthless prioritization. You need to focus on where you can drive the most impact. That&#8217;s not always necessarily tracking. There&#8217;s a lot of people that do that. But I view my responsibility, and a policy function&#8217;s responsibility, as building strategic relationships with policymakers and regulators and governments in tier one markets. And you&#8217;re explaining this to people internally, because they often ask, what is policy? Do you write internal policies for the company?</p><p>No, our responsibility is to build those relationships, proactively shape our brand, and ultimately influence laws and regulations and policymaking on behalf of our customers and the broader ecosystem. And the way I organize my time, I try to think about proactive, reactive, and thought leadership and content.</p><p>The proactive is where we identify opportunities to work with governments. For example, we signed an MOU, a strategic partnership, with the US Commerce Department a year and a half ago, because they recognize that Deel helps facilitate trade and investments. We did a series of events around the world and here in the U.S. with the Commerce Department.</p><p>Reactive is when there are some regulatory challenges that emerge and you want to go meet with governments and explain the role you&#8217;re playing and how you operate and just be open and honest.</p><p>And then there&#8217;s thought leadership and content. I started my career in journalism, actually, and so that&#8217;s always one of the most fun parts: the brand building, like the AI and Workforce Policy Report we did in the fall, and the research and data-driven storytelling that we&#8217;re doing. Media, like what you and I are doing right now. And so I try to prioritize my time in tier one markets with the right stakeholders. It&#8217;s really important that you&#8217;re ultimately an external rep of the company, so you don&#8217;t spend too much time internally.</p><p>Matt Perault (06:40)</p><p>So let&#8217;s talk a little bit more about the report that you were alluding to. Before we get into the meat of it, I really want to understand the specifics. Let&#8217;s talk about it in the context of prioritization. Why was a report on workforce trends something that you thought was a good way for you, leading policy at Deel, to spend your time?</p><p>Nick Catino (07:00)</p><p>Yeah, it&#8217;s a conversation driver, and it&#8217;s important to note, on the prioritization point, that smart companies work with governments. I think there&#8217;s often a hesitancy to talk to regulators or governments too early when a company&#8217;s founded, because you&#8217;re scared of what might come of it. And that&#8217;s part of why Deel created my role. It&#8217;s a recognition that we&#8217;re larger now and we want to be a proactive, positive partner.</p><p>Matt Perault (07:24)</p><p>The phrase that I always heard was, &#8220;break into jail.&#8221; You don&#8217;t want to go and reach out when you don&#8217;t have an issue.
But it seems to me like that&#8217;s kind of a mindset that&#8217;s shifting a little bit, that companies are now saying the risk is outweighed by the benefit of showing that you want to be a responsible player, you want to be in dialogue with government, you see the opportunity for companies to sell to government. It sounds like that&#8217;s what you guys are saying.</p><p>Nick Catino (07:46)</p><p>You put it well. It can be an opportunity driver, a growth driver, and it can help mitigate risks by being proactive with governments. Another phrase that I used to hear a lot was, it&#8217;s better to ask for forgiveness than permission. And I think that flips a little bit as a company grows to a certain size: you realize governments are already paying attention to you. You&#8217;re already on their radar, so you might as well engage proactively.</p><p>And so right now, every government around the world is talking about AI, and they want to understand how it&#8217;s going to impact their workforce, among other things, and that&#8217;s really the role we can play. Our policy report started as a passion project. I was in Singapore about a year and a half ago, spoke at their AI summit, and had a talk track on some of the changes that were happening in AI regulation and policymaking from 2024 to 2025. My premise was that a lot of that was driven by a change in the US administration and their approach, and it resonated really well. I could feel the audience at that event, during my presentation, during the panels I did, just really hanging on to my words. And so that&#8217;s what kicked it off. On my flight home, I couldn&#8217;t sleep, I&#8217;m on my laptop, and I started jotting down some of the things I had been saying, some of it off the cuff. And that led to not just the narrative, we can get into some of that, but also pulling Deel&#8217;s data: what are we seeing? What are the trends that are happening? How are jobs being created and eliminated? What are the new roles we&#8217;re seeing? And what were some of our policy recommendations?</p><p>It started off as a passion project, but it ended up as a pretty comprehensive report we were able to take to governments and use to lead conversations. As much as we can, we try to kick off data-driven partnerships where we can be a resource: seeing how AI companies are expanding and hiring, how industries are changing jobs, say, less entry-level hiring, how they&#8217;re creating new roles, and sharing that productively.</p><p>Matt Perault (09:47)</p><p>So obviously there&#8217;s lots of anxiety about the potential disruption in labor markets. And that can come in a lot of different forms. But I think the general idea is that AI is coming for our jobs. It&#8217;s coming for your job. You won&#8217;t be employed in the future because of AI. What were you seeing in the data? How does it give you hints at how to think about answering that question?</p><p>Nick Catino (10:08)</p><p>Yeah, I don&#8217;t buy that premise, and you may not either. I think ultimately we&#8217;ve seen throughout history there are technological changes, from automation to even the impacts of global trade, and you see some jobs go away, some jobs emerge, most jobs end up changing in some capacity. That&#8217;s what&#8217;s happening now. The difference may be the time scale. Automation may have taken many decades. It might be that what automation did over 100 years, what trade did over 50, AI could do in the next 10.
So that is scary to a lot of people, and rightfully so. And we understand; you see announcements coming out that entry-level, repetitive tasks can be automated, or are starting to be automated, pretty quickly by companies. You see new types of jobs emerging, AI tutors, AI trainers. And so we commissioned research. We have our own data and lots of interesting stuff we&#8217;re finding in that.</p><p>But we also commissioned research across 22 different countries. We surveyed 5,500 business and IT leaders about the impact of AI on their companies and their workforce. And this was the end of last year, which already feels stale, by the way. Here we are a few months into the year, because everything&#8217;s moving so fast. But what we found was pretty astonishing.</p><p>We found 70% of companies had moved beyond the pilot phase of AI integration, so now it was actually embedded within their companies in some capacity. 91% of people told us roles have already changed within their companies. That was probably the stat that stood out most to me. And then we also saw two-thirds of entry-level jobs being impacted. So again, this was 5,500 business leaders across 22 countries. Country by country, the results were pretty consistent there. AI is now within their companies. They&#8217;re widely using it.</p><p>91% told us roles were changing, and two-thirds said it&#8217;s probably going to impact or slow their entry-level hiring. The changes from AI are here.</p><p>Matt Perault (12:12)</p><p>And then, I mean, you said in general, you sort of think the story that we will tell ourselves at some point in the future about the impact of AI on the labor force will be a positive one. This is some preliminary data you&#8217;re seeing that there will be a change. Are you seeing things in the data that suggest to you that your positive outlook will actually be borne out?</p><p>Nick Catino (12:35)</p><p>I think so. The types of jobs that are emerging, I mentioned earlier, AI tutors, AI trainers; we&#8217;re seeing an increase of 40% in the share of companies with AI-specific jobs.</p><p>Matt Perault (12:46)</p><p>So it&#8217;s not that the jobs are disappearing. It&#8217;s more the emphasis on the shift, not yes or no, but shift.</p><p>Nick Catino (12:52)</p><p>So look at the World Economic Forum. Every year they put out their jobs report, and it gets a lot of attention. They found something like 93 million jobs would be eliminated by 2030, not all due to AI, other factors as well, but they found almost double would be created. And you&#8217;ve seen this in past shifts. I mentioned automation and trade. Now, the unfortunate part of that, and I spent a decade up on Capitol Hill, I&#8217;ve been in tech for seven-plus years, but spent a decade working in politics, is that the perception of trade was always that while the benefits are widely shared, the pain is often very concentrated. And it&#8217;s the same thing here, I would expect. There might be types of people, those that don&#8217;t have the ability to develop AI fluency, or those at companies not supporting them in their journey to move to AI-integrated roles. So there are going to be some negative impacts for people. That&#8217;s why it&#8217;s so important that from schools and universities to the workplace, to even the government, there is a focus on AI adoption and AI fluency. That&#8217;ll help protect against some of that. We should understand the concerns about job losses are not manufactured.
Some jobs will go away, but a lot more will be created.</p><p>Technology has the ability to make life better. We&#8217;ve seen that throughout history. And I think it will. There were really difficult jobs that people had 100 years ago or 50 years ago. You think of a picture of a steel mill 100 years ago, at the turn of the 20th century. There were people all over the floor, in some cases actually turning the molten steel. My own great-grandfather died in a coal mine accident now 90-plus years ago. Those were difficult jobs that people had: really hard, taxing, and not for much money. And so if technology produces better jobs, which it has, there&#8217;s no reason to think that trend won&#8217;t continue.</p><p>Matt Perault (14:50)</p><p>So this report tries to do a bunch of different things. You outline the general issue, you provide this data, but then you also have this policy recommendations section. So what do you think? You seem like the perfect person to blend all these things. You&#8217;re at Deel now, you&#8217;re looking at the data, you&#8217;re thinking about what the company&#8217;s objectives are, but then you also, as you said, you&#8217;ve got this background on the Hill. So making this transition from data to policy recommendations is a natural one for you. What do you recommend? Like, what is the right set of things for Congress and the administration and maybe even states to do?</p><p>Nick Catino (15:18)</p><p>Yeah, the way we&#8217;re thinking about AI policy at Deel: first, we help our companies comply. So we try to stay relatively neutral, whether the law is viewed as good or bad, complicated or easy. We&#8217;re ultimately helping them. We do think it makes sense for it to be easier. But the way governments have been looking at AI regulation, or at least AI initiatives, I should say, over the last year focuses primarily on talent, industry, and infrastructure.</p><p>And so you see governments talking about talent: we need to make sure we have domestic AI talent, they&#8217;re fluent, they use AI, we succeed. Industry: we need to support our startups and businesses, help them integrate AI into their services. And infrastructure: chips and energy. So we&#8217;re thinking a little bit about it in the same way.</p><p>So on the talent side, it is important that governments support their workers, that they build that fluency, and that they make sure the skills are in place. Upskilling exists; it&#8217;s needed now more than ever. We also have a mobility service within Deel. High-skilled migration can plug that skills gap, so we&#8217;ve always been pretty outspoken in our support for that. You should want the best and brightest within your countries. And there&#8217;s certainly an opportunity to attract AI talent. So that&#8217;s certainly part of the talent aspect.</p><p>When you think about industry, that&#8217;s making sure startups and SMBs have the resources they need. So a lot of governments have proposed grants or technical assistance to make sure that those resources are in place.</p><p>And then finally on the infrastructure, where yes, governments are looking at chips and energy, we&#8217;re thinking about it in terms of governance as well.
Companies do have a responsibility to have an AI policy for responsible use in place, making sure the technology that they&#8217;re developing is ultimately used correctly.</p><p>And so our policy recommendations try to align with the way governments are thinking about it: the talent, industry, and infrastructure.</p><p>Matt Perault (17:19)</p><p>So let&#8217;s go a little deeper on the policy recommendations. We&#8217;ve advocated in lots of different contexts, and it applies in this one as well, that we should really focus at least initially on using existing laws to achieve what we want to achieve in an AI world. It may be that we need to do various different things to adjust existing laws or plug holes, but at least as a starting point, we should focus on how we can use the laws that are already on the books to protect against harms or promote good positive things in an AI world.</p><p>I would love to get your take on what are the laws on the books, what are the policy tools that exist right now to help support workers in a transition to an AI economy.</p><p>Nick Catino (18:01)</p><p>Yeah, that&#8217;s a good point. And that&#8217;s often lost on people: just because there&#8217;s no federal AI law or regulation right now, it doesn&#8217;t mean that there aren&#8217;t applicable laws and regulations. There&#8217;s a lot of them. You think about everything from existing privacy laws at the state level to anti-discrimination, employment law, consumer protections, workplace safety. There&#8217;s a lot of laws and regulations in place today that are highly applicable to the AI tools being built, even whistleblower protections, if there were to be something identified that&#8217;s happening. That certainty is what companies need to operate under. The absence of a federal AI law does not mean an absence of applicable laws. While everything&#8217;s being debated, I think companies shouldn&#8217;t get too distracted by seeing something introduced. That comes up a lot in our company. A customer will see that a random bill was introduced in a state or in Congress and they want to know what it means for them. Anyone can introduce a bill; that&#8217;s different from a law or regulation. So understand what&#8217;s on the books now and how it fits into what you&#8217;re developing, and that&#8217;s your roadmap.</p><p>Matt Perault (19:10)</p><p>In the conversations that we&#8217;ve had on this topic, I think there have actually been two things that have been really striking. One is how bipartisan the conversations are. So Republicans and Democrats, at least to me, both seem concerned about the disruption. And then the second component is that these conversations, I think, are less adversarial than others, because really both sides are trying to figure out what is the set of things that we could propose that would support workers in the event that they&#8217;re disrupted. I&#8217;m curious what the experience has been like for you when you&#8217;re taking this report to the Hill, to the administration, and, since it seems like you do a lot of international travel as well, to foreign governments. How receptive are they to the policy recommendations that you&#8217;ve included?</p><p>Nick Catino (19:52)</p><p>Yeah, first, the shift has been pretty striking from 2024 to 2025. Under the Biden administration, there was a focus on AI safety and governance and avoiding that Terminator-like outcome.
And then a lot of other countries had a similar approach, the EU with the AI Act, and you saw proposals elsewhere that focused really on AI safety. President Trump and his administration started focusing more on domestic competitiveness, and countries pretty quickly followed suit. And you see even the EU now, you could say, rolling back parts of the AI Act. So that change in the policymaking environment is as stark as on any issue I can remember.</p><p>Matt Perault (20:31)</p><p>How does it affect the issues that you&#8217;re actually wrestling with here in the report?</p><p>Nick Catino (20:36)</p><p>Yeah, it was good timing for us in the sense that we were helping our customers comply. We really started thinking about how we could lean in on behalf of our customers, how we could leverage that data to help educate policymakers in 2025. We&#8217;ve really proactively engaged. The hiring trends are what&#8217;s really exciting for a lot of governments outside the U.S. Particularly in Asia Pacific recently, governments were excited to see that some of these Big Tech and AI companies were expanding to their markets. That&#8217;s what they want to see, that development of talent. They want to see that adoption. At the same time, there is a little bit of a, dare I say, protectionist element going on right now. The U.S. is in the lead on AI, I would say, whether you agree or not, and other countries want to make sure they&#8217;re not dependent on the U.S. or China for that technology.</p><p>Kind of like with cloud right now: most of Europe uses American tech companies for the cloud. I&#8217;ve heard that come up a lot, where there&#8217;s a desire, particularly within Europe, to make sure that they have their own thriving AI ecosystem. There&#8217;s a lot of resources being invested now to ensure they&#8217;re not relying on the American companies. So that&#8217;s certainly been a through line: a little bit of championing your AI industry domestically, your AI talent domestically, your infrastructure domestically. That is the number one trend we&#8217;re seeing.</p><p>Matt Perault (22:07)</p><p>How does that affect Deel or the companies that you&#8217;re working with? It&#8217;s a business opportunity, but at the same time you have your own compliance that you have to do as well, so you have your own compliance costs.</p><p>Nick Catino (22:10)</p><p>Yeah, we certainly hear from our customers all the time about how difficult it would be to comply with 150 different regulatory regimes. There&#8217;s certainly a desire to have cross-border harmonization on whatever regulatory approaches emerge. However difficult it is, we&#8217;ll ultimately help them comply. We&#8217;re pretty agnostic in that regard.</p><p>Nick Catino (22:37)</p><p>Yeah, and you could make the case, and people say that sometimes: well, if employment law is very fragmented at the U.S. state or local level, or globally across different countries, that&#8217;s good for you all. Same thing on AI regulation. We&#8217;ve always taken the approach that simpler is actually better. In the EU right now, it was recently proposed to not require registration for entities in all 27 different EU countries.</p><p>We support things that make it simpler for our companies to do business. Ultimately, we know they&#8217;ll need us. So harmonization, simplicity, I think ultimately that&#8217;s a good thing.
That&#8217;s our position.</p><p>Matt Perault (23:15)</p><p>A lot of people, when we have conversations about regulatory patchworks and compliance costs and what you&#8217;re describing as 150 different regulatory regimes, I think people kind of roll their eyes sometimes, because they sort of think that the general thrust is just trying to get regulation down to zero. That&#8217;s not what we have advocated for as a firm, but I think people have that sense that that&#8217;s what the implication is. And often when you go quickly to what the solution is, I think it can enable you to not spend enough time really wrestling with the problem. It sounds like from your perspective, you do hear routinely from companies that you work with about the potential compliance challenges, and yes, you&#8217;ll help them when they need help, but you are hearing from them about how difficult it is just to comply. Sounds like that&#8217;s just on the labor and payroll type issues across different jurisdictions. Is that accurate?</p><p>Nick Catino (24:02)</p><p>Yeah, and clarity is key. There is a belief sometimes that, oh, maybe the tech industry doesn&#8217;t want regulation. I think, at least from our perspective, a lot of our customers just want certainty. I get more questions about the uncertainty over what&#8217;s being proposed and how it will impact them than I do once a law or regulation passes. Because then at that point, well, that&#8217;s just what you have to do. That&#8217;s how you comply. It&#8217;s always the uncertainty that is pretty taxing and creates issues for a lot of our companies. So certainty should be the goal.</p><p>Matt Perault (24:31)</p><p>So I&#8217;m curious, going back to you leading this policy function and just thinking about how you&#8217;re going to prioritize different things. You&#8217;ve had this experience with the report. What does that suggest about how things look going forward? Like, are you guys going to continue to do regular labor updates or similar kinds of research projects?</p><p>Nick Catino (24:49)</p><p>Yeah, Deel just hired its first economist, and part of the goal is to move away from what had been a little bit of an infrequent cadence of putting out our data and storytelling. I think of, in our space, employment: ADP puts out their monthly jobs report, and you see the entire market and economy, at least domestically here in the U.S., follow the ADP report that comes out the week before the official U.S. government jobs report.</p><p>Well, we&#8217;re really the only company that has access to all worker types globally. Now, we&#8217;re still scaling. We certainly don&#8217;t have the depth of stats that ADP would have here in the U.S. But we have insight into some global trends across all worker types that likely no one else has. That&#8217;s certainly where we can be a resource to governments and just to other external stakeholders.</p><p>Matt Perault (25:44)</p><p>What is the thing that you, running this policy function, most want governments to understand that it seems like they don&#8217;t? The thing that constantly comes up in meetings, where you hope over time that there will be a shift in the conversation?</p><p>Nick Catino (25:57)</p><p>My goal with policy is that we&#8217;re viewed as a trustworthy partner. And we have so many of not just the customers, the businesses that we&#8217;ve talked a lot about today, but the workers themselves.
And so we can be a resource as they think about not just how to regulate the tech industry or AI companies or businesses; we also see what workers have in terms of benefits and what they still need, particularly in the US, from health insurance to HSAs and FSAs, while other countries have pension plans. There&#8217;s so many elements that are worker-centric. We have one and a half million workers on our platform already. We&#8217;re getting closer and closer to the workers. The customers are the businesses, but we&#8217;re really starting to become more worker-centric. And I&#8217;m excited for us to be a resource on how we can help ensure that workers have the resources they need and that they have a voice with governments as well.</p><p>Matt Perault (27:00)</p><p>So in this conversation, we focused a lot on your workforce trends report, but you&#8217;ve actually recently released new data. Can you talk a little bit about that?</p><p>Nick Catino (27:08)</p><p>Yeah, Deel is releasing its global hiring report; it&#8217;s been on an annual cadence in the past. In the next one, which will be out by the time this is heard, we see AI trainer positions having grown 283% cross-border in 2025. We saw most of these jobs, the majority of them actually, in the US. And for these AI trainer roles, we&#8217;re seeing something like 70,000 workers across 600 companies. They&#8217;re helping develop and refine AI systems. And that&#8217;s the human feedback that&#8217;s ensuring that the systems are learning and getting better based on the prompts they&#8217;re getting and the output. And so it&#8217;s exciting to see such tremendous growth. We talked a lot about jobs disappearing. Well, here&#8217;s a classic example of jobs emerging. And that&#8217;s a considerable amount, 70,000 just on the Deel platform alone. The majority of them were in the US, as I said already. And an incredible number of software developers and AI engineers are being hired globally right now.</p><p>Matt Perault (28:16)</p><p>I think one thing that people are hungry for now is metrics; people want to understand what is happening in labor markets. Sounds like this is a report that you&#8217;ve done this year and you&#8217;ve done in the past. Are you expecting to do it annually going forward?</p><p>Nick Catino (28:29)</p><p>Yeah, we&#8217;ve done it annually in the past. This will be at an annual cadence right now, but we need to be doing this more often. So we&#8217;ve been supplementing it with blogs or short white papers and really just trying to increase the cadence of how we&#8217;re sharing our data, because there&#8217;s certainly a desire for it from governments. And when you talk about policy, that&#8217;s not just the officials themselves; it&#8217;s also the think tanks and academics. And we have a series of academic partnerships underway where there are universities using our data, of course anonymized and aggregated, no personal information shared, but that way they can also see trends and use that data to decide what it means, what it&#8217;s telling us.</p><p>Matt Perault (29:13)</p><p>So you came to Deel a couple of years ago, and you&#8217;ve talked about growing your team and what you&#8217;re trying to do here. Can you talk a little bit about your background? How did you end up at Deel?</p><p>Nick Catino (29:24)</p><p>Yeah, I started my career in journalism first.
I was a high school kid, played sports, was on the high school newspaper, and had the opportunity to go write for a local newspaper that&#8217;s pretty big. It was a top-80 paper in the U.S., in Allentown, Pennsylvania. And I was writing the sports stories about high school games. I was even in a newsroom on the evening of 9/11 when President Bush addressed the nation, and it was just one of those classic newsroom moments that really kicked off that passion for me, and I somehow ended up going from journalism to politics.</p><p>Instead of maybe writing about it, I wanted to be part of the actual newsmaking, and spent a decade on Capitol Hill, mostly on the Senate Banking Committee. I led the International Finance Subcommittee and worked for a bunch of members while I was up there, and had the opportunity while on Senate Banking to negotiate a major law in 2018 that made changes to Dodd-Frank. And that was the first major banking legislation at the time in nearly a decade.</p><p>And after that, I decided I did not want to go become a big bank lobbyist. I knew the way I used financial services had changed. On my phone, I used Betterment and had Robinhood, Coinbase, and Venmo. It was very different from what my parents were using. And I discovered this payments company called TransferWise out of the UK that was looking for someone in the US, New York City-based, to lead their consumer education campaigns about the cost of sending money abroad and the hidden fees that exist. Fortunately for me, they hired someone DC-based, and it was a bit like a policy job with grassroots consumer education. I ultimately helped scale that function there, led the global team within about a year of joining what became Wise, spent five years there, and had the opportunity to go to governments and help them modernize their payment systems.</p><p>We should have real-time payments. In the US, it&#8217;s finally here, though not fully adopted yet. It still takes a couple of days to get a paycheck. We got to raise awareness about the hidden fees that exist in cross-border payments, make the case that fintechs, non-bank payment companies, should have access to payments infrastructure, and open banking was something we spent a lot of time on, not just in the U.S., but around the world.</p><p>About two and a half years ago, I finished my time at Wise when Deel was looking for someone to help build their function. And I&#8217;d really enjoyed that zero-to-one phase of my time at then-TransferWise, where you&#8217;re building a function and helping build the policy muscle within a company, explaining why it&#8217;s smart to proactively engage with governments. That&#8217;s what the smartest companies do. We talked earlier about asking for permission rather than forgiveness afterwards. I do think it can be a growth lever when you&#8217;re proactively engaging with governments. So it&#8217;s been a great journey at Deel. We&#8217;ve continued scaling and growing and building those relationships around the world.</p><p>Matt Perault (32:27)</p><p>It seems like you really deliberately chose small, not big. You were thinking, what is this Hill experience? What can I do with it? And big banks were kind of the natural path, and you deliberately chose small. So can you talk a little bit more about just what it&#8217;s like to be at a startup? How is that different, you think, from how the big path would have looked?
And what advice do you have for startups or for people who work on policy at startups for thinking about how to grow this part of their work?</p><p>Nick Catino (32:53)</p><p>Now, I&#8217;ve been in tech seven years, so it&#8217;s second nature. I&#8217;ve worked remote mostly for seven years and spent a lot of time traveling, including time in Wise&#8217;s New York office. But I remember when I left Capitol Hill after a decade of wearing suits and being in the office every day, and then a week later, I&#8217;m going on the train up to New York. I&#8217;m overdressed, and I felt very conscious of being overdressed. And now I&#8217;m going in a hoodie and jeans to the New York office.</p><p>So I certainly felt like a fish out of water at first when I joined Wise, but it was exciting to be part of their journey. And so one of the lessons that I learned, both in building the function at Deel now and when I was at Wise, is, yes, we talked earlier about how it&#8217;s smart to engage with governments. Let&#8217;s take that as a given. It&#8217;s internally building that muscle that is really difficult. You&#8217;re helping your colleagues understand why you would want to be attuned to some of the geopolitical or macro shifts happening, and how there&#8217;s a chance maybe to influence some of the policymaking as it goes through the process. You know, policymakers are hearing from big banks or even big tech nonstop. They want to hear from, say, Little Tech. They want to hear from the startups.</p><p>That was part of the motivation when I was at Wise: we were a co-founder of the U.S. Financial Technology Association, which is now a massive player here in D.C. They have a big staff, and all the big financial technology companies are part of that trade association. It&#8217;s been incredible to see them grow. But at the time when I joined then-TransferWise, it felt like fintech did not have a voice in Washington. So I think from an influence perspective, to make sure you have a seat at the table, it is important that startups identify other like-minded companies. And if you face an existential threat, say you understand the regulatory landscape and there&#8217;s something that could have a negative impact on you, or there&#8217;s a level of uncertainty and you need certainty, you should find the other stakeholders that would care about this and build a little coalition. And you should probably identify if there&#8217;s a trade association in Washington that can help you go to the government and put your voice at the table, and things might turn out in your favor. Because that regulatory uncertainty, or not knowing what&#8217;s coming out of Washington, if it could have a material impact on your business, that&#8217;s a pretty big problem. You need to see that coming. And it&#8217;s hard when you&#8217;re a team of 50 people or 100 people. So that&#8217;s my advice to startups: make sure you&#8217;re aware of what&#8217;s coming that could materially impact your business so that you&#8217;re planning for it.</p><p>Matt Perault (35:33)</p><p>Nick, thanks so much for coming on the podcast.</p><p>Nick Catino (35:35)</p><p>Great. Yeah, thanks for having me. Really enjoyed our conversation. Look forward to being a subscriber to your feed from here on.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice.
Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately. </em></p>]]></content:encoded></item><item><title><![CDATA[Cyber Resilience in an AI World]]></title><description><![CDATA[Anne Neuberger, Jai Ramaswamy, and Sam Jones on what changes when cyber attacks and defense run at machine speed, and what&#8217;s required for protecting critical infrastructure.]]></description><link>https://a16zpolicy.substack.com/p/cyber-resilience-in-an-ai-world</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/cyber-resilience-in-an-ai-world</guid><dc:creator><![CDATA[Jai Ramaswamy]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:35:42 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191438646/d05dc25b09945a92eae211a65e6d84f1.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>One pillar in <a href="https://a16zpolicy.substack.com/p/a-roadmap-for-federal-ai-legislation">our roadmap for federal AI legislation</a> outlines a policy framework for protecting against catastrophic cyber and national security risks. A core part of that work is building a better understanding across government of what changes as we move toward an AI-vs-AI cybersecurity landscape where both attackers and defenders can operate at machine speed.</p><p>In this conversation, Anne Neuberger and Sam Jones join Jai Ramaswamy to go deeper on what this shift looks like in practice. Neuberger draws on nearly two decades in government&#8212;including serving as deputy national security advisor for cyber and emerging technology&#8212;to explain how AI is transforming the threat landscape, most notably making attacks faster, cheaper, and continuous at scale. Jones brings the builder&#8217;s perspective as CEO and cofounder of Method Security, where his team is building autonomous cyber systems for both offense and defense, observing firsthand how AI is accelerating everything from routine tactics to exploit development.</p><p>Together, they discuss what &#8220;cyber resilience&#8221; means in an AI world: continuous testing and red-teaming that was previously cost-prohibitive, clearer benchmarks for critical infrastructure, and faster recovery when disruptions happen. They also walk through the policy measures that can help defenders keep pace, including enabling high-fidelity information sharing and modernizing procurement so the U.S.
can deploy the defensive capabilities needed for a new cyber era.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;82679afc-5489-46d4-9b79-cf90e64dc585&quot;,&quot;duration&quot;:null}"></div><p>Topics covered:</p><p>02:01: How AI changes the threat landscape</p><p>06:23: Net new risks in an AI-vs-AI cyber world</p><p>10:06: Building trust to deploy new technology in no-fail systems</p><p>12:40: Cybercrime at machine speed</p><p>16:03: Who benefits more from AI: attackers or defenders?</p><p>18:14: Tactics to remove friction for defenders</p><p>21:11: Real examples of incidents where AI could have changed outcomes: Colonial Pipeline and Change Healthcare</p><p>26:03: What cyber resilience means in an AI world</p><p>29:42: Measuring resilience</p><p>35:44: Information sharing and antitrust: lessons from financial services and telecom compromises</p><p>43:44: The builder&#8217;s view: what Method Security is building for offense + defense</p><p>48:12: Little Tech realities of building with a small team and selling into government</p><p>51:38: The role of procurement in ensuring defensive systems keep pace with adversaries</p><p>54:17: What&#8217;s next: in-year buying flexibility and closing thoughts</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>This transcript has been edited lightly for readability.</em></p><p>Anne (00:00)<br>AI fundamentally gives attackers the ability to jiggle every virtual doorknob continuously.</p><p>Sam (00:06)<br>The institutions and networks and enterprises that underpin our way of life need to be tested continuously, and not just in some lightweight scan: seriously acting as an aggressor and seeing if there are holes throughout the enterprise.</p><p>Anne (00:21)<br>Why the threat landscape has fundamentally changed is also why the cyber defense landscape will potentially fundamentally change for the better too.</p><p>Jai (00:35)<br>Welcome to the a16z AI Policy Brief. Today we have Anne Neuberger and Sam Jones. Anne and Sam, would you mind just introducing yourselves to the audience?</p><p>Anne (00:44)<br>Absolutely, sounds like a first grade literature book, Anne and Sam.<br>I&#8217;m Anne Neuberger, I&#8217;m a senior advisor at Andreessen Horowitz and a distinguished fellow at Stanford. I was previously deputy national security advisor for cyber and emerging technology in the last administration. It&#8217;s great to be here.</p><p>Jai (01:01)<br>Welcome.</p><p>Sam (01:02)<br>Sam Jones, I&#8217;m the CEO and co-founder of Method Security. I&#8217;ve been working in defense tech my entire career, including cybersecurity, and have been doing AI since it became a meme and such a hot topic. Excited to get into it today.</p><p>Jai (01:15)<br>Well, really appreciate having both of you on the podcast today.
Now, Anne, Sam, both of you have written, I think extensively, on the issues surrounding AI and cybersecurity. Very hot topic, clearly one of the risks that I think people are most scared about. And I think the real question is whether this technology really changes the threat landscape, and if so, how, and if not, why not? But I&#8217;m going to leave it pretty open-ended just so we can get the conversation started. Anne, given your extensive experience, I&#8217;d love to hear your thoughts on that landscape.</p><p>Anne (02:01)<br>Absolutely. AI fundamentally transforms the threat landscape, because typically an attacker, to achieve their objective, whether it&#8217;s stealing information or disrupting a system, needs to do some degree of study of the network, of the administrators who have rights on the network, of where the vulnerabilities on the network are, and then of the right way, once you get a foothold, to navigate through and achieve the attacker&#8217;s objective.</p><p>AI fundamentally gives attackers the ability to jiggle every virtual doorknob continuously. So even if a new patch comes out and the company is a bit late, or they deploy code that has new vulnerabilities in it, it can be found so much quicker by an attacker who can just continuously be looking for misconfigurations in the network or coding errors that can be exploited, and do so continuously at scale across a target set. So it fundamentally transforms that. I will say that what&#8217;s so interesting about cyber is that the first steps of either exploiting or defending a network look exactly the same. It&#8217;s finding the mistakes, the misconfigurations and vulnerabilities. What you then do with it is the key. So I think that what I just talked about, why the threat landscape has fundamentally changed, is also why the cyber defense landscape will potentially, if we can get our act together quickly, fundamentally change for the better too.</p><p>Jai (03:24)<br>Sam, your thoughts?</p><p>Sam (03:26)<br>From my perspective, I&#8217;m constantly building with this stuff, and as Anne aptly pointed out, regardless of whether your objective is to attack or to defend, those first steps are the same. And that&#8217;s what we build systems to do, whether it is to defend or to attack. My team and I are building with this stuff constantly. And we&#8217;ve seen a huge evolution over the last two and a half years, which is the lifespan of Method Security.</p><p>Last year, I would characterize how the threat landscape was changing as purely known tactics, techniques, and procedures accelerating: everything was just getting faster, but it was all known things that were happening. And that still poses a great threat to old defensive systems that can hardly prevent basic attacks and basic threats. And we were seeing a lot of criminal groups that maybe were novice in their skill level just be upgraded. And that poses significant challenges for enterprises that have bureaucracy in place to adopt and deploy defensive techniques, because they&#8217;re just purely outmatched at that point.</p><p>But this year, what we&#8217;re seeing is pretty different. AI is now being quite effective at exploit development. And so it&#8217;s actually helping generate new tactics, techniques, and procedures to complement that existing acceleration.
It&#8217;s getting doubly bad in the threat landscape, which is all the more reason that achieving resilience and a defensive posture is an urgent problem.</p><p>Jai (04:53)<br>So I think what I&#8217;m hearing from both of you is that the way AI changes the threat landscape is &#8211; I was an old cybercrime prosecutor, and the days of Matthew Broderick, lone hackers, are long gone. These are organized, transnational criminal organizations, nation state actors, and a combination of the two.</p><p>But what I&#8217;m hearing is that one of the differences is this now allows a less sophisticated class of actors to get involved, because the level of sophistication you need comes down, because AI is commoditized intelligence and therefore a lot more people can do it. And then speed, scale, all those other things really are different.</p><p>One of the things that it seems to me is in a sense good about this technology is that the capability aspect of AI is not asymmetric. There are some technologies that will benefit attackers more than defenders. But with this technology, I think what I&#8217;m hearing is that there&#8217;s a sort of symmetry: both attackers and defenders can use this. And so talk to me a little bit about that. And based on that, what are the real net new risks? Because I think it always helps to think not just, what are the risks, but what&#8217;s new about this technology? What do we need to worry about and then deal with from a policy perspective?</p><p>Anne (06:23)<br>So I want to answer the two really good points, and the two questions are fundamentally linked, because AI does make attack far easier and much more continuous, and also continuously evolves the types of attacks that are possible. And as such, defenders who aren&#8217;t using AI in defense are in a much worse position, because a human against a machine in this game just doesn&#8217;t win. Humans and machines fundamentally can. But there are certain parts of cybersecurity where speed now matters much more, and they have to be automated.</p><p>Think about this. We&#8217;ve long put in place, particularly post the rise of insider threats, monitoring to say what&#8217;s a pattern of behavior for a particular role: where a person works, what their typical work pattern is, when they come to work, what kinds of data they are authorized to look at. If you&#8217;re working on Asia bond markets, it&#8217;s something odd if you&#8217;re looking at South American equities or whatever it is. But in the past, what we would do, to avoid false positives and potentially disrupting somebody&#8217;s work, is have that generate an alert sent to a network administrator who would investigate and say, was this legitimate, somebody just working on something, a new project, or was it something malicious? Now, there&#8217;s a need to automatically turn off the account and investigate afterwards, because you can&#8217;t risk that speed of what&#8217;s possible in attack. So that is the net new risk: very new kinds of adaptable attacks, and the need for defenders to deploy AI-based defense with speed and, frankly, to weigh the risk equation differently. We&#8217;ve often been legitimately worried about false positives.
I think now, because of the speed issue, we&#8217;ve got to really turn the dial on automating as much of defense as possible, and then ensuring that we can do investigations rapidly to turn things back on if needed.</p><p>Jai (08:28)<br>I mean, that puts a premium on the types of solutions that are out there, it seems to me. And Sam, you&#8217;re building some of these solutions, but that can be really disruptive. I mean, if I&#8217;m somebody who needs this information, particularly in the financial markets or something, where actually it&#8217;s not just for the attackers but for the business, real time is real cost. How do you think about developing solutions in that kind of environment?</p><p>Sam (08:54)<br>I work back from the buyer psychology, if you will, whether that&#8217;s a Fortune 500 enterprise CISO or a military DCO or OCO commander. All of those groups have to accept serious risk when employing new technologies, because their domain is no-fail. You know, they&#8217;re not making sloppy internal development apps that could be fun. They&#8217;re not making images or art. If their work goes down, they get fired, for a specific reason&#8230;</p><p>Jai (09:26)<br>Or there&#8217;s an actual threat to the infrastructure or to safety that&#8217;s at risk.</p><p>Sam (09:30)<br>In which case they get fired anyway. So they rightfully are skeptical of employing too much new technology too quickly. But at the same time, they also know they need to adapt to win. And so what we have been trying to do is come from the stance that AI is almost like a raw intelligence material that is inherently not trustworthy. And so how do you build systems around that raw ingredient, which no one should trust off the shelf, such that it can be employed in production?</p><p>Jai (10:00)<br>Can you dig a little more deeply into what it means to be not trustworthy? What do you mean by that in this realm?</p><p>Sam (10:06)<br>For a no-fail security system, whether that&#8217;s an inline security control that&#8217;s going to make a decision classifying this as good or bad behavior and shutting down an account or something like that, or deciding to delete data, whatever it might be, leaving things up to too much nondeterminism is really scary for folks for whom this is their job and there&#8217;s no tolerance for failure.</p><p>AI, especially this class of large language models, is so non-deterministic and evolving so quickly that you&#8217;re not going to take ChatGPT off the shelf and decide to put it into a critical security control. There&#8217;s just no way. And so there have to be guardrails before and after and all around anything at the AI core of a security product. That&#8217;s what we&#8217;ve tried to do very thoughtfully, both for defense and offense. And it&#8217;s going over well, but it&#8217;s a lot more to build than just making a really cool-looking prototype on the internet. We&#8217;ve been trying to architect for what is actually viable for someone who will get fired if this doesn&#8217;t work.</p><p>Jai (11:14)<br>It&#8217;s interesting you raised that point, because oftentimes when we&#8217;re talking about AI, the benchmark seems to be perfection. But I want to push you on that a little bit, because human beings make mistakes all the time.
But are you saying that the psychology of organizations is such that when a human being makes the mistake, you&#8217;re going to be forgiven for it, because they say, we&#8217;re giving you a certain amount of tolerance because you&#8217;re a human being. But if you put money on the line or you start relying on technology, the standard is different than it is when judging a human being.</p><p>Sam (11:48)<br>A little bit, and we&#8217;re probably going to see some similar psychology with Waymos and Teslas going out on the road, which are definitively safer, but, you know, we&#8217;re accepting a different type of robot risk. I think what&#8217;s different about cyber is that these technologies can be deployed so prolifically across the enterprise that they can make so many decisions that even lower error rates compound, to the degree that it&#8217;s almost like you&#8217;re not in control.</p><p>Jai (12:12)<br>So there is a new risk in that. The compounding effect is a net new risk given the speed.</p><p>Sam (12:18)<br>Exactly. Ultimately, what security teams want is to be in control. And AI off the shelf does not allow them to do that. And so there&#8217;s a lot of systems around it. And that&#8217;s kind of what we focus on.</p><p>Jai (12:28)<br>And you&#8217;ve both talked about speed. What other things, now that we have this sort of compounding risk, any other net new risks that you can think of that this technology introduces?</p><p>Anne (12:40)<br>You know, I&#8217;m struck by your comment about being a former cybercrime prosecutor.</p><p>I&#8217;d love to ask you back that question, but first, reflecting on it: when we think about cyberspace, there&#8217;s really three sets of actors. You mentioned it at the beginning. There are pure criminals out for financial gain; the rise of ransomware came with an entire, pretty sophisticated ransomware ecosystem in the last few years: a first set of groups who get access and sell that access, a second who build exploits, a third that does negotiations, a fourth that does the processing of the payments and the decryption. We&#8217;re talking billions of dollars in ransom revenue that really drove this ecosystem. And the challenge of pursuing them was, first, you have some countries that offer safe haven to the actors themselves, and other countries that offer safe haven to the virtual asset service providers who don&#8217;t follow anti-money laundering rules. So actually the prosecution of the much more complex cybercrime ecosystem became far harder. That&#8217;s the reason you saw groups of countries working together across borders: transnational rules don&#8217;t really exist, so they&#8217;d essentially be cobbling together an international system to deal with a criminal problem that crosses borders. Obviously, there&#8217;s also the set of threats from countries seeking to disrupt at a convenient period of time. And then there&#8217;s the loose gray space of countries who usefully use hacktivists or actors to achieve their goals.
And I think speed, while it can help, say, in using AI to map the malicious infrastructure used for an attack, has also made it far harder to find the evidence, because attackers can clean up that infrastructure far more quickly as well.</p><p>So the attack infrastructure has become easier to set up and tear down, while our laws and policies for actually pulling the thread to investigate it, bring folks to justice, and begin to deter it remain pretty much the same. That speed is now playing against our ability to deter and counter these actors.</p><p>Jai (14:48)<br>I think that&#8217;s right. In the cybercrime space, one thing that really changed the game was the Budapest Convention. Sharing information in a prosecution is hard. Cybercrime happens in real time; legal processes take months, sometimes years, to put in a mutual legal assistance request. So I think there probably does need to be some additional infrastructure built to deal with the even now increasing speed.</p><p>But in some sense, my thought is, and I&#8217;d love for you to validate this or not: as we move to what you described, Anne, as an AI-versus-AI world (I&#8217;m going to date myself, but I think of Mad Magazine&#8217;s Spy vs. Spy when I think of AI versus AI), where AIs are essentially fighting each other, measure against countermeasure, offense against defense, how does AI tilt the playing field? Does offense have an advantage? Does defense? I&#8217;d love your thoughts on who benefits more, since both will benefit.</p><p>Anne (16:03)<br>Yeah, and I&#8217;m looking forward to hearing Sam&#8217;s thoughts too. In short, defense is fundamentally harder, and that&#8217;s why I think defense benefits more. An attacker has to find just one way in, and especially on a large network, especially a hybrid network, some of it on premises, some of it in the cloud, there&#8217;s likely to be a misconfiguration, a system that happened to be down when the patch went out, or vulnerable code that got pushed. So it&#8217;s far harder for defense to ensure that that entire space is always safe.</p><p>On the other hand, so much of cyber defense today is still manual: bringing together lots of different data to find what&#8217;s anomalous. Once you figure out what&#8217;s anomalous, you can figure out what&#8217;s malicious and then put in place the actual algorithms to address it in the future. So given that defense is harder, and given that defense today is still far more manual, I actually think AI will make the bigger difference for defense, even as both sides become far more capable through AI.</p><p>Jai (17:07)<br>So what I&#8217;m hearing from you is that there&#8217;s more low-hanging fruit on the defensive side. We haven&#8217;t, in a sense, maximized our efficiency on defense because it&#8217;s so human-intensive, maybe for organizational and institutional reasons, maybe just because it&#8217;s harder from a policy perspective to adopt defensive measures. I&#8217;d love to think through that a little more.</p><p>Do either of you have thoughts on how we get policy out of the way so that it doesn&#8217;t become an impediment for companies, for the defenders, to adopt these technologies? Because we know the criminals and the nation-states have no restrictions on whether they can adopt this stuff.
And so they&#8217;re adopting it in real time, whereas there are sometimes legal, policy, and maybe reputational reasons why the defenders hesitate to adopt this technology. What policy levers do we have, simply to get obstacles out of the way, so that defenders can also adopt these things in real time?</p><p>Anne (18:14)<br>You know, I&#8217;ll start with a quick thought that builds on something Sam said earlier in this conversation, which is that AI is going to make attackers and attacks far more capable. So defenders have got to move out with AI. If CEOs say that to their security teams, to Sam&#8217;s earlier point, it gives those teams the mandate: you&#8217;ve got to move fast. There&#8217;s risk in doing and risk in not doing, and the risk of not doing exceeds the risk of doing here. So move out, and I&#8217;ll have your back. In some early rollouts we run into issues, because with any new technology, as you&#8217;re deploying it, the tech is immature and something can go wrong. What strikes me is that defenders are sometimes rightfully worried that there will be issues in early adoption and they&#8217;ll be caught up in them. So I think it takes that push to say, we have no choice, we have to deploy, plus ways for companies doing those early deployments to compare notes. What have we learned about TTPs? What have we learned about bringing together different kinds of data? Which products expose the data needed so that you can actually use AI to bring it together and run the next level of defense on it? What are the most effective code vulnerability scanners that generate patches you can trust, so you can deploy those quickly too and close that cycle?</p><p>So a big part of it, first, is executive leadership recognizing the dynamic we&#8217;re talking about and giving top cover for rapid deployment. Second is the ability for companies to come together and share what they&#8217;re learning with a level of fidelity.</p><p>Finally, on the policy side, there are areas where regulatory barriers stand in the way of that kind of data sharing. HIPAA is a good example, with its rules around health information sharing, as are GDPR and other EU rules that prevent data sharing, particularly cross-border. Addressing those is going to be key to getting the tech to work in the way defenders need.</p><p>Jai (20:13)<br>Yeah, on that regulatory issue, one thing that has been kind of disappointing to see, in the financial space specifically, is that fraud detection, and the use of AI in that world, way outstrips anti-money laundering. A large part of it is the regulatory barriers: there are far fewer regulatory barriers to adopting AI in fraud, whereas in AML there&#8217;s model validation and other concerns, and it can take months to deploy new AI. Those are the kinds of barriers I think about. But in the incidents you were exposed to, given your senior government experience, are there any you can talk about where AI and the technologies you&#8217;re seeing now would have made a meaningful difference on the defensive side?</p><p>Anne (21:11)<br>Yes, very much so.
I&#8217;ll give two examples, one of which, by the way, is responsive to your AML point.<br>I recall the Colonial Pipeline attack of May 2021. That really opened a lot of people&#8217;s eyes to the fact that cyber attacks could not just steal data but fundamentally disrupt everyday lives, right? Cars queued up at the gas station; people couldn&#8217;t get gas. Frankly, the one pipeline that runs down the Eastern Seaboard shut down for almost a week. That&#8217;s a major impact.</p><p>At the time we were trying to figure out the scale of the issue, because many companies that were affected would pay a ransom quietly and we didn&#8217;t have visibility. Things changed after that, but I recall a conversation with the CEO of a large bank. He said to me, Anne, we know the ransom payments. And I said, really, how do you know? He said, well, suddenly you have a company that never touched the crypto market before; there&#8217;s a cyber attack that&#8217;s public, and they come in buying crypto and immediately moving it. It was just interesting.</p><p>Similarly, to your question, when I think about the most disruptive cyber attacks of the last few years, one of them hit a company called Change Healthcare, a major division of UnitedHealth Group. A third of all medical transactions in the United States go through them: a hospital billing, a pharmacy dispensing a prescription. Essentially, the clearinghouse was hit by a cyber attack, so all those transactions stopped happening.</p><p>At the time, and the CEO said this publicly, it was caused by not deploying multi-factor authentication, authenticating a user with something other than just a password. We know passwords have been compromised so many times; that is what enabled the attackers to get in. The network was also not properly segmented and configured, so the attacker was able to move laterally. That is the kind of thing where AI helps. First, an AI monitor that looks for accounts that have just passwords. Find them, because today you shouldn&#8217;t have accounts, certainly not admin accounts, with just passwords. Find those and lock them, which forces them to be changed. Similarly, AI can identify a network, figure out where it can be segmented, and optimize it so that an attacker who gets a foothold can&#8217;t move along it. Finding vulnerabilities at scale: all of that is achievable with AI today in a way that, for a complex network, previously took a lot of manual work.</p>
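<p><em>A minimal sketch of the kind of automated monitor Anne describes: find accounts protected by a password alone and lock the privileged ones. The inventory schema and actions are illustrative assumptions, not any particular product.</em></p><pre><code># Sketch only: flag password-only accounts; lock admin ones immediately.
# The account-inventory format here is hypothetical.

def audit_accounts(accounts):
    """accounts: iterable of dicts like
    {"name": "jdoe", "factors": ["password"], "is_admin": False}"""
    findings = []
    for acct in accounts:
        factors = set(acct.get("factors", []))
        if factors == {"password"}:   # no second factor at all
            action = "lock" if acct.get("is_admin") else "flag_for_mfa"
            findings.append({"account": acct["name"], "action": action})
    return findings

inventory = [
    {"name": "admin-legacy", "factors": ["password"], "is_admin": True},
    {"name": "jdoe", "factors": ["password", "totp"], "is_admin": False},
]
for finding in audit_accounts(inventory):
    print(finding)   # {'account': 'admin-legacy', 'action': 'lock'}
</code></pre>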
<p>Jai (23:37)<br>So again, the automation-versus-manual dichotomy. And Sam, from your perspective, where does AI help the most in a real incident? Is it time to detection? Understanding? Automation of processes? How do you think about where AI becomes most effective?</p><p>Sam (23:59)<br>It&#8217;s most effective right now in processing huge amounts of disparate data sets and in the higher tiers of the investigation. It&#8217;s certainly useful for time-to-detect in some sense, but a lot of the time, to detect something you have a lot of weak signals that you&#8217;re trying to aggregate into a stronger set of signals, and there&#8217;s still a lot of data plumbing to make that possible. It&#8217;s not like you can just shove all of your endpoint logs into Claude Code and ask, is this good or bad? There&#8217;s still a lot of non-AI data engineering work to even make that possible. So in reality, a lot of the real-time stuff is pretty hard. But fusing contextual data sets, threat intel, data that&#8217;s been shared with you, past incidents, what you&#8217;ve done across Jira or some other ticketing system, what did we do in this case, is this normal? That&#8217;s pretty powerful, and basically a free unlock for most teams.</p><p>But a lot of the time these AI systems need complementary software systems that are just hard and messy to build, and that prevents the full unlock of AI, if that makes sense. When someone is researching adversaries, what they&#8217;ve done in the past, against a clean set of data that&#8217;s been combed through on the incident, that&#8217;s basically free. But anything at larger scale is still quite difficult from a technical perspective.</p>
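<p><em>A minimal sketch of the &#8220;data plumbing&#8221; Sam describes: deterministically aggregating weak per-host signals and joining in threat intel and past tickets before any model sees the data. All field names and thresholds are illustrative assumptions.</em></p><pre><code># Sketch only: fuse weak signals per host. Only the fused, compact context
# for corroborated hosts would ever be handed to an LLM. Schema is hypothetical.
from collections import defaultdict

def fuse_signals(endpoint_events, intel_iocs, past_tickets):
    by_host = defaultdict(lambda: {"signals": [], "intel_hits": 0, "history": []})
    for ev in endpoint_events:                 # each event is one weak signal
        ctx = by_host[ev["host"]]
        ctx["signals"].append(ev["signal"])
        if ev.get("indicator") in intel_iocs:  # deterministic intel join
            ctx["intel_hits"] += 1
    for ticket in past_tickets:                # what did we do last time?
        if ticket["host"] in by_host:
            by_host[ticket["host"]]["history"].append(ticket["resolution"])
    # Escalate only hosts with corroboration: several signals or an intel hit.
    return {host: ctx for host, ctx in by_host.items()
            if len(ctx["signals"]) >= 3 or ctx["intel_hits"] >= 1}

events = [{"host": "web-1", "signal": "odd-login", "indicator": "1.2.3.4"},
          {"host": "web-1", "signal": "new-service", "indicator": None}]
print(fuse_signals(events, intel_iocs={"1.2.3.4"}, past_tickets=[]))
</code></pre>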
<p>Jai (25:27)<br>On that, I want to double-click a little, because I&#8217;d love to get your view on what cyber resilience looks like 10 years down the road, when some of this stuff is actually out there and being used. And then a compound question, which I&#8217;ve always been taught we shouldn&#8217;t ask, but I&#8217;ll ask anyway: is this just a question of deployment, or, and you sort of mentioned this, is there actual innovation and building that needs to take place? If it&#8217;s not just deployment, what are the things we still need to invent to be able to do this?</p><p>Sam (26:03)<br>I&#8217;ll answer your first question first, with maybe some policy twists on where I think it&#8217;s most important to achieve resilience, which also happens to be where it&#8217;s most difficult. Honestly, I don&#8217;t even know what&#8217;s going to happen in 10 years; I look a year or two ahead. But we need to be tested. The institutions and networks and enterprises that underpin our way of life need to be tested continuously, and not just with some lightweight scan that says, hey, you don&#8217;t have any internet exposure, you&#8217;re good.</p><p>That isn&#8217;t actually helpful at all. Seriously act as an aggressor, see if there are holes throughout the enterprise, and use AI to do it, to make sure you&#8217;re okay. And enterprises and organizations playing in critical industries, this is where policy comes into play, probably need some kind of continuous testing or red teaming, because otherwise I just don&#8217;t think you should be able to play in some of these critical spaces, whether it&#8217;s energy or, in another recent incident that was noticeable to most people, the airlines that were targeted about a year ago. Attackers used a third-party contractor to infiltrate, get access to systems, and basically shut everything down, and nobody could travel in the U.S. for a while. It was hacking 101, it wasn&#8217;t even hard, but there were so many doors open. You need to be testing those things.</p><p>And yes, a Fortune 500 airline should have the budget to do that, but they also need to be held accountable, if they&#8217;re going to play in these sectors, to actually enforce it. And then on the commercial side, I think it all comes back to the board of directors and the management team: this is no longer optional. We have to do this. And yes, there are some policy pieces; reporting breaches to the SEC is actually useful as a forcing function. But you also need the mindset that we&#8217;re going to get hit hard every single quarter if we&#8217;re going to play in this market, and we&#8217;re going to be up to par. I think that&#8217;s something we should actually be looking at doing.</p><p>Anne (28:10)<br>I just want to add, because I love Sam&#8217;s point so much. The hardest problem I used to think we had in cybersecurity was: what is measurable resilience? I recall before Russia&#8217;s invasion of Ukraine, we knew an invasion was likely, and at the time, working in the administration, the President turned to me and said, Anne, how likely is it that Russia will conduct offensive cyber attacks against the US? At that point, we had mandated the first-ever minimum cybersecurity requirements for pipelines, after Colonial Pipeline. But for every other sector, the US government had zero visibility into any required minimum cybersecurity baseline, so we actually didn&#8217;t have an answer. What I loved about Sam&#8217;s point is that today, we know attackers can be continuously jiggling digital doorknobs. We want the key critical infrastructure companies to do that first themselves, and then to tell us the likelihood that their network could be disrupted and not recoverable for a period of time. Because I think we want a resilience benchmark that says a critical power, water, or pipeline company, if disrupted, can recover in four to six hours. And if you&#8217;re being continuously red-teamed by AI agents, externally and internally, you&#8217;re far more likely to know what the paths in are and how many of them you&#8217;ve been able to close or address.</p><p>Before that, to be frank, it wasn&#8217;t an answerable question at a cost that was reasonable for companies.</p><p>Jai (29:42)<br>I actually want to drill down on something you&#8217;ve both mentioned: metrics, measurability, evidence. It seems to me that for systems to be resilient, it can&#8217;t just be anecdotal; it has to be evidence-based and measurement-based in some sense. And we know from general management theory that you manage what you measure. So as we think about evidence-based cybersecurity and the types of metrics we need: Sam, I&#8217;d love to hear your thoughts on which metrics are useful. What benchmarks are useful in developing a system?</p><p>Sam (30:25)<br>I think you need to look at the enterprise as a whole and test it, especially if we&#8217;re talking from a policy perspective. Internal security teams will have much more fine-grained things they&#8217;re working on to realize their security program, but at a higher level, the efficacy of the response to a legitimate threat actor is the ultimate measure. Everything works back from that.</p><p>Of course, historically, as Anne pointed out, actually emulating those threat actors with a red team has been cost-prohibitive, so it has never really happened at scale. That is all changing now, and it can be employed nationally at scale. I would say even today, that&#8217;s the kind of work we do.
So I think the measure is something like: can these organizations defend against, and recover from, realistic-looking threat actors at some continuous cadence, along certain metrics? You have to be able to prevent and block the lower class of adversarial behavior, and for everything above that, you need to be able to recover within some few-hour period, depending on your industry and its criticality. I think that&#8217;s the ultimate measure. There&#8217;s probably more to unpack to make it continuously enforceable, but we should probably be deploying nationally certified teams that do this for every publicly traded company and critical infrastructure operator at scale. That&#8217;s obviously a huge workforce opportunity for us, but it&#8217;s also maybe one of the coolest ways to employ AI to really work on national resilience.</p>
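<p><em>One way to make the benchmark Sam and Anne sketch here concrete: a block rate on the lower tier of adversarial behavior, plus recovery time against a per-sector target. The tiers, sectors, and thresholds below are illustrative assumptions, not a proposed standard.</em></p><pre><code># Sketch only: score a continuous red-team exercise against two measures:
# (1) block rate on commodity techniques, (2) recovery within a sector target.
RECOVERY_TARGET_HOURS = {"power": 4, "water": 4, "pipeline": 6, "default": 24}

def resilience_score(results, sector="default"):
    """results: list of dicts like
    {"tier": "commodity", "blocked": True, "recovery_hours": 0.0}"""
    commodity = [r for r in results if r["tier"] == "commodity"]
    advanced = [r for r in results if r["tier"] == "advanced"]
    block_rate = (sum(r["blocked"] for r in commodity) / len(commodity)
                  if commodity else 1.0)
    target = RECOVERY_TARGET_HOURS.get(sector, RECOVERY_TARGET_HOURS["default"])
    # Every unblocked advanced technique must be recovered within the target.
    recovered = all(target >= r["recovery_hours"]
                    for r in advanced if not r["blocked"])
    return {"commodity_block_rate": block_rate,
            "recovers_within_target": recovered,
            "target_hours": target}

exercise = [{"tier": "commodity", "blocked": True, "recovery_hours": 0.0},
            {"tier": "advanced", "blocked": False, "recovery_hours": 3.5}]
print(resilience_score(exercise, sector="power"))
# {'commodity_block_rate': 1.0, 'recovers_within_target': True, 'target_hours': 4}
</code></pre>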
<p>Jai (32:01)<br>Any other benchmarks you can think of that are important?</p><p>Anne (32:06)<br>I think the key one, as I think about the different kinds of threats, criminals and countries, is: can we feel confident? The average American deserves to know that when they turn on their water, the water filtration system is working, and that when they go to the gas station, they get gas. As we look at countries like China and North Korea, we know China is pre-positioned in critical infrastructure in the US and in our allies around the world with the goal of disrupting it. That could serve military objectives: American service members deploy through our regular commercial airports, so disrupting an airport&#8217;s operations can buy time by delaying a military deployment. And it runs all the way through to concerns that a Chinese-directed disruption could be designed to foster panic or, frankly, to put political pressure on the domestic population, getting Americans to ask, why are we getting involved in a crisis around the world; how does this fit American interests?</p><p>As a national security objective, we really do want to know that the country&#8217;s core infrastructure and military bases are secure enough from a digital perspective. We touched on that before, but I really want to put a metric on it. I think AI makes that achievable if we create a digital twin, for example, of parts of the power grid and test different kinds of attacks against it in order to identify the most effective, highest-ROI cybersecurity.</p><p>There&#8217;s a cost question in cybersecurity that&#8217;s been at the root of the policy failures and challenges we&#8217;ve had. The last time the Hill seriously tried to pass a comprehensive cybersecurity bill was the Lieberman-Collins bill, which failed in 2012. It failed partly over differences of opinion in a number of areas, but also over the issue of who bears the cost. There&#8217;s always this debate. I&#8217;ve had CEOs come to me and say, we&#8217;re up against a government; our government should pay. The answer is: okay, but there&#8217;s some amount of cybersecurity you&#8217;re responsible for in your own network, and then, agreed, above and beyond that it could be the government. But AI now makes this a tractable problem from a cost perspective, and as such a measurable one from a national security perspective. I think it opens up a whole new and interesting line of policy work between government and the private sector to get at a fundamentally more secure digital ecosystem, now that our economy and our security have moved into it.</p><p>Jai (34:35)<br>Yeah, you raise an interesting point, which is that the infrastructure is, for the most part, private sector. That&#8217;s one of the hardest things about cybersecurity: the government does not control the thing that is vulnerable. And you also mentioned something worth drilling into a little more, which is information sharing within the private sector. If you talk to people with different industry perspectives, they will say they have fears around information sharing: antitrust, privacy, a bunch of other things. We talked about policy levers. I think it&#8217;s fair to say the government can sometimes be skeptical that there are real legal barriers, but from the private sector&#8217;s perspective there&#8217;s uncertainty, and we know companies don&#8217;t like uncertainty and won&#8217;t take risks where they feel it. Can you talk about the challenges around information sharing, and whether there might be some clarity or policy levers available, or maybe not?</p><p>Anne (35:44)<br>I&#8217;m very sensitive to those concerns. I worked across four administrations. I came into government in 2007 intending to stay for one year, and as my husband likes to say, one year became 19. I served across Republican and Democratic administrations, and one thing that&#8217;s interesting to me is that the core of cybersecurity, how do you protect the nation in cyberspace, remains a bipartisan goal.</p><p>To your point, the crux of the debate has been what role and responsibility the private sector has, and what role regulation has, where government requires the private sector to do the things it needs to do because there&#8217;s a level of assurance we want to provide citizens. And after the Salt Typhoon attacks, the Chinese attacks against telecoms, a number of the telecoms raised exactly the issue you raised: antitrust concerns.</p><p>I should say that some sectors, like financial services, have long had extensive intra-sector sharing. It&#8217;s the most sophisticated sharing, and I think it builds on processes that already existed across banking. Extensive sharing happens today across banking and financial services, and it began 10 or 15 years ago. They built it because they first managed to crack the code on the concern that competitors would expose each other for brand impact. Essentially, what they said was: if anybody leaks this once, you&#8217;re out. And because attackers use the same techniques again and again, the benefit to a company of being part of it is significant. So they&#8217;ve managed to retain that shared approach.</p><p>But at the time, when the telecoms raised it, I brought together NSC lawyers and Department of Justice lawyers and said, please dig into this. They dug into it and said there are no antitrust concerns, because companies are not competing on their cybersecurity; they&#8217;re competing on their products, and cybersecurity is a shared goal. So we brought in the CEOs of the major telecoms and cybersecurity companies in the aftermath of the Chinese compromise, because it was so significant.
And we actually brought DOJ to the table and said, please answer this directly. And they did. That being said, I think it&#8217;s fair for companies at the beginning of an administration to say, we want to hear this said to us again, out of concern that a perspective may change. But I think as you dig into it, you can see the legal view that cybersecurity is not a competitive factor.</p><p>Jai (38:12)<br>I think that&#8217;s fair. The hardest thing from the company perspective, and now I see it, I had your perspective when I was at the DOJ, and now as a CLO I&#8217;m seeing it from the private sector side, is that so much depends on an administration&#8217;s interpretation. We know through court cases that guidance is not considered regulation and is not binding. And because we increasingly have a bit of a whipsaw problem, companies would like more certainty, either in notice-and-comment rulemaking or in legislation, something they can point their boards to and say, we&#8217;re not going to take on liability, and here&#8217;s why. No compliance officer wants to be fired for making the wrong call. That&#8217;s where the rubber meets the road: the government can say, in good faith, we don&#8217;t see a problem here, and yet the uncertainty can still exist. And there&#8217;s an interagency problem, with many agencies involved, but that to me is some of the friction. You&#8217;re right that financial services has solved it, partially because they&#8217;re used to information sharing. There are all sorts of ways they share threat information, on the cyber side and on the money laundering side, so they&#8217;ve built that muscle memory and are now comfortable with it. That&#8217;s probably some of it, is my guess.</p><p>Anne (39:45)<br>Your comment is also reminding me of another issue we saw. Again, look at Salt Typhoon, the Chinese compromises of telecoms, as a good example, because the impact of that compromise was so significant. Essentially, the Chinese had compromised many large American telecoms, in some cases for a number of years, so they were positioned to collect conversations at will. They could also geolocate individuals based on the nearest cell tower. So it was really broad, in both the number of people potentially affected and the depth of the impact. The other issue I saw in those interagency discussions, to your point, is that we brought together the FCC as the regulator, the Department of Justice, and the FBI, and there was a lot of resistance from the law enforcement community to being at the same table. But the FCC brought a set of tools, responsibilities, and, most importantly, knowledge of the sector that was very important, because our core goal was: how do we prevent this from happening again?</p><p>I chaired those meetings, and at the beginning of each one I always said, we&#8217;re here for one purpose. This is the goal: we want to prevent this from happening again. We&#8217;re one team, so no particular agency moves out and does its own thing until we have a coordinated approach, because there is a set of tools and instruments of US government policy, and some are competing, exactly to your point, Jai.
And to be most effective, we need to work those together. I&#8217;m no longer in government, but I think government can sometimes do a better job of recognizing that there are competing objectives across a regulatory approach, a law enforcement approach, and a partnership approach, given that the private sector owns and operates this infrastructure, and of actually documenting, in these kinds of incidents, here is how we&#8217;re going to work through these different tools most effectively, so companies know what to expect. You&#8217;re raising a very fair point that today it&#8217;s probably pretty unpredictable.</p><p>Sam (41:37)<br>Yeah, the financial sector is definitely the shining beacon of this working well, but I think that has a lot to do with its concentration of budget and talent. A lot of other industries we would deem critical infrastructure don&#8217;t have that luxury. I&#8217;m thinking power, water, the grid; oil and gas to some extent probably has it. That&#8217;s where we probably need different types of policies applied, because proactive information sharing relies on pretty talented people to implement and run those processes, and that probably won&#8217;t work where a single IT person is underpinning an entire water treatment plant. If I were an attacker looking at the homeland, wanting to disrupt and cause chaos, that would be one of my number one targets.<br>We probably need to look at a different policy there, maybe something more like enforced deployment of government-managed technology. That&#8217;s a little similar to how the PLA runs some of its nationwide monitoring, and Anne pointed this out in her Foreign Affairs piece last year: they have persistent monitoring across all critical infrastructure, because there is no comparable worry about data privacy. I&#8217;m not saying we should relax to that extent, but we can&#8217;t believe that folks with no budget and basically one IT person are going to effectively implement security controls, even with AI. So we probably need more enforcement there, because that&#8217;s the underbelly that&#8217;s going to get hit the hardest.</p><p>Jai (43:12)<br>That&#8217;s a great perspective, and I&#8217;d like to drill down on it a little more, because one thing we really haven&#8217;t talked about yet is your perspective as a builder, a founder, a CEO. You bring, I think, a unique perspective. So I would love to hear, to the extent you can share: what is your company building? What are the things you&#8217;re most excited about? Give us the founder&#8217;s ground-level view.</p><p>Sam (43:44)<br>I am a defense optimist like Anne on this, though I think it&#8217;ll be a rocky road to get there; that&#8217;s maybe my long-term view at least. We build autonomous cyber systems, both for offense and defense, with the mission of delivering cyber resilience to the United States. And that, to us, is a very dual-use problem: as we&#8217;ve been discussing, the United States&#8217; critical infrastructure is composed of public and private entities, and that&#8217;s the composition of our customer base.</p><p>But in many ways the company is a long time in the making, built from my cofounders&#8217; experience and mine. My CTO and I both started our careers working in government as cyber operators.
I was in the Air Force; he was at the NSA. And we didn&#8217;t really have the tools we wanted to win. We obviously didn&#8217;t like the bureaucracy either; we wanted to just build. So we both found our way to Palantir. I was there 12 years ago; I think he joined 13 or 14 years ago, and my third co-founder joined 13 years ago. There we built up a lot of data tradecraft and learned how to build hardcore software systems, but we also got a clear view into board-level priorities when it comes to software. I then joined a company called Shield AI, which I know is another portfolio company, as roughly their 20th employee, figuring out how to put AI onto drones back in 2018, when it didn&#8217;t quite work that well yet. We got it working, and now obviously they&#8217;re hugely successful.</p><p>With this company, we&#8217;re trying to fuse all those different experiences and disciplines into building the ultimate company that can actually deliver on this mission. That&#8217;s why we started it. We&#8217;d always wanted to start a company, and the question was: who else is going to do this in a seriously trustworthy way, and who has the engineering background to actually pull it off? I think a lot of folks in the security industry are tired of the security-industrial complex, if you will, which is these small, niche point products that are really cool tools but not built for national priorities. We asked ourselves, who else can do this in this AI moment, which is going to get out of control? We felt we had to start this company, in a way.</p><p>What we do specifically is build a lot of those systems. Anne alluded to this, and I liked her comment: whether you&#8217;re trying to defend, closing the doors and adjusting the security controls, or attack, the start of the workflow is actually the same. That&#8217;s the workflow we deliver in our product, in a hyper-AI-enabled way that doesn&#8217;t sacrifice trust or scale. A lot of that is Palantir-style software discipline: what was access controls for them is guardrails for security AI agents for us.</p><p>So when I talk to a Fortune 500 CISO and say, your ultimate measure of resilience is whether you can emulate a realistic and relevant adversary and know that you&#8217;re up to par, they&#8217;ll say, sure, but nowhere near in production, because I don&#8217;t trust it. Then we go through how we&#8217;ve actually built the system, and they say, okay, I trust it. Let&#8217;s do it. That is basically the technical barrier to what we&#8217;re talking about in terms of resilience, and we build that. So it&#8217;s a lot of defensive work in the Fortune 500, and in the government space, as you could imagine, increasingly a lot of offensive work. That includes red teaming, which is effectively offense for defense: how do you scale teams that have historically been cost-prohibitive, now that you can unleash them across the Department of War or the US government? But it also includes offensive cyber operations, where we want the ability to deter at machine speed, for reasons of both defense and offense. That is not a toy-like product; it is a very serious capability, which we build.</p><p>Jai (47:37)<br>I&#8217;d also like you to share with the audience the Little Tech perspective.
A lot of our policy framework at a16z is built on distinguishing between the policy environment startups face and what a big company faces and can deal with. But talk to us a little about Method Security: how many people are there? What are you like as a startup? Are you the prototypical couple of guys in a garage? Just give us some color on the nature of your company.</p><p>Sam (48:12)<br>It was a basement rather than a garage, but we are 21 people full-time as of yesterday, actually. And we&#8217;ve been keeping the company lean intentionally. Software engineers are so much more productive right now, and that changes how you think about the composition of a company.</p><p>Jai (48:29)<br>Is that because of AI coding?</p><p>Sam (48:31)<br>Yes. So we&#8217;ve been trying to think from first principles about what the engineering team should look like. We&#8217;re also forward-deploying a lot of our engineers into customer situations, to build better products but also to stay engineering-focused. Of the 21 of us, 18 are engineers, so we&#8217;re very engineering-focused. I think that&#8217;s ultimately our strategy for raising Method&#8217;s ceiling and building tech with compounding returns. But we are taking a very small team and trying to sell to the Fortune 500 and the US government, and from a Little Tech perspective that is very, very hard. It&#8217;s almost like we&#8217;re on a suicide mission.</p><p>Jai (49:11)<br>Explain that. Why is that?</p><p>Sam (49:14)<br>If you have not delivered and sold software to the US government before, and you go into it having read all this goodness on Twitter about how it&#8217;s the new thing and where you should put your career, you are in for a world of hurt. Unless you have that pain tolerance and a deep desire to deliver for the mission, you will work on it for nine months and then start pivoting to something else.</p><p>So it&#8217;s a really long-term bet, both for the company and for how we&#8217;re building the company&#8217;s personality. And here is a specific Little Tech thing that touches on AI, the government, and building a business right now: things in the commercial space, especially AI capabilities, are moving so fast that the government is falling even further behind in its understanding of what is possible. Even with all the work the administration is doing, to its credit, the gap is actually widening, because innovation is accelerating too fast. A lot of the things we&#8217;re talking about here, we have already deployed with Fortune 500 organizations: they can emulate adversaries at scale, safely, in Fortune 500-level environments.</p><p>There are some change agents in government who get this, but as a whole there&#8217;s still a lot of inertia. So what the government will do in that case is look to run early R&amp;D experiments on what our capability already does in production, where we&#8217;re saying: it&#8217;s already delivered. You don&#8217;t need to wait two or three more years to do this. It&#8217;s actually ready.</p><p>That is still a huge cultural battle we have to fight all the time. So I spend a lot of my time educating senior leaders that this is actually possible now.
You can call up my Fortune 500 clients as references and ask if you&#8217;d like. But a lot of it has to do with the color of money, getting things accredited, changing what was supposed to be an early pilot with FY28 delivery into something we can actually deliver next week.</p><p>There&#8217;s a huge mentality change that has to happen there, and cyber is probably one of the top places where things are accelerating most quickly.</p><p>Jai (51:38)<br>It&#8217;s interesting, because there has been some movement in the procurement space with the recent NDAA. One, I&#8217;d love you to comment on how that may have impacted your ability to compete in this space. And two, there are still some challenges with what you mentioned; I think the term of art is commercial-first, right? That&#8217;s the goal.</p><p>The world has changed. There was a time when, for technology, the government was the principal buyer. Now things are progressing much more quickly in the private sector, for a whole host of reasons. But is that what you&#8217;re getting at with the commercial-first principle, and is there still work to be done there that would be meaningful for your business?</p><p>Sam (52:25)<br>Yeah, I mean, the commercial sector is a bigger market for us, candidly. There&#8217;s a unique moment in time where we&#8217;re putting extra emphasis on the government space, just because we need to figure this out and we need to figure it out today. But the commercial security market is definitely bigger as a total addressable market for us. Ultimately, I think cyber is the ultimate dual-use use case, in my opinion. We want to deliver the capabilities that the government deserves and needs.</p><p>Yes, I know, if you&#8217;re asking which side of Harvey Dent we&#8217;re on with that one: probably both. That is only possible if you&#8217;re commercially competing your technology with serious enterprise buyers that have no friction in buying what they want, every second of every day. Otherwise it&#8217;s a GOTS-like thing that will inevitably become a dead science project.</p><p>Jai (53:18)<br>And has the NDAA, does the NDAA have a meaningful impact on your business, and what challenges remain that you would like to see addressed?</p><p>Sam (53:26)<br>It does. The Little Tech perspective is that the top-down authorities and decisions are very helpful, but it takes years for them to trickle down through the defense and government apparatus to the O-6s or GS-15s who are actually making the decisions. It might take two years for that culture to get down there. So it is definitely helping, and this administration is doing a very good job of injecting change agents at multiple levels. But I don&#8217;t want to give people the hope that a top-down authority has direct effects the next day. It takes a long time, and it takes a huge battle on the inside.</p><p>Jai (54:09)<br>Are there any other policy changes you&#8217;d like to see in future NDAAs that would be meaningful for your business?</p><p>Sam (54:17)<br>I think increasing organizations&#8217; flexibility for in-year purchasing decisions is probably the most important thing for keeping up with how fast commercial innovation is happening, because the current planning, programming, budgeting, and execution cycle is still a three-to-five-year cycle, and that&#8217;s just de facto three to five years behind.
So giving organizations real, increased R&amp;D and procurement spend that they can use at their discretion is necessary.</p><p>Jai (54:49)<br>Fantastic. Well, Sam, thank you, and thank you, Anne. It&#8217;s been a wonderful conversation and I appreciate your coming on the podcast.</p><p>Sam (54:54)<br>Thank you.</p><p>Anne (54:54)<br>Good to be here.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately.</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Real AI Race With China Is Who Sets the Default]]></title><description><![CDATA[A conversation with Matt Cronin on the CCP's incentives, China&#8217;s diffusion strategy, and what &#8220;winning&#8221; requires from the U.S.]]></description><link>https://a16zpolicy.substack.com/p/the-real-ai-race-with-china-is-who</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/the-real-ai-race-with-china-is-who</guid><dc:creator><![CDATA[Jai Ramaswamy]]></dc:creator><pubDate>Tue, 17 Mar 2026 13:25:53 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191148116/1612b165aba1f1e121570b1ea724a300.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In AI policy, it&#8217;s become a reflex to say we are in a global race with China. That shorthand can obscure the true nature of the competition. China and the U.S. aren&#8217;t just competing on model performance or chips; we&#8217;re competing on the next computing systems the world adopts, which will ultimately determine who holds economic and political power for the next generation. </p><p>In this conversation, Jai Ramaswamy, chief legal and policy officer, sits down with Matt Cronin, senior national security advisor at a16z, to make these competitive dynamics concrete. Prior to joining a16z, Cronin served as Chief Investigative Counsel and Deputy General Counsel to the U.S. House Select Committee on the Strategic Competition between the U.S. and China. He has also worked on China-related national security issues as a federal prosecutor, held senior roles at the Department of Justice, and served as Director of National Cybersecurity at the White House.</p><p>Cronin explains the factors behind the Chinese Communist Party&#8217;s AI push: its long-running view that advanced AI is a civilizational technology and the underlying belief that thriving democracies threaten the CCP&#8217;s legitimacy.
Jai and Matt discuss Beijing&#8217;s strategy to subsidize and diffuse AI at scale so businesses, governments, and consumers worldwide default to Chinese systems. If the default assistants, platforms, and developer ecosystems are aligned with a totalitarian model that treats information as something to be controlled, it can shape what people see as true and permissible. As Cronin puts it, we risk &#8220;a reality where not only [is the truth] unknown, it becomes unknowable.&#8221; </p><p>The conversation shifts to what &#8220;winning&#8221; looks like for the U.S., specifically the policy levers that matter now, including open source development as a competitive advantage and defense procurement reform, where Cronin has spent significant time. Key to success is to play to America&#8217;s historic strengths: decentralized innovation, competition, and the rule of law, paired with government systems that can meet the moment. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;871633a7-81ea-4eee-bdb4-5b0dafb3a1cf&quot;,&quot;duration&quot;:null}"></div><p>Topics covered:</p><p>00:25: The Chinese Communist Party&#8217;s motivations in the AI race</p><p>03:14: Why China&#8217;s &#8220;miss&#8221; on the internet shaped its push into AI</p><p>06:54: State-led vs. market-led innovation models</p><p>09:54: What happened to China&#8217;s VC ecosystem</p><p>12:15: China&#8217;s strategy for AI diffusion and adoption</p><p>16:08: What&#8217;s at stake if China wins the AI race</p><p>22:53: The 3 key measures of US success in AI</p><p>24:31: Why open source matters for global adoption</p><p>30:11: AI policy levers that play to America&#8217;s strengths in this global race</p><p>33:21: Why defense procurement reform matters to the competitive dynamic</p><p>41:28: Final takeaways on competition, policy, and democratic advantage</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>This transcript has been edited lightly for readability.</em></p><p>Jai Ramaswamy (00:25)</p><p>Today I&#8217;d like to introduce Matt Cronin to the a16z AI Policy Brief. Matt is a Senior National Security Advisor with a16z. Matt, why don&#8217;t you introduce yourself to the audience?</p><p>Matt Cronin (00:38)</p><p>Thanks, Jai. Sure. So my name&#8217;s Matt. In terms of background, I was a federal prosecutor for years, specializing in China-related issues. I then held senior national security roles at DOJ, Main Justice in DC, and the White House. And I was chief investigative counsel on the China Select Committee before I came over here.</p><p>Jai Ramaswamy (00:58)</p><p>That&#8217;s great. Well, it&#8217;s a pleasure to have you here today, and I&#8217;m looking forward to talking about things China-related.
So I want to start off with an assumption, and I don&#8217;t think it&#8217;s a far-fetched one: there really are only two nations on earth right now that have the expertise and advanced technology in AI foundation models. And you have a relevant vantage point, given your background in the executive branch as well as with the China Select Committee.</p><p>From what you&#8217;ve seen, what is motivating that rivalry? We often don&#8217;t really get into what&#8217;s motivating the other side. What&#8217;s driving the Chinese push to dominate?</p><p>Matt Cronin (01:56)</p><p>I can talk first generally and then get into AI. Generally, the Chinese Communist Party sees democracy as antithetical to its long-term survival. This is true of other totalitarian regimes; the Soviets were the same. Their concern is that when democracies thrive, it makes the legitimacy of their rule seem fake or insincere.</p><p>So the sheer fact that Taiwan, for instance, exists, and that you have Han Chinese thriving and doing well in a free society, undermines their claimed right to rule, which says: you need us and our authoritarian rule to be prosperous and safe. Any thriving democracy, to a certain extent, undermines the legitimacy of their rule, and they want to combat that.</p><p>In terms of AI, the Chinese, by which we really mean the Chinese Communist Party, or CCP, have seen AI as absolutely essential for years, longer than we have. Chairman Xi has stated that it is a technology of civilizational impact and a centerpiece of what they call new quality productive forces, which is the core of the new economy for them. And they have been investing in it, not merely with words but with funding and resourcing, for over a decade, well before we were engaged.</p><p>Jai Ramaswamy (03:14)</p><p>Drawing out that notion that they see this as a civilizational imperative: it&#8217;s always occurred to me that the Chinese government, and it&#8217;s important to distinguish here, because there&#8217;s a Chinese diaspora all over the world, we&#8217;re talking about the CCP and the state, whiffed, in a sense, on the internet. And they recognized that. It was Western, largely US-based technology, and open source technology, quite frankly, that created everything we know about the internet. Now there&#8217;s some criticism of where the internet has gone, but it is truly a communications marvel, yet one that embodies values antithetical to those the Chinese state would want to impose on its population and elsewhere. Can you explain how that miss helps explain why they went all in on trying to lead in AI?</p><p>Matt Cronin (04:14)</p><p>Absolutely. When the internet came out, as you correctly noted, China saw it as a threat and distanced itself from it. Only late in the game, around when Xi came in, in 2012, 2013, 2014, did they realize the internet was a neutral technology that could also be used as a tool of oppression, exactly against its original purpose. When AI came around, the idea of it at least, in universities and research centers, they realized the same thing could apply here. But what if, rather than starting out pro-freedom, pro-democracy, pro-natural rights, it instead started with a pro-totalitarian bent?
And so they invested really heavily in AI to get there first, both at the frontier level, the top-end technologies and models, and in broad diffusion, so that the world would essentially run on an AI ecosystem ultimately controlled by the CCP.</p><p>Interestingly, with their original investment, they put most of their eggs in the basket of a human and primate brain simulacrum. It&#8217;s a very odd story. I don&#8217;t know if I&#8217;m breaking news here&#8230;</p><p>Jai Ramaswamy (05:34)</p><p>And to be clear, you&#8217;re not revealing any classified information.</p><p>Matt Cronin (05:38)</p><p>Oh, to be extremely clear, this is all openly available, nothing from the government. But essentially what it comes down to, and you can see some of the research in the open today, is that the Chinese bet heavily on the idea: what if we can recreate the human brain through digital means? And they pursued that through what is best described as disturbing experiments and research done on primates. They bet so heavily on that that they missed the mark on the LLMs and other models we were developing in the private sector.</p><p>To draw an analogy: think back to the pictures of people trying to discover flight. All these people had wing suits, thinking, I&#8217;ll just fly like a bird, whereas the Wright brothers correctly intuited that they could essentially hack physics through structural changes and different engineering techniques. We basically did the same thing in the West, in the United States, where clever engineering led us to race ahead of them, even though they had invested enormous sums in AI to win that race early on. They were caught flat-footed and had to pivot quickly to more of an LLM-based approach. They still have this digital simulacrum effort as well.</p><p>Jai Ramaswamy (06:54)</p><p>It&#8217;s interesting that you draw that distinction, because I think it also speaks to a difference in the way research and early-stage companies are funded in the two worlds, and it draws out both the strengths and weaknesses of each system. It feels like one of the big distinctions is that because the Chinese system is so state-driven, you can have these misfires: once the state commits to a particular view of research, if that ends up being a dry hole, you have to reorient and re-pivot. Whereas when we say &#8220;we&#8221; in the West did X and Y, the &#8220;we&#8221; is a distributed model. It&#8217;s not a government saying, you do X; it&#8217;s hundreds of entrepreneurs trying to figure out what&#8217;s going on and hitting on something.</p><p>It would be interesting for you to talk about the differences, the strengths and weaknesses of those models. And as we pivot to what the United States can do, what are our particular strengths that we should be leaning into?</p><p>Matt Cronin (08:07)</p><p>Absolutely, you hit the nail on the head. China does have competition within markets. Extreme competition, which has led to what they currently call involution, a fancy word for hyper-productivity and hyper-supply, because it&#8217;s all state-subsidized. That&#8217;s why you have EVs that effectively cost negative dollars: they subsidize the heck out of it, and a bunch of companies compete over it. So competition exists to a certain degree.</p><p>But again, the competition focuses on productivity: just making stuff, pushing stuff out.
Those are the metrics. Make an LLM, have one person use it, and you get state subsidies. That can lead to incredible productivity, but because there&#8217;s less focus on profit, which basically just means efficient productivity, it leads to all sorts of wasted capital, inefficient models, and a lot of dry holes. It&#8217;s the fundamental problem of authoritarian, particularly Marxist, systems.</p><p>So you will have all sorts of innovations come through, and some of it is really remarkable, particularly in autonomy. They&#8217;ve done a lot of great work. But it ends up perverting the course of research, in some sense, by injecting incentives beyond what the market can bear or what is actually there. So it ends up running into a bunch of dead ends, or just stops being productive or helpful.</p><p>As for the Western side, they see what we do as extraordinarily chaotic and, frankly, stupid. And there&#8217;s an argument to be made, to a certain extent. But because we have a dynamic system with multiple layers of competition within certain rules, rule-of-law strictures they don&#8217;t have on their side, we can rapidly compete and find solutions. It creates incentives better aligned with long-term human ingenuity.</p><p>Jai Ramaswamy (09:54)</p><p>It&#8217;s interesting, because not too long ago the Chinese were trying to develop a Silicon Valley model. You had venture capitalists, you had others, but that seems to have gone by the wayside as well. What accounts for that transition?</p><p>Matt Cronin (10:13)</p><p>Yeah, there are two things there. One, China actually had a VC community that it very carefully cultivated for decades. About two or three years ago, when a lot of debt was coming due and the economy wasn&#8217;t doing so well, they made a big mistake: they allowed all the VC firms to activate a redemption clause that was customarily included in these types of contracts, which said, if your company doesn&#8217;t work out, if you don&#8217;t make a profit, I can call you up and get my money back immediately.</p><p>Jai Ramaswamy (10:50)</p><p>And the government was inserting those?</p><p>Matt Cronin (10:52)</p><p>The government allowed state-backed VCs in China to put those clauses in. Then, when things went south, all these VCs started going to founders and effectively bankrupting them immediately, even though there was an understanding, at least implicit, as is the case here, that if you&#8217;re backing a founder there&#8217;s a good chance you won&#8217;t get an ROI. Some will be 1,000x, some will be zero; that&#8217;s just how it works. In doing so, they told a lot of founders: I should not work with these VCs. I don&#8217;t want to ruin my life, have debt collected from me and my family, have the local government harass my family and seize my bank accounts. So that really seized up VC-backed startups, and they never really recovered.</p><p>On the other side, there are state-backed capital models, and those, while in many ways inefficient, have been more effective. In the AI space, for instance, they don&#8217;t just give money.
They&#8217;ll build entire industrial parks and just invite in startup companies and say: you have free power, negligible costs, heavy subsidies; just make stuff. And the other part of it is that whereas we really focus on frontier and enterprise, they hyper-focus on consumer, not just because they want the consumer to do well, but because that&#8217;s an area we&#8217;ve not focused on as much. And so they&#8217;ll...</p><p>Jai Ramaswamy (11:50)</p><p>Heavily subsidize, meaning they&#8217;re making strategic decisions as to where enterprises can have the biggest impact?</p><p>Matt Cronin (12:15)</p><p>Yeah, kind of. It&#8217;s also because China is more of an internet culture, a smartphone-native culture, than here. They will try to make a small area like a thousand times better due to AI. And that&#8217;s what this one company, state-backed directly or indirectly, will do. And when they do that, it allows for this nationwide diffusion of AI, which is a huge goal for them.</p><p>Their goal is to get to a frontier model of their own, and they actually have what they call a Manhattan Project for it: making a new lithography facility. Reuters, I think, reported on it earlier this year; it had technically been under wraps. It&#8217;s basically an effort to recreate the Dutch technology they don&#8217;t otherwise have access to, so they can make these chips. So they&#8217;re trying that.</p><p>But the other side of it is that they&#8217;re trying to get broad diffusion. I think the goal is to have something like 70% of all industries fully adopt AI by 2030, which is really crazy. And they&#8217;re really pushing for that. The rationale is that many times historically, the nation that wins at the frontier, if it doesn&#8217;t win in ecosystem diffusion, will actually ultimately lose. A historical example: the Germans were ahead of us in chemistry. They were the pioneers in all sorts of chemical processes and industrial chemistry ingenuity. But we as Americans trained everyone to a high-school degree of chemical understanding and then fostered the development of different chemical processes across dozens and dozens of industries. We won, they lost. Another place we won was electricity: we were at the frontier, but then we made sure there was broad diffusion of electricity across industries. And the third point, I think, is critical: we went out of our way to make sure electricity didn&#8217;t just benefit the elite, it benefited everybody. It helped the entirety of democracy. For example, a washing machine in your house benefits everyone. Every single individual saw a huge gain in productivity. So they&#8217;re not necessarily doing that, but they are very much focused on mass ecosystem adoption in the country and ultimately globally.</p><p>Jai Ramaswamy (14:22)</p><p>Yeah, I can&#8217;t remember if it was Lenin or Trotsky who visited New York City in like 1905, just at the turn of the century, and was astounded at the general availability of some of these technologies to consumers in the United States.</p><p>Matt Cronin (14:44)</p><p>That&#8217;s exactly right. And so they&#8217;re trying to get that, but they&#8217;re doing it through their own model, which exacerbates their incredible debt problems. Which gets to the next issue of why it&#8217;s important for them to see AI as their escape hatch to get around what are really huge systemic problems that they have.
These problems have been building up, whether it&#8217;s debt, a demographic time bomb (they&#8217;re really going to get old before they get rich), or all sorts of other issues like double-digit-plus unemployment, particularly among the youth.</p><p>And it&#8217;s hard for us to understand that in the West, because what we see, in part due to clever propaganda, is China as dancing robots and amazing 25th-century cities and bullet trains going everywhere. But at the same time, there are over 500 million Chinese citizens with an eighth-grade education or less&#8230;and huge environmental problems.</p><p>And so they see and hope for AI as the means to minimize those problems, jump them to the next tier of technology and industry, and then make theirs the global standard, which would make them not only economically powerful but also politically powerful, carrying a pro-authoritarian bent worldwide. Because it&#8217;s a knowledge technology, not a dumb technology.</p><p>Jai Ramaswamy (16:08)</p><p>Yeah, so let&#8217;s pivot to that for a moment. What does victory look like for the United States, for China, in this? I know we&#8217;re both former prosecutors, so I&#8217;m gonna violate the cardinal tenet of the interview and ask you a compound question. In addition to what does victory look like, what should we be thinking of as the stakes, whether we win or not? Because I think some people would say, okay, I use a Chinese vacuum cleaner, and I&#8217;ll use a Chinese model. So talk to us a little bit about the stakes.</p><p>Matt Cronin (16:38)</p><p>Yeah. One, that Chinese vacuum cleaner is spying on you. Ironically, a news report from about two weeks ago found a critical flaw in Chinese vacuums that allows them to map your home and spy on you through your router. So I&#8217;ll just note that for the public.</p><p>But yes, it&#8217;s a great question, super important. So let&#8217;s start with the stakes. It&#8217;s really important to understand that this is not a competition between, say, the United States and Japan, or the United States and Switzerland, where whoever wins, the rough parameters and the moral underpinnings are about the same. One country gets richer than the other. Money changes hands, no big deal.</p><p>I don&#8217;t want to overstate it, but it really is existential as to the future of humanity. Meaning, if you care about human rights, democracy, the ability to perceive objective reality, you have to be 100% committed to ensuring that America and the West win the AI race.</p><p>What China wants is a future where a mid-level politician or a business person or an engineer in Burkina Faso, India, Australia, you name it, will use their platform by default. And that platform will give them information that subtly moves them into the CCP&#8217;s orbit. Whether it&#8217;s &#8220;this is a great deal, you should follow through on it,&#8221; or &#8220;Huawei is a wonderful organization, you should work with them,&#8221; or &#8220;What are you talking about with Tiananmen Square? Those were just a few groups of radicals, and it was put down peacefully.&#8221; That&#8217;s what they want.</p><p>They want a reality where the truth is not only unknown, it becomes unknowable. And that is profoundly dangerous. We have arguably never had that sort of existential risk to humanity in our history as a species. So we cannot let that happen.</p><p>Jai Ramaswamy (18:30)</p><p>Yeah, and I don&#8217;t think you&#8217;re overstating it.
What&#8217;s interesting is that sometimes AI is talked about as just another industry or technology. But as we talk about at a16z, and I think Marc has been on record saying this, as have many people, AI is actually a new control layer for computing. It&#8217;s the new way that we interact with computers. We interact with them in natural language, as opposed to through some sort of machine language where we have to learn a new language to communicate with them. They&#8217;re now communicating in the way that we normally communicate with other human beings.</p><p>But as a result, whoever controls the nature of that control layer controls, to your point, the outputs, and potentially reality and how we think about things. I analogize it to when I was in the government and we were negotiating on the margins of what had been called the Budapest Convention, a convention on cybercrime, which I&#8217;m sure you&#8217;re familiar with, which governed how countries would share information in real time on cybercriminal activity. But it was limited, and the United States worked hard to limit it to cybercrime and cybersecurity, which were very narrowly defined things. The Russians and the Chinese at the time wanted to expand it to cover not cybersecurity, but information security. And information was anything that was a threat to the party, to the state. If that had been adopted, it would have created a very, very different international system in terms of what was being policed on the internet. That didn&#8217;t happen with the internet, but now we have this new thing that will probably become more important than the internet for the diffusion of information.</p><p>And what they&#8217;re trying to adopt is this broader notion of information security, where the state is entitled to control the types of information that people get, whether it&#8217;s for domestic consumption or international.</p><p>Matt Cronin (20:34)</p><p>That&#8217;s exactly right. It&#8217;s a great analogy. And it goes back to what we were saying at the top of the hour: for the CCP, anything about democracy doing well is a systemic challenge to their rule. Because if you can do well in a democracy, you don&#8217;t need them. And so they are absolutely terrified of that. If you look at their internal or external-facing communications, they deeply care about this. And so it is absolutely essential that we don&#8217;t create a world that is safe for them. Because once they have that safe space, they do everything they can to export it abroad, and it leads to just horrific outcomes.</p><p>I remember an unrelated investigation. If you&#8217;re clever, you can get your hands on a lot of PRC procurement documents about what they&#8217;re looking for. This was back in, I think, 2023. I still recall one of them talking about how they were developing an AI program that would identify people who could become a threat to the state, which is basically the plot of Captain America: The Winter Soldier. It was like Hydra, legitimately figuring out future crime. It said, in effect: if we took someone&#8217;s land and forcibly evicted them, and they&#8217;re a member of a minority, or they engage in any sort of deleterious activity like graffiti, just detain them. There&#8217;s no due process. It&#8217;s just: that person could pose a threat given certain metrics, so put them away, disappear them.
And then you can also identify them with facial recognition anywhere in the city in under three minutes. That&#8217;s what they&#8217;re already exporting abroad to authoritarian regimes. That&#8217;s the future they want to create.</p><p>So again, whatever issues you may have with our government or any other government, the beauty of being in a free society is that you can see the problems of any form of government, warts and all. Just understand that you can&#8217;t let the perfect get in the way of the good here. Sure, we have to work on those problems. But a future where a US/Western AI ecosystem is the default rails would allow you to work through them and get to a more perfect union, a more perfect form of humanity. A totalitarian one is the exact opposite. No one, aside from the CCP, wants that.</p><p>Jai Ramaswamy (22:53)</p><p>So let&#8217;s pivot a little bit away from China to the U.S. What does victory for us look like in this space? What are the things that should be done? And, I think perhaps most importantly, we at a16z have been proponents of open source. What&#8217;s at stake in the open source debate? Again, a compound question.</p><p>Matt Cronin (23:14)</p><p>Yeah, those are two great ones. I&#8217;m drawing a lot from the Trump AI Action Plan and executive orders, previous executive orders, and different thinking in think tanks and national security circles. I would say there are three interrelated goals. You have to achieve all three to get to the right outcome.</p><p>One, you want to make sure that the United States has an unassailable lead in frontier technologies. Two, you want to make sure that the U.S. and Western AI ecosystem is adopted by the majority of the world, and that gets to the open source issue. And three, you want to make sure that these technologies, these systems, lead to a flourishing not only of democracies in a broad GDP sense, but of the general citizenry, like electricity did for everyday Americans and ultimately everyday people around the globe.</p><p>If you can achieve all three of these, it would, I think, usher in a golden age for humanity. It wouldn&#8217;t be perfection, there&#8217;s no such thing as perfection, but it would be a dramatic upscaling of our capabilities, our quality of life, and what we can do as humans.</p><p>Jai Ramaswamy (24:31)</p><p>And talk a little bit about how open source fits into that. Clearly the Chinese are now at least at par with us, or potentially leading us, in that area. But what is at stake there? And does it matter if you&#8217;ve got the best version versus the one that everybody&#8217;s using?</p><p>Matt Cronin (24:49)</p><p>That&#8217;s exactly right. So there was research that came out, I think, two or three days ago; I was just reading it this morning. Our open source models are better, but they&#8217;re not so qualitatively better that using a Chinese model doesn&#8217;t make sense. Thinking just monetarily, that&#8217;s because Chinese models cost, I think, about one-fortieth as much on average to use per token. And that&#8217;s because the CCP subsidizes them. So right now you have this massive export from China of compute and data through these models, which by regulation have to be pro-CCP. Whether or not it&#8217;s activated abroad yet, it&#8217;s all in there by law. And it&#8217;s going out and being used throughout the rest of the world. And so you can&#8217;t blame the startups for using a platform that is currently legal and way cheaper.
The startups are by default trying to change the world and defeat a Goliath in their own native industry.</p><p>But you can fault the rest of the world for letting that happen, for not having similar opportunities for open source startups or open source platforms within the United States and elsewhere.</p><p>Jai Ramaswamy (26:01)</p><p>And look, our view has changed somewhat. Not too long ago, there was a worry about open source. Now, it&#8217;s hard to draw a direct line between the uncertainty around open source that existed then and Chinese leadership, or at least parity, today. But nonetheless, it&#8217;s changed. I think there was more skepticism as to whether open source itself was a national security threat. And now it seems pretty clear that the lack of open source, not leading in open source, is probably the bigger national security threat.</p><p>Matt Cronin (26:33)</p><p>Yeah. For these sorts of things, you should always be really careful about claiming to definitively know exactly what the right answer is and to know the future. You always have to think about the foxes versus the hedgehogs, right? You have to be enough of a hedgehog to know the goal and where you&#8217;re going, and enough of a fox to be able to try different things. We are open to different ideas. And I think the mistake that was made was deciding that we know for a fact open source is bad, full stop. That really stifled a lot of innovation. We&#8217;re trying to fix that now.</p><p>One of the things to fix it long-term: DARPA, for instance, is starting to stand up projects that foster open source model development in specific areas. I think doing that even more broadly would be really beneficial. Or doing something similar to what Roosevelt did when we created the Tennessee Valley Authority, making electricity open and plentiful to all. You could do that for AI, but you could also do that for electricity itself. Our grid system is 60, 70-plus years old. It is rickety, it is inefficient, and right now just changing a few things at the margins can dramatically alter the availability of new critical infrastructure, new data centers, and more abundant and cheaper power.</p><p>An example, and Ezra Klein gets full credit for this, from his recent book Abundance: Canada arguably has environmental standards as good as, if not better than, ours. To build the same thing in Canada and the United States, in Canada it takes about two years; here it takes seven and a half years. That&#8217;s crazy. There is no reason for that. And it&#8217;s simply down to the accretion of unnecessary processes layered onto statutes that were made 50-plus years ago. And these processes have nothing to do with the original congressional intent, or even with the ultimate aim of benefiting the citizen. So, as is the case in Canada and all over the world, you can have completely safe environments while also reducing NEPA-style processes, which allows for building, for re-industrialization in a safe way rather than de-industrialization.</p><p>Jai Ramaswamy (28:35)</p><p>Yeah, I think that&#8217;s fair. The one thing I might take a little issue with is that I think it&#8217;s hard for open source especially to be government-directed. In some senses, the Chinese Communist Party itself was caught by surprise, I think, by the development of DeepSeek, which actually happened over in some corner, out of a hedge fund, not in the areas the state was focused on. And that&#8217;s the way open source kind of does work.
And an area where I think the United States is now leading is AI agents. OpenClaw, which is the new thing making a ton of buzz in the industry, is an open source project. And again, it seems to me that the thing we can lean into as a country is that we create these things in different parts of our ecosystem, in different parts of the stack, and that freedom and openness, in a sense, is what allows us to solve new problems. So when it comes to electricity and things that are clearly regulated, a state-directed effort makes sense; with open source, it&#8217;s a little bit harder to direct energy in the way that&#8217;s going to be most efficient.</p><p>Matt Cronin (29:42)</p><p>Yeah, to clarify, it was more that the government should be removing friction to allow the private sector to engage in a dynamic process. Not saying, you have to declare the following people are great in your model, but more just saying: here are resources, go forth. And if you&#8217;re not productive, then you go out of business and that&#8217;s on you, but you really get a good start.</p><p>Jai Ramaswamy (30:01)</p><p>Right, and in a sense, that&#8217;s our secret sauce. We&#8217;re playing to our strengths if we do that.</p><p>Matt Cronin (30:08)</p><p>Right, yeah. Do not try to out-China China. That doesn&#8217;t work.</p><p>Jai Ramaswamy (30:11)</p><p>You talked a moment ago about human flourishing. Look, we live in a cynical age. I sort of joke that if we were having the debates of the 90s around the internet now, it would be very different. We were a very confident society in many ways back then, coming out of the Cold War, all of that. There&#8217;s a lot more diffidence in society today.</p><p>And so, you know, I think when people hear words like human flourishing, their eyes roll and they&#8217;re like, yeah, sure. I tend to believe in these things and think that freedom of speech, freedom of expression, and freedom of association are important, and in a sense are innovations that the U.S. and the West brought to the table. But can you give any sort of concrete recommendations or thoughts around what policy would look like that really advances human flourishing, as opposed to increasing the authority of states, and authoritarian states in particular?</p><p>Matt Cronin (31:20)</p><p>Sure. Thinking this over, one of the historical analogies I went back to is the GI Bill. The GI Bill was one of the most successful democratizing and wealth-distributing policies in human history, at least in American history. And all it was was saying: hey, you&#8217;ve come back from war, thank you for your service. You can go and learn nearly any trade, get whatever sort of education you want, and go out into the economy and make your way.</p><p>And that led to an incredible explosion in productivity, innovation, and the development of the middle class, which is absolutely essential for democracy not only to thrive but even to survive. Making something similar to that using AI, that alone, without going into anything more complicated, would do a tremendous amount of good.</p><p>And now, of course, it&#8217;s more advanced; we&#8217;re more capable of doing this. AI can allow for rapid reskilling and retraining.
Anyone can reach nearly a master&#8217;s level of education, particularly when it&#8217;s human plus AI, the whole centaur, mixed-capability model. That would not have been possible even five years ago. So if you really tune that up and push it outward, you could allow huge swaths of society to gain skills that are currently gated off in colleges, graduate schools, and trade schools. Maybe that would have to be supplemented in some ways in certain cases. If you&#8217;re a plumber, you probably want some sort of brief apprenticeship rather than just getting into the pipes in my house based purely on AI. But it would rapidly accelerate that. And similar to the GI Bill, my intuition at least is that it would lead to a greater degree of human flourishing, which is perhaps a vague term, but more concretely it would increase opportunity for everyday people to do things that, because of the cost of education, because of these gating functions, because of geographic limitations, they just currently can&#8217;t do.</p><p>Jai Ramaswamy (33:21)</p><p>Matt, I want to pivot a little bit to an area that I know you&#8217;ve been working on more recently, which is defense procurement reform. There were some pretty interesting things that happened in the recently passed NDAA. Sorry, I&#8217;m speaking in Washington acronyms: the National Defense Authorization Act, which happens every year. And there were some important innovations in this year&#8217;s NDAA. Can you describe what happened there? And then we&#8217;ll pivot into how that could impact AI startups and the ability of AI to be incorporated in some of the things that the government does.</p><p>Matt Cronin (34:00)</p><p>Sure. At the most fundamental level, there&#8217;s a recognition that something has gone terribly wrong when we spend nearly a trillion dollars a year on our defense, but simultaneously, according to public reporting and open source calculations, we would run out of critical munitions in about two weeks of major conflict against a peer adversary. That&#8217;s completely crazy. You&#8217;re spending the most on the military of any country in human history, but you run out of things to shoot in two weeks. Something&#8217;s gone horribly wrong. So we dove under the hood and realized there are a few fundamental problems.</p><p>What it all came down to was that the Defense Department, now specifically the Department of War, starting in the 1960s, created a very centralized, regulation-heavy, compliance-heavy system that, in its ultimate evolution, ended up separated from the broader economy and broader industry. So there were only five or so companies, essentially custom-built to do business with the Pentagon. No one else could jump through the necessary regulatory compliance hoops to do business. That is an absurd result, particularly where, as is the case in our country, the majority of efficiency, innovation, productivity, and industry is outside of defense.</p><p>Jai Ramaswamy (35:19)</p><p>Now, to be fair, that reflects a change in the structure of the technology industry, right? Because there was a time post-World War II when the government was actually the biggest procurer of technology...and then behind the government, probably corporations, but the consumer was an afterthought.
I think there are even some stories from the 1950s or 60s where Digital Equipment Corporation, who used to create those big mainframe computers, claimed that there would only ever be a need for five computers in the world. Or something; it&#8217;s apocryphal. And so there was a reason the structure existed that way, which was that the government really was the 800-pound gorilla in the room. But that&#8217;s changed now. In fact, consumer applications are probably far more advanced than the government&#8217;s. So I assume that&#8217;s why we need to reorient and rethink how commercial needs and commercial uses should be relevant for the defense sector.</p><p>Matt Cronin (36:21)</p><p>That&#8217;s right. I don&#8217;t know the exact iPhone it was, but something like the iPhone 12 has more computing power than an F-35. Something&#8217;s gone in a really weird direction if your top-level Gen 5 fighter has less compute power than an iPhone that came out years ago. And also note, you&#8217;re really correct in terms of the evolution of it. But what&#8217;s interesting is that originally the Defense Department, the Pentagon, created Silicon Valley, and created it based upon what was then the incentive structure, which was to fail fast, make things quickly, develop rapid prototyping, get things to market as fast as possible, and learn from there. The Pentagon then went in a different direction under McNamara, a very hyper-cautious direction. Not to plug my own work, but I&#8217;ve written about it; feel free to find it online. McNamara took the Pentagon in a very different direction, actually mimicking the Soviets.</p><p>At the time, after Sputnik, everyone thought that the Soviet model of central command authority was better. So everyone went in that direction: the US auto industry, and then the Pentagon. Silicon Valley went in a different direction, based upon how the Pentagon used to think. And it continued in that way, which in part led to all the revolutions in technology in the private sector, whereas the Pentagon just kept slowing down, slowing down, slowing down.</p><p>There&#8217;s something actually very poetic about, in some ways, the child of the Pentagon coming back to the Pentagon in a moment of crisis and saying: you need to come back to how you used to do things. You need to learn what you used to know. And that&#8217;s really, in part, what happened here, leading to, again, not every reform being put through, but many that were really critically important to expanding the defense industrial base and increasing the capacity not only to develop munitions, but to onboard commercial technologies to assist the warfighter.</p><p>Jai Ramaswamy (38:08)</p><p>And with that in mind, how can AI, or how will AI startups and AI technologies, benefit from these kinds of innovations that have taken place in the procurement space? And more importantly, how will the US defense industrial base benefit from that potentially new avenue of acquisition and procurement?</p><p>Matt Cronin (38:30)</p><p>Sure. The chief goal was to make it so that if you wanted to do business with the Pentagon, you don&#8217;t have to add on a 20% cost because of bizarre compliance burdens that have no actual bearing on helping the American taxpayer or making the solution safer. We&#8217;ve achieved a good part of that.
In doing so, that opens up the Pentagon to companies that otherwise cannot afford, or have no interest in, taking on these massive compliance and auditing burdens that, again, have no actual purpose. Through that process, a lot of commercial AI companies can now onboard and have access to the Pentagon. In addition, the Pentagon is now procuring not just for a specific system, but for a solution. That might sound simple, but it&#8217;s a huge difference.</p><p>Jai Ramaswamy (39:20)</p><p>What does that mean?</p><p>Matt Cronin (39:24)</p><p>Yeah, so the default way it used to work: the Pentagon would say, I need the following item, and it would give an incredibly long list that, in reality, would in many cases be written by a defense prime. Then it would go to the public and ask, who can make this exact thing that this company told me to write? And the company that told them what to write would say, I can. They&#8217;d say, great, and after many years of procurement hoops, they&#8217;d get onboarded.</p><p>If instead you procure for a solution, you just say: I need to find a way to keep military bases safe from drones. Who can do that? I don&#8217;t care how. It could be through microwave emissions, it could be through kinetic strikes, it could be through hacking. Just offer me something. So that&#8217;s a simple example of where, all of a sudden, it&#8217;s not written for one company. It&#8217;s written for anybody. You have these bake-offs. People can come in and go through a series of challenges to see if they can offer this.</p><p>Jai Ramaswamy (40:17)</p><p>And interestingly enough, looping back to the beginning of our conversation when you talked about Chinese efforts in the primate space, that would avoid the problem of somebody defining the need and then cabining who can provide the solution to that need. You open it up to the world and say: some people may take an AI approach to something, others may take a hard-technology approach to it. And it would be sort of the Wright brothers versus the wingsuits. Then somebody would win, but it would be organically determined, as opposed to being determined by the person who has, in a sense, skin in the game and is putting their thumb on the scale.</p><p>Matt Cronin (40:48)</p><p>That&#8217;s exactly right. And it was just an infuriating, bizarre system that really was based on central planning. And we&#8217;ve not fully moved away from it. There are still some areas where it happens to a fair degree, but we&#8217;re getting much better. And again, we give huge credit to the chairs of the Senate and House Armed Services Committees who have worked on this. And both the President and the Secretary of War have worked tirelessly to push these things through, despite just incredible pushback from within industry, and even from within the middle layer of the bureaucracy.</p><p>Jai Ramaswamy (41:28)</p><p>Great. Well, we&#8217;ve been through a lot of pretty interesting topics here. In the last couple of minutes,
let me see if I can summarize at a high level where we&#8217;ve ended up, and then I&#8217;d love for you to share any final thoughts, or if I got it wrong, you can let me know.</p><p>Number one, I think it&#8217;s clear that there are very real stakes here as to whether this is Chinese leadership, and by Chinese I mean the Chinese state, leading efforts here, or whether it is the United States and its allies leading in this space.</p><p>Second, there are actually policy levers that we have. This isn&#8217;t a foreordained conflict between two geopolitical rivals that we&#8217;re just going to watch play out like a Greek tragedy. There are actual policy levers that can be used that lean into the American way of doing things: through innovation, through leveraging Silicon Valley, decentralization, and the commercial sector leading the way as opposed to the government leading the way.</p><p>And third, it&#8217;s really critical for the government itself, and for government provision of things like defense, and candidly, we talked about education a little bit as well, that in areas where traditionally the government has led, there are real opportunities for the private sector and for AI to really influence the way American society and American values and priorities develop as well.</p><p>I know it&#8217;s very high level, but is that a good summary of where we&#8217;ve ended up?</p><p>Matt Cronin (43:10)</p><p>That&#8217;s dead on. The only thing I would add is: those of you listening, you&#8217;re blessed to be in a democracy. And if these things matter to you, if that&#8217;s the future that you want, then get engaged at the political level. And if you want to start a startup that you think could address these issues, go for it. You have tools and freedom available to you that almost no human in history has had. So don&#8217;t feel like you have to sit back and just be someone who&#8217;s watching. You can actually directly engage and make a difference.</p><p>Jai Ramaswamy (43:41)</p><p>That&#8217;s great. And I love ending on that note, because I think we hear a lot about what we should be worried about with AI. But the reality is that the potential for things like human flourishing, for releasing the human spirit, is enormous in this space. I want to thank you so much. This has been a great conversation. Super fun. I think you were joking a moment ago, it&#8217;s kind of like all of our conversations. It&#8217;s always a pleasure to talk to you.</p><p>Matt Cronin (44:08)</p><p>Right back at you. Thanks.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.
Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately. </em></p>]]></content:encoded></item><item><title><![CDATA[To Regulate AI Effectively, Focus on How It’s Used]]></title><description><![CDATA[A conversation with Martin Casado on learning from past computing platform shifts, understanding marginal risk in AI, and why open source matters for US competitiveness.]]></description><link>https://a16zpolicy.substack.com/p/to-regulate-ai-effectively-focus</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/to-regulate-ai-effectively-focus</guid><dc:creator><![CDATA[Matt Perault]]></dc:creator><pubDate>Tue, 20 Jan 2026 14:29:56 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/184806048/71509dd4e1caab6488d48c7460b76205.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>One of the core pillars of our <a href="https://substack.com/home/post/p-181837145?source=queue&amp;autoPlay=true">roadmap for federal AI legislation</a> makes clear AI should not excuse wrongdoing. When people or companies use AI to break the law, existing criminal, civil rights, consumer protection, and antitrust frameworks should still apply. Enforcement agencies should have the resources they need to enforce the law. If existing bodies of law fall short in accounting for certain AI use cases, any new laws should be evidence-based, clearly defining marginal risks and the optimal approach to target harms directly. </p><p>In this conversation, we go deeper on what that principle means in practice with <a href="https://a16z.com/author/martin-casado/">Martin Casado</a>, general partner at a16z where he leads the firm&#8217;s infrastructure practice and invests in advanced AI systems and foundational compute. Martin has lived through multiple platform shifts&#8212;as a researcher where he worked on large-scale simulations for the Department of Defense before working with the intelligence community on networking and cybersecurity, a pioneer of software-defined networking at Stanford, and the cofounder and CTO of Nicira, which was acquired by VMware&#8212;giving him a rare perspective on how breakthrough technologies are governed as they develop and scale. </p><p>Martin joins <a href="https://a16z.com/author/jai-ramaswamy/">Jai Ramaswamy</a> and <a href="https://x.com/MattPerault">Matt Perault</a> to discuss how decades of technology policy can inform addressing harmful uses of AI, defining marginal risk in AI, the importance of open source for long-term competitiveness, and more.  
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;10d8fcf9-c739-4fbf-a9be-4c8a7ce36a0a&quot;,&quot;duration&quot;:null}"></div><p>Topics covered:</p><p>01:55: A brief history of recent debates about how to regulate AI</p><p>12:30: Regulating use vs. development: lessons from software and cybersecurity</p><p>15:47: An open question in AI policy today: defining marginal risk</p><p>18:33: Why social media is often the wrong analogy for AI regulation</p><p>20:50: Enforcement tools available for holding bad actors to account</p><p>24:11: Balancing many trade-offs in tech policy</p><p>27:33: The role open source models play in soft power, the future of AI, and global competitiveness</p><p>38:06: Implications of regulatory uncertainty</p><p>41:32: Lawmakers want to act; what can they do now to enact effective policy?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><p><em>This transcript has been edited lightly for readability.</em></p><p>Martin Casado (00:00)</p><p>Open source is always a critical part of the innovation ecosystem because while it&#8217;s not the number one business driver like the proprietary models are, it&#8217;s what&#8217;s used by hobbyists, it&#8217;s what&#8217;s used by academics, it&#8217;s what&#8217;s used by startups. And that tends to be the future. And so this uncertainty in the regulatory environment is keeping U.S. companies from releasing open source models that are strong. And as a result, the next generation, the hobbyists and the academics are using Chinese models. And I think that&#8217;s actually a very dangerous situation for the United States to be in.</p><p>Jai Ramaswamy (00:29)</p><p>To claim that at the outset of the internet, you could have foreseen how social media would develop, be used and misused is kind of a fairy tale. Like that couldn&#8217;t have happened back then. It can only happen once the risks emerge and are known. And then you can figure out what the bad things are that you want to regulate.</p><p>Martin Casado (00:47)</p><p>If we focus on development and we don&#8217;t focus on use, you end up introducing tremendous loopholes because it requires you to describe the system that&#8217;s being developed. And right now there actually is no single definition for AI. And every definition we&#8217;ve used now looks totally silly because it&#8217;s evolving so quickly. So if lawmakers actually want to have effective policy, the only area that you can actually specify is the use of these things.</p><p>Matt Perault (01:16)</p><p>This is a fun conversation for me because I get to ask Martin and Jai some questions about how you guys were thinking about AI policy before I joined the firm.</p><p>So a couple of years ago, the scene was really different than it is today.
Sam Altman&#8217;s testifying in Congress, Brad Smith at Microsoft is talking about things like licensing regimes for AI, an international regulatory agency that would regulate AI just like international nuclear regulatory agencies do. Jai, can you just start by telling us a little bit about how the firm reacted to that? Like, how did we put that in context in terms of what AI policy might look like and what we were concerned about?</p><p>Jai Ramaswamy (01:55)</p><p>I think that for us, the big eye-opener was the Biden executive order that came out at the tail end of the Biden administration. That order did two things that seemed very different to us from what had come before in the regulation of software and computing.</p><p>The first thing: it made a nod in the direction of wanting to regulate fundamental math and computing power, through restrictions on the types of models that would use certain amounts of computing power, right? FLOP thresholds, I think they came to be called.</p><p>And the second was, for the first time, a questioning of the value of open source software, or as they called it, models with open weights. And the reason that was such a shock to, I think, many people who had been involved in earlier debates around the regulation of the internet, the regulation of software, the regulation of encryption, you know, Martin is amongst them, and I think Marc as well, was what seemed like a skepticism, or at least a perceived skepticism, of the way we had regulated software before, which was really to focus on regulating use cases as opposed to regulating underlying software development. In the case of AI, that means regulating model-layer development. And that&#8217;s one of the reasons we became so actively involved, because that distinction has served the country so well. Regulating uses has a long history in regulatory law. We have typically regulated behaviors, human behaviors, and typically bad behaviors, as opposed to regulating invention, creation, and the development of things. And to go down that path raises some real problems for developing new innovative industries without really solving the fundamental problem of bad actors using technologies to do bad things, which is, I think, the thing that people worry about.</p><p>And the way I would analogize this: if you think about one of the biggest risks we face in computing today, it&#8217;s malware, it&#8217;s cybercriminals, it&#8217;s bad actors. And think about the way we address that. Today, the creation of malware itself isn&#8217;t in fact a crime. What&#8217;s a crime is the transmission of software to compromise other computers. And the reason is very simple: a lot of the techniques, a lot of the coding that goes into creating malware, can be used for pen testing, right? It hardens our systems. It can be used for other things. And so it&#8217;s very hard to distinguish at the programming layer, at the model layer, good uses from bad uses. But we can attack bad uses themselves. And that&#8217;s what the Computer Fraud and Abuse Act does. It&#8217;s what we&#8217;ve historically done in this space, and it&#8217;s served us well. There are still bad actors that get through, but we&#8217;ve been able to address those concerns by focusing on bad activity.</p><p>Martin will get into this more because he&#8217;s our guru on most of these things.
But it&#8217;s even harder here, where my understanding is that the coding involved is actually relatively simple. Relatively, not for me, but for folks like Martin. The math here is relatively simple. Again, not for a lawyer, but for people who are in math. It&#8217;s vectors and linear algebra. And so to try to regulate that in the United States is kind of a fool&#8217;s errand, because it will be developed elsewhere, to the detriment of our industry&#8217;s ability to innovate. And it won&#8217;t fundamentally alter the ability of bad people to use these things. So I think this focus is a really, really important one. We shouldn&#8217;t lose sight of the fact that regulating models is not a particularly effective way of achieving what we want to achieve, and at the same time, it really hampers innovation. You want to do something that&#8217;s effective, that achieves the ends you want to achieve. And I think that&#8217;s where we should be heading. But I&#8217;ll let Martin speak a little bit more to why that&#8217;s the case with this particular technology, because I think there are some really interesting things that people would love to learn.</p><p>Matt Perault (07:12)</p><p>Martin, I mean, this is a way to approach regulation that&#8217;s not just a little appealing to policymakers, it&#8217;s really appealing to policymakers. The laws that we are weighing in on often regulate development. They very rarely regulate use. And so if I can try to channel my inner policymaker and give voice to what I think they have in mind: we regulate cars at the state and federal level. We regulate the airline industry at the federal level. You understand software and software regulation really deeply. Why is that model of regulation not the right one when it comes to AI?</p><p>Martin Casado (07:51)</p><p>So I think there are two considerations I want to put out there, given the last couple of years. The first one is that when we got involved in policy efforts, the conversation was so lopsided. So this isn&#8217;t directly answering your question; I&#8217;ll get to your question in a second. But it&#8217;s really important to point out that software regulation discourse goes back to the beginning of computers. We&#8217;ve dealt with some really heavy stuff, like: can you use compute to make nuclear weapons? What are the implications of the internet, right? We&#8217;ve dealt with some really heavy stuff, and we&#8217;ve got a lot of policy as a result of that, and a lot of doctrines and principles around that. And the weird thing about the conversation, at least over the last two years, is that normally you&#8217;ve got this kind of robust discourse with many voices represented. For example, academia has historically been pro-innovation and pro-research. Venture capital has been pro-innovation and supportive of Little Tech. And then of course you have policymakers, you have big companies. They have this very robust conversation. And as a result of that, you kind of figure out what makes sense for everybody. And the strange thing about this conversation is that when we entered it, that balance just was not represented at all. Academia was basically silent.</p><p>VCs were doing this very strange thing where they were kind of anti-innovation, which I&#8217;ve actually never seen before. And so just to begin with, I want us all to realize that where we are right now has not been the result of a robust conversation.
It&#8217;s been very one-sided. We can talk about why that was. A lot of what we&#8217;re going to say here is just: we need all the voices in the room. We need to get back to equilibrium. This is how we&#8217;ve done everything in the past, right?</p><p>So I&#8217;m gonna talk directly to your question. I just wanna make sure everybody understands that a lot of the reason we got involved was to say: hey, wait, maybe academia should be part of this conversation. Maybe Little Tech should be part of this conversation. VCs have historically been pro-innovation; maybe more of us should go ahead and talk.</p><p>So the second consideration, directly to your question: how have we handled these conversations in the past?</p><p>Well, first, the underlying doctrine has been: we&#8217;ve been regulating software for decades, we have these platform shifts, and to build effective policy, we have to understand the marginal differences. That&#8217;s the way we&#8217;ve done it in the past. I&#8217;ll give you an example. When the internet came up, we actually had a number of big, huge shifts happen. We had attacks we&#8217;d never seen before on some of the underlying critical infrastructure. We even had, at a nation-state level, this notion of asymmetry, which means that the more you rely on it, the more vulnerable you are, right? And so we had this entire discourse. The discourse was not: stop working on the internet. It was: let&#8217;s understand the marginal risk. And then, based on that marginal risk, let&#8217;s go ahead and come up with policy. And the reason you want to do it that way is because you trust the policy work that you&#8217;ve done to date. You trust that it still applies. You trust that these are still computer systems.</p><p>And if you don&#8217;t understand the marginal risk, you actually can&#8217;t come up with effective policy. It may not work. It may target the wrong things. It may be counterproductive.</p><p>And so there&#8217;s been this very strange, almost doctrinal shift in AI, where you go to experts like Dawn Song at Berkeley and you ask: what is the marginal risk? What has changed here? And Dawn Song would say, we don&#8217;t know, that&#8217;s a research problem. Well, if you don&#8217;t know that and you do regulation, how do you know it&#8217;s even catching the bad stuff when you haven&#8217;t defined it? Or how do you know it&#8217;s not enabling even worse stuff? And certainly regulation will potentially put a chill on innovation. So you don&#8217;t even understand where that trade-off goes. I would say we have a long-standing approach to dealing with software regulation. Historically, we&#8217;ve used that. And then as new things happen, we build policy on that.</p><p>What you&#8217;re asking is a little bit different, because I would say we actually do regulate compute a lot. You put a computer in an airplane, clearly there are some regulations. So you do add regulations depending on the deployment environment. But when it comes to net-new regulations dealing with very specific risks, you have to understand the risks first.
And this was the part of the conversation that was missing.</p><p>Matt Perault (12:31)</p><p>I&#8217;m curious how we draw the line, and this is something, Martin, that you&#8217;re sort of getting at here: how we think about whether certain things fall on the use side of the spectrum or on the development side of the spectrum. So what about developers that offer applications? OpenAI builds a model and then has ChatGPT. Can you walk through how you think about &#8220;regulate use, don&#8217;t regulate development&#8221; when it&#8217;s mapped onto companies that do both?</p><p>Martin Casado (13:05)</p><p>Yeah, we&#8217;ve got to be a little bit clear on this entire conversation. There&#8217;s development, there&#8217;s use, and then there&#8217;s risks, right? So clearly, if someone came up with a methodology that was damaging, and you could show it was damaging, and you could say that the actual methodology was part of the development&#8230;you&#8217;d say, don&#8217;t do that.</p><p>Matt Perault (13:36)</p><p>Yeah, FraudGPT.</p><p>Martin Casado (13:39)</p><p>Even that would be use. Let&#8217;s imagine that you could develop something that you know you couldn&#8217;t contain. Let&#8217;s say that in this case, you could actually show that there is a risk that doesn&#8217;t exist with today&#8217;s computer systems. We have not shown that at all, not at all. But if you did that, then you may have the conversation of saying: hey, listen, this is an entirely new thing.</p><p>But today, these are just computer systems, and we have a very robust set of regulations on top of those, right? So until that happens, there&#8217;s no point in regulating the development. Now, on the other hand, to your point, we apply these models to all sorts of stuff, and people use computers for all sorts of stuff. And if those uses happen to be illegal, then it&#8217;s very important that they obey the laws governing them.</p><p>Jai Ramaswamy (14:35)</p><p>Yeah. Martin, is what you&#8217;re getting at, and we should just cut to the chase here, that colloquially things have been sort of divided into the doomers, who think that we&#8217;re on the cusp of artificial general intelligence that&#8217;s going to create such a radically new type of computing persona that it raises existential risks. And then there are the people who are engineering these systems, many of whom are saying that what we have today not only doesn&#8217;t approach AGI, but isn&#8217;t actually going to get to AGI. And there seems to be a divide in the conversation amongst people who believe those two things, or at least profess to believe those two things. And what I hear you saying is that if we were creating, and knew we were creating, Skynet, that would be one set of conversations. But where the engineering is today, the emerging consensus is that that&#8217;s not what we&#8217;re creating. There are enormous engineering challenges to get there. The current versions of what we have aren&#8217;t going to get there. I think Yann LeCun mentioned that recently.</p><p>Martin Casado (15:47)</p><p>So let&#8217;s tie this entire conversation together. Maybe this is the synopsis. We can all agree you should not use computers to do illegal things. So it&#8217;s totally sensible to regulate the use of computers.</p><p>That said, the building of systems, maybe you do want to regulate. Maybe you do. But in order to know even what to regulate, you would have to understand the marginal differences.
Otherwise, you couldn&#8217;t even come up with a sensible policy, right? What would you even describe? And that is still an open research question, right? So this kind of summarizes the last two points we&#8217;ve been making. It unifies these two things.</p><p>And so what have we done historically as an industry? What we do is we develop, and we study the marginal risk. We&#8217;ve got a whole deep discipline in cybersecurity. If we come up with marginal risks that are different, then we implement policies around those. But right now, we don&#8217;t have that.</p><p>In fact, SB 1047 in California, which we got very involved in, was trying to regulate large models, and the conclusion of that fight was to create an independent body to study the question. And actually, Dawn Song does such a great job here, because you&#8217;d even put her in the maybe &#8220;Doomer-Curious&#8221; faction. So the Doomer-Curious Dawn Song, professor at Berkeley, long-time security researcher, says: we need evidence-based policies, otherwise we don&#8217;t know what to regulate. The question of marginal risk is still an open research question. Again, this is a world expert, saying let&#8217;s focus on the marginal risk. And that&#8217;s really where we are, I think, in the consensus discourse amongst many of the experts. But that&#8217;s not what percolates up to the policy level.</p><p>Matt Perault (17:54)</p><p>I really like your line about trusting the policy process to date, because I actually think that&#8217;s where a lot of the disagreement is: people who have comfort that existing law will at least provide some ways to address risk and marginal risk, and people who don&#8217;t. And I also think there&#8217;s some part of this community who have said: we tried the wait-and-see, let&#8217;s-look-for-research approach with social media, and that didn&#8217;t work. And so I&#8217;m curious how you think about both of those components.</p><p>What gives you confidence about the policy process to date? And Jai, I&#8217;d love your thoughts on that as well. And then also, when people say, let&#8217;s not repeat the mistakes of social media, what are your thoughts on that?</p><p>Martin Casado (18:33)</p><p>I just want to make sure that we pose the question correctly, because this one is going to be very easy to conflate. There was no separate innovation for social media. I mean, this is literally the internet, and then there was a use, and that use was social media. So in the context of this question: would you give up the entire internet rather than regulate social media, if it turns out social media is bad? I think the broad consensus is social media may be bad for minors; maybe we need protections there. Social media may be bad for different political systems; maybe we need protections there. But that to me is very much a use of the internet.</p><p>I don&#8217;t know, at least, anybody sensible who says we should never have created the internet because of social media. I think that would be very much a minority opinion. So I just wanna make sure that we&#8217;re not answering the wrong question here.</p><p>Jai Ramaswamy (19:25)</p><p>And Martin, riffing off what you said before, I think that&#8217;s a really good example, Matt. Bring yourself back to 1998, or even 2000, or even to when Facebook, the college version, launched. What would you regulate?
Because everything that&#8217;s happened since then was just a glint in the eye of all of these people. Nobody had any conception of what social media was going to become.</p><p>I think, to Martin&#8217;s point, to claim that at the outset of the internet you could have foreseen how social media would develop, be used, and misused is kind of a fairy tale. Like that couldn&#8217;t have happened back then. It can only happen once the risks emerge and are known, and then you can figure out what the bad things are that you wanna regulate. And then you can have a robust conversation amongst different stakeholders.</p><p>Martin Casado (20:23)</p><p>And you could even say, like, listen, we were not aggressive enough. But again, that is about the use, right? Nobody is being like, you should not have done internet research. Nobody&#8217;s saying that. But that&#8217;s what we&#8217;re saying now. We&#8217;re like, you should not do the core research. They&#8217;re not talking about the actual use and application of these things. And so I think anybody that thinks we got it wrong in social networking is drawing the wrong parallel.</p><p>Matt Perault (20:50)</p><p>So Jai, what about the enforcement tools side? You amongst the three of us are the only one who&#8217;s prosecuted cases, who&#8217;s put together cases, who&#8217;s looked to the legal arsenal that you might be able to utilize to try to address harms. What gives you, to use Martin&#8217;s phrasing, what gives you confidence and trust in the policy process that we&#8217;ve had to date?</p><p>Jai Ramaswamy (21:12)</p><p>I think it&#8217;s just the fact that we become smarter about these technologies as we understand how they&#8217;re used. Going back to the early days when the Justice Department was trying to fight cybercrime, you know, there was a particular unit within the FBI through which all kinds of forensic data and forensic tools funneled when you seized a hard drive.</p><p>And these units were few and far between, but as cybercrime became prolific in all types of crime, the tools that every investigator had, and this was on the criminal side, we can talk about the regulatory side separately, had to expand. And so today you&#8217;re probably not a white-collar investigator or a white-collar agent of any sort unless you&#8217;ve got forensic tools and some sort of cyber experience in your background.</p><p>And we know now a lot better how to investigate these cases, how to find perpetrators. The regulatory side, similarly: we know that in the early days, there was a lot of fear of encryption, right? And Martin, I think, lived through these debates at the time.</p><p>Law enforcement was implacably opposed to encryption and wanted backdoors built into it. And all the technologists said, that&#8217;s a mistake, because backdoors aren&#8217;t just used by good guys. They&#8217;re used by bad guys. And not to mention that the internet is fairly porous: you&#8217;re going to need encryption if you want to do things like e-commerce or send private information over the internet. And that has borne out; e-commerce on the internet would be impossible without robust encryption today.</p><p>And what solved the debate in some senses was PGP, to a certain extent, once it became prevalent and useful and people could actually understand it. Has the encryption problem gone away?
No, there are still conflicts between law enforcement and tech companies, but we figured out ways of navigating through this. Not perfect, but ways that don&#8217;t hamper innovation, that don&#8217;t throw the internet out just because a lot of bad stuff happens on it. And we&#8217;ve learned to coexist with these tools through that.</p><p>So that experience gives me some confidence that we can work our way through this. Is it going to be messy? Are there going to be risks associated with this? There are, and there always are risks, and I don&#8217;t mean to minimize the risks. But you have to deal with those risks as they actualize themselves, as they emerge, because it is very difficult for the human mind to conceive of those risks in a vacuum and to know what you would concretely do about them. It&#8217;s hard to know how you would actually manage those things without understanding what the implications of them are. And I think we&#8217;ve done that in various ways.</p><p>Martin Casado (24:11)</p><p>Yeah, I think this is exactly right. And I think a good way to metaframe what Jai just said, which I totally agree with, is, we&#8217;ve hit this equilibrium, where we&#8217;re balancing a lot of things, like what good guys can do versus what bad guys can do, right? Like this is an equilibrium state. Innovation versus safety. This is an equilibrium state. And we&#8217;ve developed a number of policies to maintain this equilibrium state. And within that equilibrium state there&#8217;s a lot of work we can do. Like we may decide certain applications of AI are bad. Like there&#8217;s a lot of stuff we can do in the equilibrium state.</p><p>The question is, do we know enough to change that equilibrium state? Do we know enough to handicap the good guys when the bad people have access to the technology? Do we have enough knowledge to handicap medical innovation for some precautionary concern?</p><p>I think that the reason that we&#8217;re banging the drum, and we have been, is not that we have some ideological bent one way or the other. I think the reason is, there&#8217;s this equilibrium that balances a lot of trade-offs and you should consider all of these trade-offs. And if you&#8217;re going to change that equilibrium in ways that could impact innovation, that could impact medicine, that could impact our ability to defend ourselves, you need really good justification.</p><p>And a good justification is not, well, maybe it&#8217;s dangerous, because this harkens back to the precautionary principle, which has just not worked for innovation, certainly in modern times. It really is an ideological or methodological concern, not an issue-specific concern that we have.</p><p>Jai Ramaswamy (25:46)</p><p>Yes. And there are two data points that are really important. One is, look, who was the first out of the box with AI regulations? It was Europe, right? They put something in place that has had a hugely detrimental impact. And they&#8217;ve had to walk a lot of it back.</p><p>The EU just recently came out with a recognition that the AI framework is flawed and they need to walk it back. Now, I don&#8217;t think they&#8217;re going to repeal every single regulation on the books, but it shows the dangers of moving into this world. And now the thing that they&#8217;re most concerned about is that the US and China will be producing all the models, and Europe won&#8217;t, and their people won&#8217;t even have access to some of those things.</p><p>Martin Casado (26:52)</p><p>I think that&#8217;s exactly right.
You look at where the equilibrium is: we have played with the equilibrium. We have chilled development. We&#8217;ve had a different approach. And I think the net of it, in retrospect, is we&#8217;ve given China a head start. And now if you look at the use of open source AI models, just as one sector, and I&#8217;m not saying open source is particularly special here, just look at it as one indicative sector, they are currently dominant. And so I just feel like we have lost the ability to consider all implications of this. And that&#8217;s what we&#8217;re trying to do: get a robust conversation back in.</p><p>Jai Ramaswamy (27:33)</p><p>Yeah, Martin, can I ask you a question on that? I&#8217;d love to follow up on that question, the open source question, because it seems to me that, you know, if you wanted to take a skeptical view of this, you could be like, well, you know, China was just better at this stuff. It doesn&#8217;t really have to do with regulation. But I think you&#8217;ve been seeing what happens on the ground with researchers when they&#8217;re faced with this kind of uncertain regulatory environment and what it does.</p><p>So could you put some concreteness behind that? As we&#8217;ve spoken about on other podcasts, we have, you know, 1,200 bills in the states. We had a Biden EO that existed and no longer exists, but we now have federal legislation that may come to bear, and there&#8217;s a bunch of uncertainty. So how does that play into people who are developing this software, and how does it lead to a chilling effect? Because I just think that some sort of concreteness around that would be helpful for people to understand.</p><p>Martin Casado (28:31)</p><p>Yeah, and listen, I don&#8217;t want to trivialize the situation. These are incredibly complex. They&#8217;re multifarious. They include a lot of stuff, right? And so I think to try and reduce it to &#8220;it&#8217;s just regulation&#8221; is incorrect, right? But I do think it matters.</p><p>So let me start by saying the leader in AI is the United States. The United States&#8217; closed source models from OpenAI and Anthropic and Google and the rest are the best in the world, and nobody&#8217;s even close, right? But, you know, closed source or proprietary systems tend to be the primary capital drivers as far as business goes, but they&#8217;re not the primary innovation drivers for hobbyists, researchers, academics. A lot of the industry tends to get pushed forward by open source, and that ends up becoming the future, right? We saw this with operating systems, Linux very famously. So it&#8217;s a very, very important part of the ecosystem.</p><p>Matt Perault (29:24)</p><p>So can you give us a sense of what&#8217;s the impact of regulation on open source development? Open source seemed to be under threat a couple years ago, then there was this DeepSeek moment and everyone sort of backed off.</p><p>Martin Casado (29:36)</p><p>It&#8217;s really weird, because the people talking against open source were not the people you&#8217;d expect. It&#8217;s like researchers&#8230;VCs&#8230;these have always been the champions of open source. And so it&#8217;s been a very strange time.</p><p>So here&#8217;s the thing. The United States is the lead in AI by far. We have the best models in the world, but they tend to be proprietary, right? And China has done a great job with open source, which you can actually expect from a number two, right?
This is the classic case in technology: the leader does the proprietary version. You can run faster that way. You can verticalize better that way. And then the number two says, well, we&#8217;ll do open source as an approach.</p><p>The question is, why have we not caught up, or why don&#8217;t we have anything right now? China&#8217;s running away with open source models. I think a lot of it has to do with regulations and legal action due to things like copyright. So for example, when you do see these big labs release open source models, and they&#8217;re the ones that would release the best ones, a lot of the time you could tell they were trained with synthetic data. And why would they do that? Well, I would surmise that the reason you do that is because you don&#8217;t want somebody&#8230;and we have a lot of spurious lawsuits like this&#8230;just basically trying to pull out proprietary images and then suing the company. So there are actually a lot of risks to putting out open source models if you&#8217;re a US company. And so I think we&#8217;re already seeing a bit of a chilling effect based on the uncertainty.</p><p>And I would say more than any given rule, the uncertainty of where the landscape is going to go has had a chilling effect that keeps our big labs and our big companies, which are the leading organizations in the world, from putting out open source models. So I think that&#8217;s very, very clear at this point. So instead they stay proprietary. So we&#8217;re still in the lead. But the problem is this: open source is always a critical part of the innovation ecosystem. Because while it&#8217;s not the number one business driver, like the proprietary models are, it&#8217;s what&#8217;s used by hobbyists, it&#8217;s what&#8217;s used by academics, it&#8217;s used by startups. And that tends to be the future. And so this uncertainty in the regulatory environment is keeping US companies from releasing open source models that are strong, which is very, very clear. And as a result, the next generation, the hobbyists and the academics, are using Chinese models. And I think that&#8217;s actually a very dangerous situation for the United States to be in.</p><p>Matt Perault (32:06)</p><p>Can you talk a little bit about the timeline for when you see the effects of this? One thing that I feel like we often hear in tech policy is people will say such and such happened and the sky didn&#8217;t fall.</p><p>Martin Casado (32:20)</p><p>So my job is to sit here while companies come in and pitch us the next generation of companies. And of those, one in 10 is going to do a great job, and one in 20 is going to be a very successful company, and maybe one in 50 is going to be a public company. So this is a view into the future. And if they&#8217;re using an open source model, I would say 80% of the time it is a Chinese model. And listen, we can opine on why that&#8217;s dangerous. I mean, here&#8217;s a very simplistic description, without going into sci-fi or any sort of, you know, scary scenario. If I&#8217;m a Chinese company that&#8217;s providing open source models, and I want to have an advantage, I just keep the largest model and give myself a six-month advantage with it before I release it. And everybody&#8217;s now dependent on you. And you have a built-in head start for the most powerful models as they come out.
It&#8217;s just so trivial to see why this is a bad idea for the United States and a bad idea for the ecosystem.</p><p>And so I will say today it is absolutely the case that they are dominant for open source models, just open source models, but that&#8217;s such an important piece. The advantage, again, without going into scariness about backdoors or any of the other stuff, is literally just release cadence. They can put United States startups behind.</p><p>Jai Ramaswamy (33:34)</p><p>There&#8217;s a political component too that we just have to be aware of, which is that it matters very much, not just for the dominance of a particular country or a particular set of industries, that the U.S. leads in this area. If you think about the implications that open source had more broadly for soft power, for open societies, I think of AI as that on steroids, because it operates as sort of a control layer for computing and will be the interface through which human beings interact with computers going forward. It can shape the way that information is produced and shape the way that information is perceived. And so I do think that it matters very much whether the advantage goes to China and models that are imbued with its values, versus a much more open view of information and the way information should be shared and disseminated. And look, it&#8217;s a trivial example, but I think an important one, in terms of Tiananmen Square and how that&#8217;s represented in Chinese models versus elsewhere. And I think that you can see that playing its way out over the decades as AI becomes more important. So there&#8217;s a geopolitical angle here that the US really should be taking very, very seriously. And in some ways, China&#8217;s playing, it feels to me at least, a different game, a game that they played with Huawei and with others, which is they understand that adoption is the thing that matters, right? If you want to create network effects, if you want to create dominance, adoption matters, and open source is one way of achieving that adoption.</p><p>And while we&#8217;re bickering about, you know, what states should regulate, whether the federal government should do this or that&#8230;they&#8217;re off achieving a level of penetration and a level of adoption that puts them ahead. And even if they&#8217;re not the best models, they&#8217;re the models that people want to use. It&#8217;s kind of like Betamax versus VHS, you know; it doesn&#8217;t matter in some senses. The best model is one that&#8217;s cheap enough that people can use it and can put it into their products. And that will have implications down the road. And I think that that&#8217;s a super important risk that&#8217;s sometimes underestimated as we talk about the other risks.</p><p>And so to Martin&#8217;s point, policymaking is always a trade-off. There are always puts and takes. And, you know, as somebody who studied economics early in my academic career, I can tell you that there are no perfect solutions. There are just equilibria of trade-offs.</p><p>So I think robust debate is the important thing. And I do think that there are large portions of this that aren&#8217;t being debated enough, in deference to risks that feel at least more remote than the prosaic risks that we&#8217;re seeing happen every day and that we should be addressing more urgently. And this isn&#8217;t a call for no regulation.
I think that&#8217;s the canard that&#8217;s often thrown around. The real question is what&#8217;s going to be effective, and what&#8217;s the right balance between different competing objectives?</p><p>Martin Casado (36:56)</p><p>Yeah, this is the perfect example of why the equilibrium is so hard. Even for the smartest people, it doesn&#8217;t follow intuition or reason, and therefore we need to focus more on precedents, right? So there&#8217;s this question: should we focus on precedents? This is a great example. So historically, we would not be anti-open source, at least for a large portion of the constituency.</p><p>And some very, very smart, knowledgeable people were anti-open source in this wave. And they would write articles and they&#8217;d say, hey, listen, if we do open source, we will enable China. So it&#8217;s a departure from historical precedents. The rationale, which appeals to intuition, is that we&#8217;re enabling China. And the result is counterproductive: China actually got ahead, right? And so I think when we have these new platform shifts with new technologies, we should follow what we&#8217;ve done in the past. And then if we&#8217;re going to have a departure, we should really understand why it&#8217;s different in order to do that, because these equilibriums are so tough to understand from the outside.</p><p>Matt Perault (38:06)</p><p>So why does all this matter for Little Tech? That&#8217;s the focus for us. So when we talk about regulating use versus regulating development, what is it about development-focused regulation, the kinds of proposals we&#8217;ve seen like FLOPS-based thresholds or impact assessments or audits or determining liability, that really affects companies at the stage of development that, Martin, you see so much of?</p><p>Martin Casado (38:33)</p><p>I mean, the bottom line is that uncertainty really is death for startups. And we see it all the time. So for example, in the last two weeks, I actually had a very AI-forward VC pull a term sheet from a startup, which is probably going to kill the startup, because they&#8217;re just uncertain about the regulatory environment. And so I think it&#8217;s not even the regulation yet that&#8217;s problematic.</p><p>The problem is, because we don&#8217;t know what to regulate, because we don&#8217;t know what the marginal risk is, there&#8217;s just a bunch of proposals out there. We don&#8217;t even know where they&#8217;re going to land. We don&#8217;t know how to think about it. We have no framework. And that has really chilled the funding environment, certainly. But the same is true for the customer adoption environment. It&#8217;s true for the hiring environment, right? I will tell you, I hire engineers all the time into AI startups. The regulatory question comes up all of the time. There&#8217;s kind of no aspect of a startup that isn&#8217;t impacted by uncertainty in the regulatory environment. And that&#8217;s exactly the environment we&#8217;re providing.</p><p>Jai Ramaswamy (39:38)</p><p>I think that the important thing from a startup&#8217;s perspective is that they focus on building things, right? Like the thing that the startup is most worried about is not existing tomorrow. And obviously startups don&#8217;t want to break the law. So they&#8217;re going to follow whatever laws are on the books. The problem becomes what the opportunity cost of that is.
The reality is that if you&#8217;re a large company and have resources and are well-funded and can deploy armies of lawyers, your innovation will slow down, but it&#8217;s not going to stop. And if you&#8217;re a startup and you&#8217;re just a couple of people, it becomes very, very hard to operate in an environment that has layers and layers of regulations, different regulations in different states, layered on top of uncertainty at the federal and state level. You&#8217;re sort of left with, okay, what can I do and what can&#8217;t I do, when all I want to do is build?</p><p>And that really does give an enormous advantage to incumbent players that are well-resourced, that have revenue streams, that have adjacent products where they already have product-market fit. If you&#8217;re Google, you&#8217;ve got your advertising cash cow. So, you can do whatever you need to do to invest in this space. But if you don&#8217;t have anything and you&#8217;re trying to create an industry from scratch, to me, at least, that seems to speak for itself in terms of how those two types of companies would fare in a regulatory environment that&#8217;s uncertain and that&#8217;s prohibitive, or at least dissuasive, of innovation.</p><p>Matt Perault (41:13)</p><p>Jai, you&#8217;re articulating the things that we should stay away from: regulating development, regulating the underlying innovation. When you&#8217;re sitting across from a lawmaker who says, I&#8217;m hearing from constituents that they&#8217;re concerned about this technology, I feel an urgent need to do something to protect my constituents, what are the kinds of things that you say in response?</p><p>Jai Ramaswamy (41:32)</p><p>I think it&#8217;s a two-fold inquiry. I think the first thing to really explain is that there are a lot of regulations and laws that already apply to AI. There are general-purpose use restrictions on all kinds of bad activity, and you don&#8217;t get a pass from them just because you happen to be using AI now.</p><p>And candidly, if you were to create a very specific LLM-focused definition of abuse, in about a generation of models it&#8217;s going to be irrelevant, because the definition won&#8217;t apply to the world models that are now being produced or whatever is going to come after that. And so I do think that the response is to say, look, let&#8217;s figure out what is prohibited today and what you&#8217;re afraid of. In Martin&#8217;s language, the marginal risk, right? What is not covered today by the laws that we have on the books? You can&#8217;t discriminate using models. You can&#8217;t do unfair lending using models. You can&#8217;t, you know, differentially hire. And we&#8217;ve seen examples where hiring was affected by these models and they were pulled back. All of those laws still apply, and you don&#8217;t get a pass just because you have an AI model that&#8217;s being used to implement your practice. So that&#8217;s the number one thing. And so where are the gaps in those laws? Let&#8217;s identify those gaps. And there may very well be some, but let&#8217;s see what they are. And then sure, you should pass laws.</p><p>And this is the key part of it. You should pass laws of general applicability that are technology-neutral, so they don&#8217;t become obsolete pretty quickly, especially with a technology that&#8217;s innovating so rapidly. You want general-use, tech-neutral types of laws. Let&#8217;s figure out what those gaps are and let&#8217;s pass those laws.
And those will tend to be use-based laws for the most part.</p><p>And again, this kind of gets back into a question of federal-state relationships that I know you and I have written about. Some of those are gonna be passed by the states. States have plenary police powers to protect their citizens for consumer protection purposes, for all sorts of purposes. Some of that will be federal, and that balance between what the federal government should do and what the state government should do will be sort of hashed out. But you&#8217;ve got to focus on where the gaps are in the law, as opposed to assuming that, you know, I got to run, I got to do something, I got to pass something, which might end up confusing things more than anything else. I think that&#8217;s the key: what is not covered today that you want to be covered, and how do you put a framework in place that addresses the bad uses, the misuses, of these things.</p><p>Martin Casado (44:12)</p><p>I think the last thing that is really important to impress on lawmakers is if we focus on development and we don&#8217;t focus on use, you end up introducing tremendous loopholes, because it requires you to describe the system that&#8217;s being developed. And right now there actually is no single definition for AI. And every one we&#8217;ve used in the past now looks totally silly, because it&#8217;s evolving so quickly.</p><p>So if the lawmakers actually want to have effective policy, the only area that you can actually specify is the use of these things. And so it&#8217;s kind of a fool&#8217;s errand, even on the surface of it, to try and do it based on a set of actions, because it is just not a single system.</p><p>Matt Perault (44:57)</p><p>Jai, Martin, thanks so much.</p><p>Jai Ramaswamy (44:59)</p><p>It&#8217;s been a pleasure. Thanks, Matt.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately.
</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[A Roadmap for Federal AI Legislation: Protect People, Empower Builders, Win the Future]]></title><description><![CDATA[Listen now | Debates in Washington often frame AI governance as a series of false choices: they pit innovation against safety, progress against protection, federal leadership against the rights of states.]]></description><link>https://a16zpolicy.substack.com/p/a-roadmap-for-federal-ai-legislation</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/a-roadmap-for-federal-ai-legislation</guid><dc:creator><![CDATA[Jai Ramaswamy]]></dc:creator><pubDate>Wed, 17 Dec 2025 10:05:32 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/181837145/7756fd3ca017346c3b2bf0815a3f03ce.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Debates in Washington often frame AI governance as a series of false choices: they pit innovation against safety, progress against protection, federal leadership against the rights of states. But at a16z, we believe these are not binaries. In order for America to realize the full promise of artificial intelligence, we must both build great products and protect people from AI-related harms. Congress can and should design a federal AI framework that protects individuals and families, while also safeguarding innovation and competition. This approach will allow startups and entrepreneurs, who we call Little Tech, to power America&#8217;s future growth while still addressing real risks.</p><p>At a16z, we take a long view. Our funds have a 10 to 20 year life cycle, which means we care about investing in trustworthy products, strong businesses, and durable markets that will still be thriving years from now. Pursuing short-term valuations at the expense of sustainable tools and healthy markets is bad for the founders we invest in, bad for the investors who trust us with their capital, bad for our firm, and most importantly, bad for American people and businesses. A boom-and-bust cycle that results in AI products that are insecure, unsafe, or misleading would be a failure, not a triumph.</p><p>Federal AI legislation should lead us in a different direction, where AI empowers people and delivers social and economic benefits. Smart regulation is essential to ensuring that AI can help our society thrive in the long run and that American startups can compete on the global stage. If there&#8217;s one thing central to this vision, it is competition: without competition, consumers get worse products, slower progress, and fewer choices. And Little Tech is central to competition: without startups, large, deep-pocketed incumbents will control the market.</p><p>The vision is clear. The question is how to achieve it.</p><p>A critical first step is enacting federal AI legislation that sets a clear standard for AI governance. 
We&#8217;ve <a href="https://a16z.com/category/policy-and-regulation/ai-policy/">written</a> <a href="https://a16zpolicy.substack.com/">and</a> <a href="https://www.youtube.com/watch?v=ZISvCnGmq_s">discussed</a> <a href="https://a16z.com/a16zs-recommendations-for-the-national-ai-action-plan/">what good</a> <a href="https://a16z.com/regulate-ai-use-not-ai-development/">AI policy</a> <a href="https://x.com/mattperault/status/1929644165517684937?s=46&amp;t=y9m4_DldoFl9BGSUdmLFrw">looks like</a> <a href="https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures/">for Little Tech</a>, but with both Republicans and Democrats now calling for congressional action, it&#8217;s time to put the key elements in one place. The nine pillars below translate that work into a concrete policy agenda that can keep Americans safe while keeping the U.S. in the lead.</p><ol><li><p>Punish harmful uses of AI.</p></li><li><p>Protect children from AI-related harms.</p></li><li><p>Protect against catastrophic cyber and national security risks.</p></li><li><p>Establish a national standard for AI model transparency.</p></li><li><p>Ensure federal leadership in AI development, while protecting states&#8217; ability to police the harmful use of AI within their borders.</p></li><li><p>Invest in AI talent by supporting workers and educating students.</p></li><li><p>Invest in infrastructure: compute, data, and energy.</p></li><li><p>Invest in AI research.</p></li><li><p>Use AI to modernize government service delivery.</p></li></ol><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;baabd367-57b7-4afa-9ece-cbf46e683f01&quot;,&quot;duration&quot;:null}"></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><h2>1. Punish harmful uses of AI</h2><p>AI should not serve as a liability shield. When bad actors use AI to break the law, they should not be able to hide behind the technology. </p><p>If a person uses an AI system to commit fraud, they have still committed fraud. If a company deploys an AI tool that discriminates in hiring or housing, civil rights law should apply. If a firm uses AI in ways that are unfair or deceptive, that conduct should remain within the reach of state and federal consumer-protection law. <strong>The core principle is simple: AI is not a &#8220;get out of jail free&#8221; card.</strong></p><p>A federal AI framework should make that principle explicit:</p><ul><li><p><strong>Ensure that criminal codes, civil rights statutes, consumer protection law, and antitrust apply to cases involving AI.</strong> In many of these areas, states and the federal government have overlapping jurisdiction. In consumer protection law, for instance, the Federal Trade Commission enforces prohibitions on unfair and deceptive trade practices (UDAP), while many state attorneys general also enforce their own UDAP statutes. 
</p></li><li><p><strong>Direct the Justice Department and other state and federal enforcement agencies to map how those tools work in AI-related cases, identify gaps, and recommend targeted fixes where necessary.</strong> If existing bodies of law do not account for certain AI use cases, Congress may need to step in to fill those gaps. Any new law that targets AI-related harms should focus on marginal risk, and use an evidence-based approach to identify the gaps that need to be filled and the optimal approach to filling them. </p></li><li><p><strong>Provide agencies with the resources&#8212;budget, headcount, and technical expertise&#8212;to actually bring these cases.</strong> In some cases, public-private partnerships may be valuable in providing technical expertise to ensure that prosecutors can enforce existing law and that judges can recognize AI-based violations when they occur. </p></li></ul><p>Of course, prohibiting people and companies from using AI as a liability shield does not mean that they should be unable to defend themselves. Defendants should still be permitted to use any defenses available in statute or at common law, and in negligence cases, judges should still take account of whether defendants enacted good-faith measures and safeguards&#8212;consistent with applicable best practices for their industry and company size&#8212;in determining legal liability.</p><h2>2. Protect children from AI-related harms </h2><p>AI can harm anyone, but children are uniquely vulnerable. Minors may be less equipped than adults to protect themselves, and when harms occur, the consequences may be more severe. Because of these vulnerabilities, lawmakers should consider enacting additional protections for children. </p><p>As with other online services, children under the age of 13 <a href="https://epic.org/issues/data-protection/childrens-privacy/">should be prohibited from using AI services, absent parental consent</a>. It should be noted that because of the challenges of obtaining consent, <a href="https://www.tiktok.com/safety/en/guardians-guide?utm_source=search_SEM_tag&amp;utm_campaign=minorsafety&amp;gad_source=1&amp;gad_campaignid=20471031496&amp;gbraid=0AAAAApbYvmzhhybj6HLNkx2jJq-IEAskV&amp;gclid=Cj0KCQiArt_JBhCTARIsADQZaylR3tAqlLbV6ZBpa66N6Qy7jY_NDCf6a_pYo9Tcn9GQIOn13Yqjwc0aAvFTEALw_wcB">most</a> <a href="https://help.instagram.com/3237561506542117">technology</a> <a href="https://help.x.com/en/rules-and-policies/information-for-parents-and-minor-users">services</a> prohibit use entirely for younger children. Other minors&#8212;anyone between 13 and 17 years of age&#8212;using AI tools should receive additional protections when providers know users are minors. </p><p>In those cases, providers should offer parents meaningful controls: the ability to set privacy and content settings; to impose usage limits or blackout hours; and to access basic information about how a tool is being used. Providers should also present minors with clear disclosures about what the system is and what it is not: that it is AI, not a human; that it is not a licensed professional (for instance, not a licensed mental health care provider); that it is not intended for crisis situations such as suicidal emergencies; and that it is not a replacement for licensed mental health care.</p><p>In imposing these requirements, lawmakers should be careful to avoid blanket prohibitions on minors&#8217; ability to use AI.
As <a href="https://www.google.com/search?q=california+1064+veto+statement&amp;rlz=1C5GCCM_en&amp;oq=california+1064+veto+statement&amp;gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRiPAtIBCDQ3NjBqMGo3qAIAsAIA&amp;sourceid=chrome&amp;ie=UTF-8">California Gavin Newsom said</a> when he vetoed a misguided proposal that would have severely constrained minors&#8217; ability to access and use AI products, &#8220;We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.&#8221; Lawmaking should be careful not to confuse disempowerment for protection, and should stay within constitutional bounds.</p><p>Lawmakers should also require providers to develop protocols for how they handle certain situations, such as instances where a user expresses suicidal ideation or a desire to self-harm. Providers should be required to include information in these protocols on how they will refuse to help users harm themselves&#8212;including providing information about methods for committing suicide&#8212;and how they will refer users in crisis to suicide prevention resources.</p><p>Beyond these responsibilities, lawmakers should also consider ensuring that civil and criminal penalties can be imposed in cases involving harm to minors, such as if AI is used to solicit or traffic a minor. Similarly, prohibitions on assisted suicide should not permit exemptions for cases involving AI.</p><h2>3. Protect against catastrophic cyber and national security risks</h2><p>Federal legislation should also improve the government&#8217;s understanding of AI&#8217;s marginal risks in high-stakes domains like national security. One option is to direct a technical, standard-focused federal government office to identify, test, and benchmark national security capabilities&#8212;like the use of AI in chemical, biological, radiological, and nuclear (CBRN) attacks or the ability to evade human control. That work should involve consultation with independent experts and AI researchers to understand existing risks and to establish assessment procedures. Building this type of measurement infrastructure will help ensure that policy responses are proportionate: capabilities should be managed based on evidence, not headlines. The same evidence-based approach should guide how policymakers think about AI&#8217;s role in both offensive and defensive cyber operations.</p><p>AI is poised to enhance the ability of nation-states, transnational cybercrime organizations, and lone wolves at greater scale and with increasing sophistication. As AI technologies become more accessible, they allow even those with minimal technical skills to carry out sophisticated attacks on critical infrastructure. As a result, while only the most sophisticated nation state actors and cyber-criminal organizations engage in such attacks today, AI could allow a greater number of nation-state and other threat actors to do so in the future. But unlike some technologies that create asymmetric offensive and defensive capabilities, AI does not create net-new incremental risk since AI enhances the capabilities of both attackers and defenders. A federal framework must empower, not hamstring, the defensive use of AI. Limiting our defensive strategies can create artificial asymmetries that make it easier for attackers to target critical infrastructure. 
</p><p>Information sharing among AI companies about the potential misuse of models for cybercrime is a critical countermeasure in combating cyberattacks, but antitrust concerns can limit how much information is shared. Targeted exceptions to permit such sharing where necessary are therefore an essential safeguard. The financial system is particularly exposed to cyberattacks because of the central role it plays in monetizing such activity. Yet financial institutions are hamstrung by archaic model validation rules that frustrate the implementation of AI defenses. Legislative and regulatory changes should be enacted to remove these barriers. Finally, the government should procure and deploy state-of-the-art defensive AI solutions. </p><h2>4. Establish a national standard for model transparency</h2><p>Transparency can help people make informed choices about the AI products they use. Just as nutrition labels provide basic information that gives consumers the ability to make good choices about the food they eat, disclosing a set of &#8220;AI model facts&#8221; can help people make good choices about how they use AI models.</p><p>At the same time, <a href="https://a16zpolicy.substack.com/p/ai-and-the-first-amendment">government mandates that require companies to disclose information can present challenges</a>. Government-imposed disclosure rules face constitutional constraints: they may be unconstitutional if they obligate companies to make disclosures that are not purely factual, that are controversial, or that are unduly burdensome. Overly broad or onerous mandates are especially challenging for Little Tech, which cannot absorb compliance costs the way large incumbents can. For Little Tech, burdensome disclosure requirements threaten their ability to compete. As Jennifer Pahlka, a former White House deputy chief technology officer, <a href="https://www.recodingamerica.us/">has written</a>, &#8220;<a href="https://www.recodingamerica.us/concepts">paperwork favors the powerful.</a>&#8221;</p><p>Mandates also might be problematic if they fail to provide consumers with useful information. Transparency for transparency&#8217;s sake adds costs without adding value. Lawmakers should design any transparency obligation with people in mind: what information enables people to make decisions that are consistent with their preferences? </p><p>If the goal is to require transparency that is useful for consumers, lawful, and not unduly burdensome for startups, then lawmakers <a href="https://a16z.com/ai-model-facts-transparency-that-works-for-little-tech/">should consider requiring disclosure</a> of the following information for the developers of base models: </p><ul><li><p>Who built this model?</p></li><li><p>When was it released and what timeframe does its training data cover?</p></li><li><p>What are its intended uses and what are the modalities of input and output it supports?</p></li><li><p>What languages does it support?</p></li><li><p>What are the model&#8217;s terms of service or license?</p></li></ul><p>Less powerful models should be exempted from this requirement, and disclosures shouldn&#8217;t require a company to reveal trade secrets or model weights. </p><h2>5.
Ensure federal leadership in AI development, while protecting states&#8217; ability to police harmful use of AI</h2><p>Recent debates about AI governance often present federal and state roles as mutually exclusive: the federal government has sole authority to regulate AI because it involves interstate commerce, or states have unbounded authority to regulate AI because states are laboratories of democracy and Congress has not yet enacted comprehensive AI legislation.</p><p>Neither extreme captures how the <a href="https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/">Constitution allocates power</a> between state and federal governments: both states and the federal government have important roles in regulating AI. Congress should craft rules that govern the national AI market, while states should regulate harmful uses of AI within their borders. That means that Congress should take the lead in regulating model development, since open source and proprietary tools will necessarily travel across state lines. It also means that states should have the ability to enforce their own criminal and civil laws to prohibit harmful uses of AI in areas like consumer protection, civil rights, children&#8217;s safety, and mental health. And in some areas that traditionally fall within the domain of state lawmakers, like insurance and education, states may take the lead.</p><p>A federal framework can help to clarify these respective roles by expressly establishing congressional leadership in regulating AI development, while including safe harbors to confirm that states retain the ability to regulate AI use and to adjudicate tort claims.</p><p>Clear rules help in both directions. Developers get predictable rules for building and deploying models, and states maintain the tools they need to protect residents from concrete harms.</p><h2>6. Invest in AI talent by supporting workers and educating students</h2><p>Realizing AI&#8217;s economic and social potential requires an <a href="https://a16z.com/a-policy-blueprint-for-us-investment-in-ai-talent-and-infrastructure/">AI-ready workforce</a>. That means supporting our workers and students in making the transition to an economy where success depends on possessing AI skills, just as being able to use the internet is essential for economic success today.</p><p>Supporting the transition to an AI-ready workforce includes several components:</p><ul><li><p><strong>Supporting workforce development initiatives</strong> that provide training on the use of AI technologies for workers, including by supporting reskilling and upskilling programs and by implementing partnerships with the private sector to offer industry-recognized certifications and clear on-ramps to jobs.</p></li><li><p><strong>Establishing public-private partnerships</strong> to create opportunities for AI-ready workers and to support curriculum development modeled on relevant real-world skills.</p></li><li><p><strong>Creating programs that provide certifications, apprenticeships, and internships</strong> to close the gap between classroom learning and practical, employable skills.
Lawmakers should modernize the 80-year-old National Apprenticeship Act, as the current system wasn&#8217;t designed for new technologies like AI.</p></li><li><p><strong>Implementing AI literacy in K-12 curricula</strong> to empower future generations of Americans to succeed in an AI-driven economy, including by strengthening STEM education, introducing age-appropriate machine learning concepts, and promoting responsible use of AI tools.</p></li></ul><h2>7. Invest in infrastructure: compute, data, and energy</h2><p>A federal framework can play an important role in making AI markets more competitive. One option is to establish a <a href="https://a16z.com/a-policy-blueprint-for-us-investment-in-ai-talent-and-infrastructure/">National AI Competitiveness Institute</a> (NAICI) that can help lower barriers to entry for entrepreneurs, small businesses, researchers, and government agencies. NAICI could offer access to compute, curated datasets, benchmarking and evaluation tools, and open-source software environments. Shared infrastructure of this kind reduces redundancy and gives smaller projects a credible way to experiment, iterate, and grow.</p><p>Open data sets might be particularly valuable. NAICI users might have access to open data repositories of non-personal data, and the government might ensure that these data sets include access to government-funded research. As part of this initiative, the government could prioritize making its own data sets available for AI training and research, where lawful and appropriate, and could create an &#8220;Open Data Commons&#8221; of data pools that are managed in the public&#8217;s interest.</p><p><a href="https://a16z.com/speed-to-power-an-energy-policy-agenda-for-a-thriving-ai-market/">Energy</a> is another structural constraint. Large-scale AI models are compute- and energy-intensive, so a federal framework should help to increase energy abundance, while also ensuring that startups are not priced out or crowded out. Energy policies should be structured so that neither consumers nor Little Tech is saddled with the costs of hyperscalers&#8217; energy needs without seeing commensurate benefits. </p><h2>8. Invest in AI research</h2><p>The relationship between academic research and AI product development has always been tight. Breakthroughs in universities and public labs often seed the companies and tools that define each new generation of technology. Supporting that research is therefore critical to long-term innovation in both the public and private sectors.</p><p>Government support should prioritize foundational and disruptive AI research. That could include dedicated funding streams for moonshot projects&#8212;high-risk, high-reward efforts that challenge current paradigms&#8212;and a balanced portfolio that spans near-term, medium-term, and long-term horizons.</p><p>Promising topics range widely: how to design effective worker-retraining programs for an AI-intensive economy; the role of open-source tools in promoting competition and security; the use of AI to defend against cyber threats; and the potential for AI to improve the delivery of government services. Structured, public research on these questions can inform policy and shape more effective products.</p><p>To maximize impact, federal grants should, where possible, require that non-sensitive research data be shared in machine-readable formats under licenses that permit AI training and evaluation. Making this research available will turn public funding into public infrastructure.</p><h2>9.
Use AI to modernize government service delivery</h2><p>AI has the potential to improve how the government operates and how it delivers services to the public. Each federal agency should develop a clear, time-bound plan for how it will use AI to improve operations&#8212;both by enhancing impact and lowering costs&#8212;while maintaining public trust. </p><p>As part of this plan, agencies should conduct regular assessments of their workflows to identify where AI can automate routine tasks, improve analysis of large datasets, and support better decision-making. In some cases, agencies may need to procure AI tools to assist them with these modernization efforts. <a href="https://a16z.com/defense-reform/">Any procurement process </a>should be designed so that it is accessible to Little Tech, and should not prohibit the acquisition of open source tools where appropriate. </p><p>Agencies should also implement pilot projects that allow them to test and evaluate AI tools in specific functions before deploying them at scale. These pilots should include clear metrics for evaluating impact. Where appropriate, agencies should consult with external experts on design, implementation, and evaluation of these pilot projects.  </p><p>Any internal government use of AI should adhere to usage policies promulgated by the Office of Management and Budget. These policies should be updated regularly to reflect lessons learned from pilots, agency implementations, and evolving technical and legal standards.</p><h2>A call to action: Congress should enact federal AI legislation</h2><p>The time for congressional action is now. Millions of Americans use AI regularly, and there is an increasingly broad consensus that this technology has the power to benefit our economy and society. We know the U.S. must win the global AI race. Americans want their representatives to act to create a safe, thriving market, one that positions America to lead the world in AI.</p><p>Inaction poses other risks. Staying on our current path will produce AI markets that are less competitive and more concentrated, and will therefore compel people to use AI products that are worse and less innovative. </p><p>Congress doesn&#8217;t have to decide between protecting people and protecting competition. With the right priorities and policies in place, it can do both: create a comprehensive framework to protect kids and adults from the harms of AI, while keeping the door open for new entrants to build, innovate, and succeed.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.  Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. 
You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately. </em></p>]]></content:encoded></item><item><title><![CDATA[Beyond Preemption: Lessons from the 1996 Telecom Act]]></title><description><![CDATA[Watch now | Adam Thierer and Blair Levin share their hard-earned lessons from this landmark tech legislation]]></description><link>https://a16zpolicy.substack.com/p/beyond-preemption-lessons-from-the</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/beyond-preemption-lessons-from-the</guid><dc:creator><![CDATA[Matt Perault]]></dc:creator><pubDate>Fri, 12 Dec 2025 17:02:24 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/181306223/5faa026dfd18014d3ae1f9f093d1174c.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>If you squint at today&#8217;s AI policy debates, you may see the <a href="https://www.fcc.gov/general/telecommunications-act-1996">Telecommunications Act of 1996</a> in the distance.</p><p>In this conversation, <a href="https://x.com/MattPerault">Matt Perault</a>, head of AI policy, a16z, sits down with <a href="https://www.rstreet.org/people/adam-thierer/">Adam Thierer</a>, resident senior fellow, technology and innovation, R Street Institute, and <a href="https://www.csis.org/people/blair-levin">Blair Levin</a>, policy analyst, New Street Research and non-resident senior associate, Center for Strategic and International Studies, to revisit their first-hand experience tackling a similarly significant moment in tech policy: a small number of incumbents with entrenched market power, a messy patchwork of federal and local rules, and misaligned governing authority. The result then was federal preemption coupled with a comprehensive national framework for telecommunications&#8212;all through a bipartisan deal.</p><p>A few themes from their discussion stand out:</p><blockquote><h3>Preemption was a tool for achieving a common goal</h3></blockquote><p>In the 90s, Congress wasn&#8217;t debating preemption in the abstract. They were trying to solve a concrete problem: how to open up competition in telecom markets that had been shaped for decades by monopoly and regulation.</p><p>As Blair explains, the &#8220;big bargain&#8221; of the Telecom Act allowed local phone companies (the Baby Bells) to enter new lines of business like long distance and video. In exchange, they had to open their last-mile networks to new entrants on fair terms. Federal rules were important to prevent state regulators from simply protecting the incumbents they were closest to.</p><p>Adam points to Section 253, which explicitly limited state and local barriers to providing telecom services, as one of several provisions aimed at creating &#8220;a more unified, clear-cut national framework.&#8221; Preemption here was in service of a larger goal: making sure new entrants and disruptors could compete in order to provide more choice and better experiences for customers.</p><blockquote><h3>Comprehensive doesn&#8217;t have to mean clean</h3></blockquote><p>Judging by their war stories, no one at the table has any romantic illusions about the Telecom Act. 
Blair quotes Justice Scalia&#8217;s comment that the law was &#8220;a model of ambiguity,&#8221; and notes the 110 rulemakings that followed as the FCC and courts wrestled with text shaped by legislative horse-trading.</p><p>The act mixed strong assertions of federal authority with carve-outs preserving state and local powers&#8212;sometimes in the same section. Adam recalls working on language to open wireless and satellite markets, only to see parallel clauses preserving local control over tower siting and other details. That tension fueled years of litigation and FCC proceedings.</p><p>Yet both Adam and Blair argue that, on balance, the national framework worked, not because it resolved every issue cleanly, but because it created the conditions for new markets to form. </p><blockquote><h3>Leadership and vision matter</h3></blockquote><p>If there&#8217;s a through-line from the Telecom Act to today&#8217;s AI moment, it&#8217;s that major technology frameworks don&#8217;t just materialize; they happen when our country&#8217;s leaders come together around a common goal and vision for our nation, and are willing to build it. </p><p>In the 1990s, that leadership took multiple forms: a bipartisan belief that innovation was good for Americans; a willingness to negotiate across industry lines; and a White House that articulated a clear, pro-growth, pro-innovation framework for US technology leadership. Rather than apply the same analog rulebook that governed legacy networks, the tech policy of that era helped to define the open environment from which internet companies grew.</p><p>Today&#8217;s AI debate sits in a similar moment of possibility. As Adam describes it, this is a &#8220;constitutional moment for technology,&#8221; where the U.S. must choose whether AI will develop under a unified national framework or splinter and stall through 50 competing regulatory regimes. Blair underscores the stakes from a different angle: public skepticism is rising, institutional capacity is thin, and Congress has not yet stepped into the role only it can play.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;90176534-5824-4963-bf7f-21fd81dd9c91&quot;,&quot;duration&quot;:null}"></div><p></p><p><em>This transcript has been edited lightly for readability.</em></p><p>Blair Levin (00:00)</p><p>So the big idea was competition, but it required a certain kind of regulation to make sure that competition was fair.</p><p>Our biggest fear was the Bells would just dominate everything. And that would not lead to the competition we wanted. And so we kind of bent over backwards to interpret the law, and we were usually upheld, to enable the disruptors, the new entrants, to be able to make a play for local voice.</p><p>Adam Thierer (00:27)</p><p>When we passed the Telecom Act, it was a new day and we basically started shifting the focus more to this being a national concern and a national marketplace. And we got that right. And I talk a lot about innovation culture and about Congress and the White House sending a clear signal on certain key technologies and sectors for the American economy and for American strength. And we did that finally. We said, no, this is a national matter. We need that for AI too.</p><p>This is what I&#8217;ll call a constitutional moment for technology. 
Like what will our new innovation policy be for the next 30 years?</p><p>Matt Perault (01:07)</p><p>We are in the middle of a discussion of preemption, and like many discussions in policy and politics, it feels as if this is the first time that Congress, the federal government, has thought about what the relationship between the federal government and the states should be on an important topic in tech policy. Blair and Adam, you lived through a really important debate where Congress and the states were thinking about how to balance affirmative governance while also outlining clear guardrails between what the federal government should be doing and what state governments should be doing. So you are the perfect two people to be joining us today to talk about the history of preemption and important tech and telecom policy debates. Thanks so much for joining us.</p><p>Blair Levin (01:46)</p><p>Thank you.</p><p>Adam Thierer (01:47)</p><p>Thanks for having us.</p><p>Matt Perault (01:48)</p><p>Blair, maybe you could start if that&#8217;s okay. Could you tell us a little bit about the mid-90s, as the Telecom Act was coming together: what was the project? What were Congress and the administration trying to achieve? And then how did preemption figure into that conversation?</p><p>Blair Levin (02:05)</p><p>So it really started quite a bit before the bill passed in 96. As soon as the breakup of AT&amp;T ended, that is to say the judge says, okay, here&#8217;s the consent decree. You have your local carriers, what were known as the Baby Bells. You have the long distance companies. The Baby Bells, who were prohibited from entering certain lines of business, were pressuring Congress to say, let us do various things. And the momentum for that grew. There were hearings in the 80s. There were lots of hearings in the early 90s. The bill almost passed in 94 and then went to 96. And the big idea behind it was the local phone companies could enter long distance, they could enter video. We would no longer have siloed, protected businesses. But in exchange for that, the local companies had to provide, on some basis, and that was a big debatable point, access to the last mile network to long distance carriers and to others. And so that was the big bargain. And the fear was that the states would be very friendly to the incumbent local exchange carriers, the ILECs, the Baby Bells, because they had a lot of employment, they had a lot of political power with the state utilities.</p><p>And so the idea was you give the federal government, particularly the FCC, the authority to set the terms and conditions by which the new entrants to voice, particularly local voice, could have access to certain networks and facilities, in exchange for the local guys then being able to enter any other kind of market. So the big idea was competition, but it required a certain kind of regulation to make sure that competition was fair. At the heart of it was the so-called 14-point checklist, which said basically the local guys had to complete 14 things, all of which were controversial, before they could enter long distance. And that was at the core of it.</p><p>There were a lot of other things in the Telecom Act, broadcast and cable and a bunch of other stuff. Adam, did I get that right?</p><p>Adam Thierer (04:26)</p><p>Yeah, no, that&#8217;s all correct. 
You know, the amazing thing about this history, first of all, let&#8217;s remember what the history is, because a lot of people today haven&#8217;t lived it. They didn&#8217;t grow up in the age of rotary dial telephones and simple broadcasts from radio and television operators that you could count on one hand in your community. You know, the history of American telecommunications and media policy has not been a pretty one.</p><p>For most of the past century, a convoluted thicket of federal, state, and local rules and policies controlled information and communications technologies through a variety of clunky mechanisms meant to ensure the public interest was served. All too often, unfortunately, it was not served very well. We had essentially regulated geographic monopolies in telephony, cable, and broadcasting, not due to natural technology or market circumstances.</p><p>But mostly due to policy, in my opinion. We had operational licensing and permits. We had line of business restrictions that created direct barriers to entry and exit from markets that limited innovation. We had price controls, rate of return regulation, subsidies that distorted markets in strange, bizarre ways. We had technical device and equipment regulations and quality of service requirements.</p><p>And this wasn&#8217;t just the Federal Communications Commission doing all this enforcing. A lot of it was done, as Blair noted, by public utility commissions at the state level. And then there was still more meddling even at the municipal level in some cases. And on top of all of this economic and technocratic regulation, there was a whole bunch of speech regulation too. We can&#8217;t forget about that.</p><p>But as Blair noted, the thing that really set the wheels in motion for federal reform was the fact that following the 1982 consent decree that the Department of Justice struck with AT&amp;T, the so-called modification of final judgment, a lot of authority passed to the courts and really to one man, a man by the name of Judge Harold Greene, to basically be the enforcer of this antitrust consent decree. And it had sort of removed a lot of authority from Congress and the FCC and even the PUCs.</p><p>Well, nothing wakes up members of Congress like a good old-fashioned turf war, and a question like: who runs the show? Who&#8217;s in control here? And so, Blair, you can correct me on this, I believe it was as early as 1988 that we saw the first bill dropped. And I believe it was by Senator Bob Dole, and it basically would have done something as simple as saying: the power that we ceded to the DOJ, the MFJ, and Judge Greene, we&#8217;re going to pull back into Congress.</p><p>And so it started with the simple idea that Congress should have more say about the state of competition policy in the telecommunications market. That led to numerous debates over numerous sessions of Congress where that bill, or that simple idea, started growing and growing. And it went from just a few pages to dozens to, by the time of the Telecom Act, hundreds of pages and a lot more technical, technocratic &#8220;deregulation,&#8221; as I&#8217;m going to put in air quotes in our conversation, because a lot of people think the Telecom Act massively deregulated, which is just not true.</p><p>Blair Levin (07:39)</p><p>We had 110 rulemakings that we had to do. That&#8217;s not exactly deregulation. 
And I would just note that, yeah, it is kind of an ugly path, but as Shakespeare, I believe, wrote, the path of true love never did run smooth. It&#8217;s always messy.</p><p>Matt Perault (07:41)</p><p>Yeah, well, both of you, I think, have a path of true love with this set of issues. So I&#8217;m curious, what was the vision that the federal government, the Congress, had for what would be federal and then what was left to states? And how would you characterize that balance? Adam, what&#8217;s your top-line take on what Congress was doing and what was left to the states?</p><p>Adam Thierer (08:16)</p><p>Yeah, well, Congress came right out in the Telecom Act and said, and all the floor talk about this, even from people who were on opposite sides of it, pointed out the need for some sort of national framework, a more unified approach to ensuring that telecommunications and media markets became more competitive. There was a legitimate desire by all to ensure that result, but it was just a question of the means to getting there. But they made it very clear that, and in fact, maybe I&#8217;m jumping ahead in the conversation, but what inevitably went into the Telecom Act was language in Section 253 that said no state or local statute or regulation may prohibit or have the effect of prohibiting the ability of any entity to provide any interstate or intrastate telecommunications service.</p><p>And there were several other provisions in there trying to figure out how to create a more unified, clear-cut national framework, or at least clean up some of those old messes of the past and limit some of those state powers, although not completely, we can get into the details here, because there was a balance. And Congress, in one breath with Section 253, said that, but then in the very next clause said something about basically other exemptions to it. And a variety of other provisions came into the bill over time that I&#8217;ll hand it to Blair to go into more detail.</p><p>Blair Levin (09:32)</p><p>I think the details, there&#8217;s a million details. And of course, there was a big Supreme Court case on this in which Justice Scalia wisely wrote, the act is a model of ambiguity, which was confirmed by a senator who said to Reed Hundt, the chair of the FCC, Reed, here&#8217;s what we did. We put in everything that one side wanted, and then we put in everything the other side wanted.</p><p>Like I said, there was this 14-point checklist and there were eight terms and conditions. There were a thousand different things. And a lot of the debate was, what was it that the FCC could preempt? We were basically on the side of preempting states a lot because what we saw was, look, whenever you do these things, you&#8217;re going to make a mistake one way or the other. And you can argue we made certain mistakes.</p><p>Our biggest fear was that after all of this work, and after so many thousands of hours in Congress, the Bells would just dominate everything. And that would not lead to the competition we wanted. And so we kind of bent over backwards to interpret the law&#8230;we were usually upheld&#8230; to enable the disruptors, the new entrants, which included the long distance industry, to be able to make a play for local voice. I think as history went on, the markets changed and it didn&#8217;t really matter that much in some ways. We could chat about that. 
But our basic point was, where there was ambiguity, to interpret it to enable that new investment, that new innovation.</p><p>Matt Perault (11:09)</p><p>Blair, in your work for New Street Research, which is a Wall Street research firm, you produce regular research for them. You&#8217;ve often gone back to the value of ambiguity in terms of policy achievements, like the political value of bringing people to the table. And Blair and Adam, you look at this from different political perspectives.</p><p>I think one of the miraculous things about the Telecom Act is just that it exists, that it was able to get through the political process. I think sometimes when we look at current issues in AI policy, it seems like that would be incredibly challenging.</p><p>So I&#8217;m curious about the political mechanics. Like how did this get done? What can we learn from that in thinking about AI policy? And then Blair, you&#8217;ve done so much thinking about specificity and ambiguity, like how does that play into the achievement of the Telecom Act?</p><p>Blair Levin (11:58)</p><p>It&#8217;s a great question. Look, there&#8217;s a lot of credit that goes to a lot of different people and I won&#8217;t name all of them, but really on both sides of the aisle. I will say Bob Dole, for example, was a real terrific leader in the Senate. Tom Bliley, the former mayor of Richmond, the Republican head of the committee, and Ed Markey, who we relied on a lot, Fritz Hollings. It was a very different Congress. They held a bazillion hearings on this stuff. They worked on all of these different elements. But I&#8217;ll tell you what I think is really different. I&#8217;ve had a lot of conversations with various Democrats about how did the Democrats lose the tech industry? I actually disagree with almost every bit of analysis I see. But I also don&#8217;t think it&#8217;s that relevant. Here&#8217;s the big question. How did the tech industry lose the public?</p><p>When we were doing this act, I think by an order of magnitude, 75% of the public was excited about the opportunity to use the internet better, about what technology could bring. You look at the polls now, it&#8217;s exactly the opposite. It&#8217;s like 25% think that AI is positive. We could go into why the public believes that. It&#8217;s not what I believe necessarily, but having said that, being pro-tech, being pro-innovation is a much harder thing to do politically. Now I actually think the tech industry in the last year has undercut their own case in a variety of ways; we could go into that. But I think that&#8217;s part of the problem: the tech industry is not trying to persuade people in the way that the long distance guys, the local guys, the cable guys, everybody in Washington in that run-up to the act did. They actually tried to do a lot of persuasion, to meet various folks.</p><p>By the way, you know, as Adam will recall, Reed and I met with Adam, who was very much on the other side. I used to go to meetings with the Progress and Freedom Foundation. I&#8217;ve never talked about this publicly. They had a big influence on us. We didn&#8217;t usually adopt what they said, but they kept us from doing things that were a lot stupider than we were thinking about doing. And I just don&#8217;t think that process exists today in the same way. There&#8217;s not the same good faith.</p><p>Adam Thierer (14:08)</p><p>You know, one of the interesting things about the Telecom Act era, as Blair alluded to, I mean, the politics were quite different. 
There were very strange bedfellow alliances that went on behind the scenes. But a lot of it was defined by where you sat as a congressman, in terms of who you were aligned with in industry. It was like, are you a long distance man? Are you a Bell man? Are you a wireless guy?</p><p>And it was very strange because you had Republicans who were going at each other&#8217;s throats, because one would be more affiliated with long distance thinking versus local RBOC (regional Bell operating company) thinking. So that was different. There were other things that were going on behind the scenes. There were a ton of different special interests involved in shaping the Telecom Act and getting very strange things inserted line by line that are relevant to the preemption stuff.</p><p>Adam Thierer (14:57)</p><p>Because as I said, with each line that says we need to have a national framework and ensure greater competition, we&#8217;d have another line that undercut it by saying, but we want to make sure that state and localities can do all of these other things. This was true for things like wireless markets, where we were trying to open those up at that time. People forget we didn&#8217;t have nationwide spectrum and wireless markets, but we were starting to, for wireless and for direct broadcast satellite. And we tried in the Telecom Act to negotiate some language about that.</p><p>I remember working on it very directly with Senator Pressler&#8217;s office and Senator McCain and others to try to get more amendments on that. Each time we did, there was an effort by the other side to come in and say, no, we need to have this preservation of state authority. It ended up being, I&#8217;ve got it here, Section 704 of the Telecom Act. Basically, after saying we&#8217;re going to open up the market, it said nothing shall limit the authority of state or local government regarding the placement, construction, or modification of personal wireless facilities.</p><p>And so you had this immediate tension, right? We&#8217;re going to have more competition. We&#8217;re also going to have more local regulation and NIMBYism. And this pervaded a lot of what was in the Telecom Act. And we lived with that for many years after, going back and forth. There were numerous FCC proceedings about this and cases about small cell orders and 5G siting issues, muni broadband.</p><p>And, you know, back in the day, I was much more in favor of striking those compromises and had a lot more sympathy for a lot of those state and local governments than I do today, because I now see the mess that it created. It really did undermine a lot of important innovation and competition, and led to only certain larger players being able to benefit from that market.</p><p>And the areas we did not touch in the Telecom Act were the oldest and the newest of the information and communications technology world. We didn&#8217;t touch publishing newspapers and books. We never really did with federal regulation. And we didn&#8217;t touch computing for the most part. We left those alone. The oldest and the newest were essentially born free. But everybody else was born into a different type of regulatory arrangement. And we tried to optimize for the suboptimal by saying, well, let&#8217;s balance here. Let&#8217;s try it with a Goldilocks formula. 
But where we left things alone, I would argue, we actually got the best results.</p><p>And I would include in that, to some extent, the wireless market, where at the time Blair and the FCC did wonderful work in helping open that market. Originally, we had the beginnings of regionalized monopolies there. We were going to give local carriers half a monopoly on the market, and then one other carrier the rest. And then we opened it wide up and started auctioning spectrum. So where we allowed markets to work and technology to work its magic, things worked out best in my opinion. And where we didn&#8217;t, and micromanaged it, we&#8217;re still today fighting over those issues.</p><p>Blair Levin (17:39)</p><p>Yeah. Can I just say, I agree with all of that, but I have to add one important point because there were two things that we did that history has completely ignored. And I would argue that they may have been the most important things. And they both go to the same issue of what can the incumbent local phone company charge, what&#8217;s called terminating access charges. So whenever you connected to that local carrier, they would charge you. And this was part of long distance; that&#8217;s why long distance cost what it did, et cetera. As to wireless, obviously in the early 90s, when very few people had wireless phones, most calls had to be connected to the wired network. We basically said, you can&#8217;t really charge the wireless guys. It was only after we did that that AT&amp;T started what was called the One Rate and it started to be a mass consumer product. We did the same thing with the internet.</p><p>We didn&#8217;t regulate it, but we regulated on behalf of it. A very big issue was, could the local exchange carrier charge, on a per-minute basis, whatever they wanted? Dial-up was the only access to the internet in those days. And we said, nah, you can&#8217;t do that. But the Bells pushed various members of Congress to require us to do that.</p><p>We might&#8217;ve had something to do with getting Steve Case, who ran AOL, to start a letter writing campaign. I think there were 400,000 emails, which seemed like a lot to Congress, and gave us the kind of political window to be able to say to the Bells, no, you can&#8217;t charge terminating access. And by virtue of that, internet service providers charged a set rate. You could use it as much as you wanted. I think those two things were really critical to the growth of the internet and to wireless. But it required essentially saying, we&#8217;ve got this existing monopoly that we have to make sure doesn&#8217;t carry over to the next thing.</p><p>Matt Perault (19:41)</p><p>I think anything that&#8217;s trying to govern this comprehensively would have a governance component, and then something that outlines the scope of what the federal government should do, and then some carve outs to that for what is going to be left to the states. And there&#8217;s this balancing between those three things: how much are we governing or choosing not to govern, choosing to just leave decisions to the market or leave governance to the market.</p><p>What is the scope of what we&#8217;re trying to do at the federal level and how large or small should that be? And then what is the nature of the carve outs for states where we would say states can continue to act? 
It seems like in each of those, there was a set of compromises.</p><p>Adam, the way I hear you saying it is that the Goldilocks approach was a bug, not a feature. The compromises to please everyone got us further from something that was optimal. But it does seem, at least from the outside and from my perspective, I didn&#8217;t live this, that you get concepts that might not have been thought of as bipartisan to eventually become bipartisan, and get us to a point when there&#8217;s actually some form of governance. I&#8217;m curious how you see it. For each of those three buckets, how do you get enough Democrats and enough Republicans on board to say, we&#8217;re okay with this moving forward?</p><p>Adam Thierer (21:01)</p><p>Are we talking about the Telecom Act or AI policy? Those are two very different things.</p><p>Matt Perault (21:04)</p><p>I guess both. I think the focus for us is, what can we learn from the Telecom Act for AI policy?</p><p>Adam Thierer (21:13)</p><p>Yeah, I think it has some application. And again, I&#8217;ll go back to what I already said: where we were clearest and where we had, I think, the most sweep in terms of offering a more serious national framework, we had the most effectiveness. And where we had the most sort of technocratic micromanagement, specifically with local telephony, we spent years wrangling in endless filings to the FCC and in the courts about how to allow for anything to happen at both the federal and the state level.</p><p>But with wireless, for example, and I&#8217;ll just give the particular example of direct broadcast satellite, which is a technology that&#8217;s almost come and gone in this world in some ways, but we still have DBS technology out there. The reality is that that was a market that was just developing then. And we had to make sure that we essentially protected it as it came into being from a lot of state and local meddling and taxation. And so there were provisions that tried to deal with that new, exciting form of competition to cable television and basically allowed that to develop. And it was really crude. I remember sitting around in debates in the Senate trying to figure out, how do we do this, but also give the states some balance. And there was a lot of discussion about how to essentially just have almost like a pizza-dish-size rule that says if it&#8217;s as big as a pizza or a bit smaller, it&#8217;s gonna be national. If it&#8217;s bigger than that, let the states figure it out.</p><p>You know, these things are messy. They&#8217;re not easily done. But I think that where we did have the clearest rules and the more national framework and focus, especially with the internet, things turned out best in my opinion.</p><p>Blair Levin (22:49)</p><p>Yeah, it&#8217;s a great example, but if I could illustrate another aspect of it, DBS really started to grow once the program access rules of the Cable Act were passed. And what those rules said was that the cable channels, which were largely owned in part by the cable operators, had to sell their programming to competitors. John Malone of TCI owned BET and CNN and Turner, and he had prohibited them from selling to DBS.</p><p>Once you had the program access rule saying you have to sell on equal terms and conditions, DBS rose. That was great in terms of multi-channel video competition. 
But the big unknown story about that was it caused Malone and the cable industry to go, holy shit, these guys have a better cost structure and a better regulatory structure; 10 years from now, they&#8217;re going to totally beat us, because now we don&#8217;t have an advantage. And therefore we have to figure out another business. And that&#8217;s what got them into broadband. And that&#8217;s what got us to the two-wire competition. Look, the telephone guys had developed DSL technology in the 80s, but they didn&#8217;t want to use it for the internet because it was so profitable to sell dial up lines and sell two lines and all that.</p><p>And it was not the intent of the program access rules to create a new broadband competitor, but it happened. And I think it was one of the great accidents of policy. But the other more direct answer to your question, Matt, is, I hate to sound like an old guy, though I am an old guy, but there was just an element of leadership. Like we&#8217;re here to do something.</p><p>And so you had Clinton and Gingrich and Gore and the other people that I&#8217;ve mentioned saying, we have to fix this problem, but it&#8217;s going to be great for America. Right now, have there been any hearings? Have there been any negotiations? It&#8217;s bipartisan, but really kind of going the other way. There just isn&#8217;t the kind of leadership that thinks it has to persuade people, that thinks it has to compromise. If you don&#8217;t have that, sure, you can do an executive order. The executive order, I&#8217;m not sure it has any legal standing. But there was a style of leadership that understood this. I mean, it was really interesting watching Clinton and Gingrich communicate with each other. They&#8217;re both kind of Southern good old boys and they knew how to talk to each other, but they wanted to do things as opposed to simply command things. And I just don&#8217;t see that happening on the Hill, though. Adam, you&#8217;re on the Hill a lot more than I am.</p><p>Matt Perault (25:35)</p><p>I think it&#8217;s striking that the arc you&#8217;re describing is actually often what I hear from Adam in the stuff he writes and what he says: there&#8217;s a problem. We see X number of state bills. We need to solve it through leadership and a vision for what technology governance should be for America.</p><p>And that will benefit Americans in all these different ways, economic and social value. So Adam, is that right? And if that&#8217;s what you&#8217;re trying to say, and let&#8217;s say Blair is also correct that we&#8217;re not seeing that reflected, at least in how the policy process has been able to move forward, what do you think the gap is?</p><p>Adam Thierer (26:09)</p><p>Yeah, okay. So first of all, let&#8217;s step back just a little bit and go back to the Telecom Act and what it did and did not do with regards to the internet and the world that we are living in today, because this is an important story. And this is a story that&#8217;s being changed as we speak. But in the 1990s, the internet was still very much a nascent technology, and what we used to clunkily call the electronic superhighway and electronic commerce was slowly building.</p><p>The Telecom Act largely forgot about the internet. To the extent it talked about it, it was to regulate decency online, right? We had the Communications Decency Act, which was later struck down in the courts. But Blair used the term accidents of history. 
Well, we had one of the most momentous accidents of history in this country when the Communications Decency Act was struck down, but not included in that strike-down was Section 230, right?</p><p>And that really became the basis for the explosion of electronic commerce that followed. Likewise, President Clinton had a vision that was articulated in the Framework for Global Electronic Commerce about the internet being a growth engine. And he predicted that we would see a massive explosion in commerce because we were going to treat the internet differently and not try to pigeonhole it into yesterday&#8217;s analog-age rule book.</p><p>And so this really became a crucial part of the story in terms of how the internet unfolded during this period. And really important to this story, especially for the preemption question, is the fact that the states were very much in the backseat on this. In fact, they weren&#8217;t even in the car. When it comes to the regulation of the internet, we did not have 50 different state internet bureaus, like we had 50 public utility commissions. We did not have comprehensive regulation of computing at the state level.</p><p>And so that became the new baseline, that and the fact that the president had established this new innovation culture with the Framework for Global Electronic Commerce, which, as its principles said, basically treated the internet as a global resource and platform. And it said policy should be facilitated on a global or national basis, avoiding convoluted borders and barriers wherever possible, learning from the past. That was then; this is now.</p><p>And to get back to your question, Matt, now is a world where the states have not only caught up, they&#8217;re in the driver&#8217;s seat. They&#8217;re completely controlling technology policy in the world of advanced technology, the internet, and now AI, with 1,000-plus bills moving across America and all sorts of different flavors of regulation being proposed. This is what I&#8217;ll call, sort of from a policy perspective, a constitutional moment for technology.</p><p>Like what will our new innovation policy be for the next 30 years? Congress and the Clinton administration gave us one between 96 and 98 that I think worked marvelously. But we&#8217;re in the process of now upending it and going a totally different route. And that&#8217;s why we have the discussion about preemption of AI at the federal level happening right now.</p><p>Blair Levin (29:11)</p><p>Yeah, look, I largely agree. But I guess we should state the most obvious point. There was preemption in 96 because Congress wrote it. And they wrote the law and said, here&#8217;s the way we&#8217;re going to do this. Now what Congress seems to be saying is, we&#8217;re not going to say anything, but we&#8217;re not going to let you say anything either. And I don&#8217;t think that&#8217;s really the thing.</p><p>That reminds me of that old scene from Duck Soup with Groucho Marx. He&#8217;s running the parliament meeting and some guy raises something. He says, you can&#8217;t bring that up, we&#8217;re talking about old business and that&#8217;s new business. The guy says, okay, all right. Any other old business? None? Okay. Any new business? The guy raises his hand again and Groucho Marx says, no, no, you raised that before. 
It&#8217;s now old business.</p><p>You know, or by the way, it&#8217;s also like what the FCC did under Ajit Pai to say we have no authority to regulate broadband, except the authority to tell states that they can&#8217;t. This was all in the context of net neutrality. And the court said, no, you can&#8217;t do that.</p><p>Adam Thierer (30:13)</p><p>This is where there&#8217;s a serious policy disagreement between Blair and myself, because I think Congress can absolutely do that.</p><p>Blair Levin (30:19)</p><p>Well, Congress can, but the FCC can&#8217;t.</p><p>Adam Thierer (30:20)</p><p>But Congress can declare its intent to essentially keep the field clear and essentially preemptively deregulate and create a policy firewall, you know, between old rules and regulations and new ones. And, you know, I agree with Blair, Congress needs to act on AI policy and they can learn some things from the Telecom Act era. But I will also agree with Blair that the politicians have grown more hardened. The issues are more R-versus-D than they ever have been in my lifetime in terms of how tech policy plays out. I don&#8217;t think there&#8217;s a single Democrat in Congress today that would probably vote for any sort of preemption on AI policy, because they like what&#8217;s happening at the state level.</p><p>Blair Levin (31:00)</p><p>You know, one thing you learn at the FCC is it&#8217;s all about the kids, because the most powerful argument is always that if the kids have access to this, it&#8217;s bad for them. But there&#8217;s no doubt that there&#8217;s a lot of stuff that can happen on AI that is not good for kids.</p><p>Earlier, I mentioned that one of the major differences between now and when I was at the FCC in 96, but also when I did the national broadband plan in 2009 and 10, is that there&#8217;s a real negative view about technology that I think started in serious form in about 2012. And Jonathan Haidt, the professor at NYU, has done a lot of work on this. And you see this with states on a very bipartisan basis saying, we need to get cell phones out of schools. That kind of thing. Well, there is an understanding that there is a negative side to technology.</p><p>Of course, you always understand there are always trade-offs. Nothing is perfect. But I don&#8217;t think congressional leadership has risen to the occasion of trying to address this in any kind of way on a bipartisan basis. It&#8217;s a lot harder for the reasons Adam mentioned earlier. The partisanship is much tougher. But at the end of the day, there&#8217;s something weird going on in Washington where Congress is just, other than passing one piece of legislation, what are they doing about anything?</p><p>Adam Thierer (32:23)</p><p>Well, there&#8217;s a lot to unpack there. I&#8217;ll just say this: look, Congress is sort of fundamentally dysfunctional on multiple levels right now. There&#8217;s a lot more going on here than just technology policy dynamics. It&#8217;s just really, really hard to get a lot of things through Congress these days, and bickering and partisanship are part of it. But also these issues have become more complicated than ever and more numerous than ever.</p><p>One of the interesting things about the old days is I could sit around as an analyst at the Heritage Foundation, where I spent 10 years covering these issues in the 90s, and I could really cover almost everything myself. I really had a really good feel for what was going on in the world of telecom law and policy. 
I absolutely could not do that anymore. I absolutely have to get more siloed and focused. And members of Congress and their staff, their very young staff, are confronted with a knowledge deficit.</p><p>You have an unbelievable explosion of new technologies, sectors, opportunities, and potential threats. And they&#8217;re trying to figure out how to handle it all preemptively. Like, how do we get this all handled right now? And I disagree a little bit that Congress is not caring about this. I mean, there was a hearing in the Energy and Commerce Committee where 19 different child safety bills were being considered for AI policy and internet policy.</p><p>A lot of it is just how you get those things over the finish line and how you also overcome constitutional objections and other barriers to these things. But there&#8217;s a lot of interest in doing things. It&#8217;s just hard to get them done.</p><p>I think the other problem here is just that the legislative clock ticks seemingly faster and faster, which makes it so that you have to come up with quick fixes and gimmicks compared to previous sessions. I mean, we used to work on things over a couple of sessions of Congress and then they&#8217;d finally come to fruition.</p><p>Now it just seems like we work our way toward something, like on driverless cars or privacy or whatever, and then it all just peters out, implodes, and then the next issue hits you in the face. So the quick-ticking clock, and the fact that the issue set has exploded and become so much more complicated, weigh against Congress being able to be a major actor on technology policy as it was in the past, leaving it to the sporadic activities of a lot of different states and localities and international governments to sort of fill in gaps here and there, but not in a consistent, coherent fashion, which is the patchwork problem that we&#8217;re facing today.</p><p>Matt Perault (34:39)</p><p>So we&#8217;re talking a lot about the Telecom Act and all the things that worked well. Adam, I know that you pointed to some of the things that were more challenging about it. Can we go into a little more detail? What are the kinds of mistakes that were made, or the things that, if you could go back and do them again, you would do differently?</p><p>Blair Levin (34:57)</p><p>I&#8217;m going to be like Donald Trump and say that everything we did was perfect.</p><p>It&#8217;s a much longer conversation. I would say for the most part, we got a lot done. We got it mostly right. I would say that, ironically, on the things that really worked well: the core bargain was that long distance&#8211;local thing, but at the end of the day, what mattered was that the cable industry developed its own broadband. We essentially destroyed the voice market. What are you paying for long distance these days? Nothing.</p><p>Because the internet replaced everything. And by the way, Al Gore and Reed Hundt understood that in an analog world you could have these silos of voice, video, and data, but in a digital world everything just becomes data, which is what we&#8217;re doing right now. I mean, I went to the 1964 World&#8217;s Fair in New York, where there was a video phone that was like three bucks a minute.</p><p>And of course, no one adopted it because like three bucks was really a lot of money in those days. 
But we&#8217;re doing this essentially for free.</p><p>There are so many things that actually worked out well, but not necessarily because of the way we did them. It was things like the program access rules that forced cable in. It was getting rid of the terminating access charges for wireless so it could grow. It was spectrum policy. There were a lot of different things.</p><p>One of the advantages that we had was we had an agency, the FCC, that had expertise. And one of the most important things about expertise is to understand when you&#8217;re wrong and course correct. Because the truth is you&#8217;re never gonna get it 100% right. And there are gonna be these court proceedings and other things. So part of the problem we have now is, and by the way, we spent a lot of time in the background talking to members of Congress and saying, if you write it this way, we have this problem; if you write it that way, it&#8217;s a little bit better. Who&#8217;s the expert agency on AI? Where is there a locus of people who aren&#8217;t looking at this from a partisan basis, but just want to see America succeed with AI? It doesn&#8217;t really exist in the government. Maybe it&#8217;s a little bit at the FTC, but I don&#8217;t think so. And that&#8217;s why I think we have an institutional capacity problem.</p><p>Adam Thierer (37:16)</p><p>So let me briefly touch upon where I think the Telecom Act succeeded, because we had a hundred years&#8217; worth of really misguided policies, and specifically a lot of misguided geographical constraints and regional monopolies and everything.</p><p>When we passed the Telecom Act, it was a new day and we basically started shifting the focus more to this being a national concern and a national marketplace. And we got that right. And I talk a lot in my work and in my books about innovation culture and about Congress and the White House sending a clear signal on certain key technologies and sectors for the American economy and for American strength. And we did that finally. We said, no, this is a national matter. We need that for AI too. We need that kind of a vision.</p><p>And then secondly, I think I&#8217;ll just go back to the fact that there were important sectors that we essentially created a policy firewall for, not perfectly.</p><p>But we basically said that certain things in the world of wireless and the world of hardware and equipment, computation, computing, software, were not completely off limits, but we were going to take a lighter touch. It was a new day. We were going to allow those technologies to have a new chance and break away from the analog-era mindset and give them a different ability to prosper. I think that was crucial. It wasn&#8217;t perfectly articulated. There isn&#8217;t any one moment in the Telecom Act where we got this exactly right. And of course, there are other things we haven&#8217;t discussed, like the Internet Tax Freedom Act and some other things. There was a bill that was pending in 1997-98 called the Internet Freedom Act, the IFA. Rick White and, I forget who else, Billy Tauzin had a bill that basically said what I just said. And then that became what the White House put together in the Framework for Global Electronic Commerce. I wish Congress would have articulated all of that better.</p><p>So it sort of became this accidental, fortuitous development of having a national framework, but it clearly got started with the Telecom Act. Congress absolutely has to do that again. 
We can debate the details, but we just can&#8217;t take our hands off the wheel and say anything goes, a patchwork of a thousand-plus policies on every single issue under the computational algorithmic sun. That is not gonna work for the United States or its strength and its ability to prosper.</p><p>Matt Perault (39:40)</p><p>So maybe this can be the closing portion of the conversation. Adam, given what you just said, what would be the pillars of what you have in mind? So if we were thinking of a Telecom Act essentially for AI, what would the elements of it be?</p><p>Adam Thierer (39:55)</p><p>Well, a clear assertion of national authority over certain matters that are clearly interstate algorithmic commerce. We can debate what that phrase means, and I know it&#8217;s controversial, but the reality is Congress has to take more authority over this marketplace. Second of all, I think we need an ongoing standing process in our government to actually try to deal with federal-state relations. This is a hard issue. Blair points out there&#8217;s no institutional capacity.</p><p>I would say actually we do have a lot of agencies. I&#8217;ve advocated some of them go away, but they never do. And the reality is, not only do we have those, but we&#8217;ve added to them. Joe Biden added an AI Safety Institute, which is now the Center for AI Standards and Innovation.</p><p>Let&#8217;s have a standing body that tries to help figure out standards for federal-state matters. If we&#8217;re going to allow the states to be a player on things like AI chatbot rules and algorithmic discrimination and all these other things that I would personally prefer to see comprehensively preempted, we at least need to have consistency in that process. We have to have some sort of a federal overlay that says, here&#8217;s how we make it simpler so that Little Tech and other types of players can thrive and consumers in the nation can benefit. And so I think that&#8217;s really, really crucial.</p><p>A third component we haven&#8217;t discussed here is that the federal government has to play a role because there are also speech-related matters in play here. This is about making sure that this is a technology of freedom and a technology that can benefit the public by having a lot of different voices heard.</p><p>I think that&#8217;s something that&#8217;s often overlooked. We kind of got that wrong in the Telecom Act, but look, the court smacked it down. But I would like to see that made a priority as well.</p><p>Matt Perault (41:26)</p><p>Blair, what about from your perspective?</p><p>Blair Levin (41:27)</p><p>Well, from my perspective, Adam has done a lot more work on this recently than I have, but number one, I would say as a political matter, the first thing is to protect the children. I kind of make light of that in some way, but in some way I&#8217;m quite serious about that. Because of what we&#8217;ve seen, and again, Jonathan Haidt has documented this, there are people who argue with him and all that, but certainly if you poll parents and kids, there&#8217;s a lot of negative that&#8217;s coming out of the way the internet is being used. And so there&#8217;s a fear about that negative sensibility, the causes of depression and other things. We&#8217;ve got to do something about that. We&#8217;ve got to protect against it. I agree with that. And it&#8217;s better to do it on a national level than a state-by-state level. But when you look at what the state laws are doing, a lot of them go to that. 
So I think that&#8217;s actually the top priority.</p><p>Then I think there&#8217;s a certain kind of infrastructure need. Capital markets seem to be funding all of this; that doesn&#8217;t seem to be a problem. But there are various problems, particularly with the power, the grid, and all that for the data centers. And you see this in the race in New Jersey, where it&#8217;s really a concern to consumers, because they don&#8217;t see they&#8217;re getting the benefit, but they&#8217;re facing higher electrical prices and they think it&#8217;s because of AI. So you&#8217;ve got to deal with that. There are some issues related to that.</p><p>But the third thing, and this reflects that I was chief of staff when we were doing this, it&#8217;s about institutional capacity. I&#8217;ll make the following deal with Adam. Let&#8217;s get rid of the FCC, which I think has outlived its usefulness. That&#8217;s a whole other question. Move spectrum over to NTIA, but create a digital regulator. I know Adam would not want to give that power to a federal authority. But I think the way to do it is to have a federal authority that actually does have the power of regulation and uses it wisely; sometimes they&#8217;re going to and sometimes they&#8217;re not. I&#8217;d rather have that, because it&#8217;s very difficult for Congress to write something that&#8217;s going to last more than a year. And that&#8217;s why you have an FTC or an FCC or an SEC. So that would be mine, but I suspect that Adam would disagree with that.</p><p>Matt Perault (43:49)</p><p>Adam, what do you think?</p><p>Adam Thierer (43:49)</p><p>Yeah, I think we have a lot of institutional capacity already that we need to optimize. And if we&#8217;re going to get rid of a couple of agencies&#8230;</p><p>Blair Levin (43:56)</p><p>It&#8217;s spread out all over the place, and it also hasn&#8217;t been given a mandate from Congress, right? I mean, Biden created this thing.</p><p>Adam Thierer (44:04)</p><p>Yeah. And there are many bills in Congress that actually would cede new authority to the Department of Commerce and the Center for AI Standards and Innovation. And we could have that discussion. I&#8217;m open to that. I have written things for the R Street Institute that basically say, here&#8217;s how you can actually have some federal regulation of some of the concerns that Blair&#8217;s outlined and others we haven&#8217;t discussed in terms of AI safety, transparency, and other things like that.</p><p>We have a lot of regulatory capacity, 411 agencies at the federal level. But if you want to add one more, I guess go for it. You know, the reality is, I just think that bureaucracy is not always the solution and we already have a lot of capacity. So let&#8217;s optimize for it.</p><p>Blair Levin (44:33)</p><p>Well, I&#8217;ll add one and subtract one.</p><p>Bureaucracy isn&#8217;t the answer, but capacity is. And I just think that&#8217;s a big issue here. Again, I could be wrong about this. You&#8217;re involved more with this than I am. But I don&#8217;t see who is helping members of Congress understand what the real problems are.</p><p>There&#8217;s a conservative analyst, Yuval Levin, who unfortunately is not related to me, because I really like his work, who has talked about how Congress has become performative rather than formative. That is to say, because of the cameras on the hearings, everyone&#8217;s just, you know, performing for TikTok. 
And I testified a couple of times last year and it was so strikingly different than when I had testified before in terms of nobody was really asking a question to try to learn anything. They were there to make speeches and that&#8217;s just a big problem.</p><p>Matt Perault (45:34)</p><p>Blair, Adam, thanks so much for this great conversation.</p><p>Blair Levin (45:37)</p><p>Thank you and good luck.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.  Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately. </em></p>]]></content:encoded></item><item><title><![CDATA[What Counts as an AI Startup? ]]></title><description><![CDATA[Cost and compute thresholds fall short in defining startups; revenue may be a better guide for helping Little Tech compete.]]></description><link>https://a16zpolicy.substack.com/p/what-counts-as-an-ai-startup</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/what-counts-as-an-ai-startup</guid><dc:creator><![CDATA[a16z AI Policy Brief]]></dc:creator><pubDate>Wed, 03 Dec 2025 15:02:52 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179217068/2456a6528e4f7130d0a7f218e0bdd114.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>This summer, <a href="https://a16z.com/to-help-ai-startups-compete-use-revenue-based-thresholds/">we wrote</a> about how proposals targeting AI model development&#8212;or, how AI models are built&#8212;would impose steep compliance burdens on startups. As we outlined, imposing complex and costly compliance regimes on Little Tech can hinder US competitiveness, lead to a more concentrated AI market, and result in fewer choices for consumers. </p><p>Lawmakers are largely sympathetic to this concern. They understand the need for the United States to compete and win in AI and generally support small businesses and entrepreneurship. Yet, numerous state AI proposals&#8212;while intended to put safeguards in place for the biggest players&#8212;still risk sweeping in the startups at the forefront of AI innovation. The tools lawmakers reach for to carve out Little Tech, including compute and training-cost thresholds, aren&#8217;t built for the realities of how AI is made today. 
</p><p>In this conversation, <a href="https://x.com/appenz">Guido Appenzeller</a>, investing partner, and <a href="https://x.com/MattPerault">Matt Perault</a>, head of AI policy at a16z, discuss why thresholds based on either compute power or training costs fail to separate Little Tech from larger developers, and why revenue may be a more effective criterion for establishing what counts as an AI startup.</p><p>Key takeaways from their conversation:</p><blockquote><h3>Compute thresholds age quickly.</h3><p><em>&#8220;If you pass a law today with any number, it&#8217;s just a time bomb.&#8221;</em></p></blockquote><p>Processor speed has accelerated across the entire history of computing. The Intel 4004, the first commercial microprocessor, could process about 100,000 operations per second. Today, Nvidia&#8217;s H100, one of the most widely used chips in AI development, can do about four quadrillion operations per second. Just 10 years ago, the comparable figure was approximately 4 trillion operations per second. That&#8217;s roughly a 40-billion-fold leap since the 1970s, including a 1,000x jump in just the last decade (a short back-of-the-envelope sketch at the end of this section works through the arithmetic).</p><blockquote><h3>Training costs don&#8217;t track size</h3><p><em>&#8220;&#8230;you start with an open source model that gets trained for a certain number of operations. [Then] the question is for this derived model, how many training operations went into that? Is that just the operations that I invested? Is that the operations that the original trainer invested? Is it the sum of both operations?...there&#8217;s no clear lines you can draw here about what&#8217;s inside one company versus another company.&#8221;</em></p></blockquote><p>AI development changes rapidly, and today, it is increasingly modular and remixable. Many developers start with open-source models, fine-tune them with additional data sets, or combine components built elsewhere. That makes &#8220;total training cost&#8221; nearly impossible to measure, and a poor proxy both for who is big or small and for the risks these models pose. </p><blockquote><h3>Revenue reflects use and market size</h3><p><em>&#8220;Don&#8217;t regulate the scientists and researchers. Let them innovate...When there&#8217;s actually a product that hits the market and you can talk about use, that&#8217;s usually where regulation makes the most sense.&#8221;</em></p></blockquote><p>Unlike compute power or training cost, revenue measures when a company&#8217;s product actually hits the market and gains traction. It&#8217;s a marker of real-world use and impact, not technical aspiration or achievement. Once a company hits a certain level of success in the market, it likely has the resources to build a legal function or hire outside counsel to help navigate compliance. </p><p>There are some exceptions. Guido and Matt point out that we have seen a company reach $500 million in annual run rate after approximately 15 months and with a small number of employees. According to <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53">California&#8217;s new disclosure law</a>, this would put them in the same compliance category as the likes of Google, Meta, Microsoft, and OpenAI, which are orders of magnitude larger. This means that some rapidly growing startups could bear the same compliance burdens as the largest tech companies in the world, which will saddle them with costs that make it harder for them to compete. That&#8217;s an undesirable outcome if we want AI markets to be as innovative and competitive as possible.</p>
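<p>As a rough back-of-the-envelope sketch of the compute-growth figures above, using only the approximate numbers cited in the compute-thresholds takeaway (illustrative Python, not precise benchmarks):</p><pre><code># Rough check of the compute-growth figures cited above.
# Approximate numbers from the text, not precise benchmarks.
intel_4004_ops = 1e5    # ~100,000 ops/sec, early-1970s Intel 4004
chip_2015_ops = 4e12    # ~4 trillion ops/sec, leading chips ~10 years ago
h100_ops = 4e15         # ~4 quadrillion ops/sec, Nvidia H100 today

print(f"Since the 1970s: {h100_ops / intel_4004_ops:,.0f}x")  # ~40,000,000,000x
print(f"Last decade:     {h100_ops / chip_2015_ops:,.0f}x")   # ~1,000x
</code></pre><p>The point of the arithmetic: any fixed compute number written into a statute today sits on this curve and will describe commodity-scale hardware within a few years, which is the &#8220;time bomb&#8221; problem the quote above describes.</p>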
<blockquote><h3>Regulate harmful use, not size</h3><p><em>&#8220;If we&#8217;re blocking US companies from offering these models, I still can easily obtain them internationally. Bad guys don&#8217;t tend to follow rules.&#8221;</em></p></blockquote><p>Imposing safeguards on developing large AI models might not actually result in the intended outcome of keeping people safe. Regulating how models are built rather than how they are used could backfire, slowing down US developers while doing little to reduce risk. Powerful open-source models are already readily available abroad, including from China, making it easy for developers to train or fine-tune models outside US jurisdiction. Moreover, any kind of threshold is made to be gamed, with companies potentially finding novel workarounds like outsourcing their training costs to minimize exposure. </p><p>By contrast, a public policy approach that focuses on regulating the uses of AI holds all companies, regardless of size, accountable under the law and can be used to target and deter harmful conduct.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;151f2724-5d7f-422d-842c-7d01464d9f9e&quot;,&quot;duration&quot;:null}"></div><p><em>This transcript has been edited lightly for readability.</em> </p><p>Guido Appenzeller (00:00)</p><p>So if you pass a law today with any number, it&#8217;s just a time bomb.</p><p>Matt Perault (00:04)</p><p>Instead of focusing on thresholds that are oriented around inputs, like training cost and compute, we should focus instead on thresholds that are oriented around impacts and what your presence is like in the market.</p><p>Guido Appenzeller (00:13)</p><p>Don&#8217;t regulate the scientists and researchers, right? Let them innovate, let the innovation thrive. When there&#8217;s actually a product that hits the market and you can talk about use, that&#8217;s usually where regulation makes the most sense. </p><p>Matt Perault (00:30)</p><p>All right, so I&#8217;m looking forward to this chat, Guido. We wrote this piece back in late May about the kinds of thresholds that could be used to delineate between startups and larger companies. And maybe I can just go into a little bit of the origin story of this piece that we wrote. </p><p>So I think it came from a conversation that we were having where I was saying to you, we&#8217;re often sitting across the table from lawmakers, and they&#8217;ll tell us, the bill that we just introduced is not intended to cover small companies. And we will say, we&#8217;re really concerned about the impact on Little Tech. And then the policymaker will say, well, that&#8217;s not what our bill was intended to cover at all. Tell us how we can ensure that Little Tech companies are carved out of the legislation. </p><p>So then I went to you and I said, okay, so how can we ensure that Little Tech companies are actually carved out of the legislation?
And you had a sort of rubric for thinking about which kinds of thresholds policymakers might include that are more problematic, the ones that from our standpoint aren&#8217;t really going to successfully separate out smaller companies and large companies, versus thresholds that might be a little more productive. So what was your starting point? What&#8217;s your anchor for thinking about how to achieve that separation?</p><p>Guido Appenzeller (01:33)</p><p>Yeah, I think it&#8217;s a great topic. I think it&#8217;s very easy to accidentally craft legislation that is aimed at a large company, and set up so that a large company can deal with it and still build a good product, but that kills a startup, because they just don&#8217;t have the lawyers, don&#8217;t have the financial resources, don&#8217;t have the capabilities to respond to these things. </p><p>So I think it&#8217;s a super important topic. I&#8217;m on the investment side, so I invest in these small startups. Then, when we&#8217;re sitting in the board meetings, we see this day to day, how they&#8217;re struggling with some of this regulation. Europe has shown us how not to do it. And there are some good lessons to be learned there.</p><p>Matt Perault (02:08)</p><p>So can we just pause on that actually for a second before going into the specifics of thresholds? &#8217;Cause I feel like we often gloss over that point, or particularly in my work, I feel like I make the point and then gloss over it. But you&#8217;re literally sitting at the board table with lots of these companies. What is the experience that they&#8217;re having when they&#8217;re looking at these various regulatory proposals being introduced and passed, like disclosure mandates or impact assessment mandates or audit mandates or things that might increase their liability on day one, even before they&#8217;ve put a product out in the market? What is the sensibility right now at the companies that you&#8217;re working with?</p><p>Guido Appenzeller (02:42)</p><p>Yeah, first of all, I think it&#8217;s important to wrap your head around what&#8217;s currently happening. These startups, specifically in the AI space at the moment, they&#8217;re often very small. So you might have seven people. There&#8217;s the entire company sitting around a table. Most of them have PhDs in computer science. None of them has any background in politics or law or anything like that. They know how to train models. They train very good models. And they often take a good chunk of the money that we invest in them and invest it in training capacity. The amount of compute infrastructure they have is actually substantial, but they have very little understanding of how to deal with these regulations. So for them, for example, writing an impact report, that&#8217;s a major thing that would basically substantially slow them down.</p><p>That means they&#8217;re not set up to do this. They have to hire people for it, or potentially work with an external firm, right? So it adds a lot of complexity and really slows them down against other competitors. And then often the question is, what do you do? Can you stay out of certain markets if they&#8217;re regulated that way? Do you just not go into a market? Do you take the hit and slow down your development? These are tough conversations.</p><p>Matt Perault (03:47)</p><p>And so then a lawmaker would say, as some lawmakers have, like we&#8217;ve seen these proposals, don&#8217;t worry about it.
This just applies to frontier models. So this just applies to four or five, six companies that are building at the frontier.</p><p>Guido Appenzeller (03:57)</p><p>Yeah, I think what we&#8217;ve seen is that defining a frontier model in theory sounds easy. You have this billion dollar company that has a couple of hundred people, right? So clearly they train a frontier model. So this must be easy to describe. It&#8217;s surprisingly hard to draw this distinction. And I can try to sketch out a little bit why this is hard, if that&#8217;s helpful. </p><p>There&#8217;s two aspects to it. The first one is some people suggested things like, for example, can we say a total number of training operations that go into a model? Now, that sounds logical. If I want to train a larger model, I need a lot more training operations. So training operations is basically how many cycles your little chip has to spin, multiplied by the number of chips, and so on. So you say, OK, let&#8217;s take some really large number, like 10 to the 23 or something like that&#8230;a really large number. Anything above that counts as a frontier model. Anything below that, right? That&#8217;s more of a smaller company or hobbyist or so, right?</p><p>Matt Perault (04:55)</p><p>And to be clear, this isn&#8217;t hypothetical. What you just described is the threshold in the EU AI Act. The Biden administration executive order proposed a similar threshold. Like, this is an operative concept. </p><p>Guido Appenzeller (05:01)</p><p>That&#8217;s right, yes. They had a draft in California. At some point, they had the same threshold. And so you look at that number, you&#8217;re like, this looks really, really big. So now in practice, that totally doesn&#8217;t work. And there&#8217;s two reasons for that. And the first one is that if you look back at the history of technology, the speed of processors has continued to grow the entire time.</p><p>If I look at the very first CPU that came out, the Intel 4004, that thing could do about 100,000 operations per second. That&#8217;s a lot, 100,000 math operations per second. But the H100, which is probably the most widely used AI accelerator today, like a single chip, can do about four quadrillion operations per second. So how much is that more? Like a factor of tens of billions more, or something like that. So it&#8217;s a massive, massive difference in performance. [The] Intel chip was a long time ago. But at the moment, we&#8217;re actually growing faster than historically. </p><p>So if you look specifically at AI accelerators, 10 years ago, we were maybe at 4 trillion operations per second. Now we&#8217;re at 4 quadrillion operations per second. So over 10 years, we went up by a factor of 1,000 in speed. So if you pass a law today with any number, it&#8217;s just a time bomb. Eventually, we&#8217;ll hit a point where a single chip can probably do this. So basically, our hobbyist at home would be able to train a model of that size.</p><p>And that doesn&#8217;t make sense. If somebody can train a model on their laptop, you can&#8217;t have a proper regulation around that. That would just mean people either [ignore] the law, or it happens abroad. But it would not in practice change anything about how these models are trained.</p><p>Matt Perault (06:46)</p><p>And then one concept that we have repeatedly raised in our conversations with lawmakers, that I want to hear you talk about from the investing side, is our belief that startups should be able to compete at the frontier.
So even if you had lawmakers like repeatedly updating definitions, so it&#8217;s keeping pace with the technological advancement that you&#8217;re describing.</p><p>[I think] most people understand it will be impossible for lawmaking to keep pace; lawmaking is just a complicated process. It&#8217;s always going to lag behind technology. But just for the sake of argument, let&#8217;s assume that it&#8217;s keeping pace. Still, from our point of view, the goal is to have the frontier be a place where it&#8217;s not just a small handful of companies that are operating. And you&#8217;re living this in what you&#8217;re trying to do on the investment side of our firm. </p><p>So can you describe that a little bit? Why is it so important to us that the frontier is not left to just a small handful of companies?</p><p>Guido Appenzeller (07:34)</p><p>Yeah, totally. At the end of the day, innovation is driven by startups. If you look at today, what are the top most valuable US companies, right? That&#8217;s companies like Google or Tesla or Meta or Microsoft or Amazon. 20 years ago, I think all but Microsoft would have qualified as a startup still, or were not founded yet. </p><p>So it&#8217;s small companies that become the industry leaders that drive the US economy and sort of create this leadership position in the world for the US. And if we want to hold on to that, we need to enable these companies to innovate. Otherwise, we&#8217;ll have a Chinese company instead taking this top spot.</p><p>So, for model companies today, one of the beauties of the efficient US capital markets is that these small companies can raise money very efficiently. So if you have a really good idea, you have the best possible team, you can grow very quickly and raise a lot of money, and then you can very quickly get a lot of compute to train these models. That doesn&#8217;t mean that you necessarily have a very large organization that can deal with a lot of complexity. But a small team, a couple of PhD students today, in some cases they raise north of $100 million. And for that, you can train a model that gets into the order of magnitude that some people would argue is frontier.</p><p>Matt Perault (08:55)</p><p>So I think the counter we would hear in conversations with policy folks is some version of the Spider-Man line, I think it&#8217;s the Spider-Man line, with great power comes great responsibility. So you&#8217;re using great power at the frontier. Therefore, why wouldn&#8217;t it make sense to treat that company as a larger entity, as a compute-based threshold might do, right? A compute-based threshold would basically be saying, if you&#8217;re using X amount of power, then for the purposes of this regulatory initiative we are treating you as a large frontier company subject to heightened regulatory restrictions. Why is that not a sensible approach?</p><p>Guido Appenzeller (09:30)</p><p>I mean, there&#8217;s two issues. The first one is, what does it mean to be treated as a frontier model? If this, at the end of the day, requires a large amount of complexity, a large amount of process, some people were suggesting things like, you have to hire an external auditing firm and work with them on an impact assessment and so on.</p><p>If I&#8217;m a couple of PhD students that are training a model, that slows me down massively. That&#8217;s just not something I&#8217;m set up for. If I&#8217;m Meta or Google, they can hire an army of lawyers and deal with this. And for them, it&#8217;s very easy.
But for a small company, you just don&#8217;t have the resources. You don&#8217;t have the depth to do that. </p><p>There&#8217;s actually a second problem which I think amplifies that, which is that many people think of training a model as a one-time thing. It&#8217;s like building a house, and then I&#8217;m done, and now I can value [the house] or look at how much construction material went in, and can determine the size. The problem is that&#8217;s not quite how it works in practice. Today, specifically, if I have open source models, and today many startups use open source models as part of their model training, you can basically take a trained model and from that derive a new model, or on top of that train a better model. Does that make sense? You take a model that was trained by somebody else, and then I invest more in it to make it even better.</p><p>Matt Perault (10:45)</p><p>So this is a critical concept that I think we should walk through extremely slowly. I think the model that people still have of model development is based on the sense of how OpenAI developed its models. A group of super smart people, super talented people take enormous troves of data, throw enormous compute resources on it, keep it all inside this one house, and out of that magic box, a model emerges.</p><p>One of the things that I learned in the course of the conversations that I had with you is that, already, in a very short period of time, that has become, I don&#8217;t know if it&#8217;s still the dominant model, but certainly not the sole model, and not the only model that regulators should take into account when they&#8217;re thinking about what regulation would look like. Can you say a little bit more about, like, that&#8217;s model A, what is model B?</p><p>Guido Appenzeller (11:32)</p><p>So what you described is true for the large frontier labs. But if you look at the average startup in our portfolio that develops an internal model, that&#8217;s not what they do. What they instead do is they take a model and build on top of that. And I think the best analogy to understand that is maybe if you look at open source software. Today, if somebody asked me, what was the budget that went into the open source software on my server?</p><p>My answer is, look, I have no idea; it&#8217;s impossible to calculate. And the reason is that, for example, the server runs Linux. So what&#8217;s the budget for Linux? Well, Linux was developed by thousands, tens of thousands of companies. I don&#8217;t even know. They all contributed bits and pieces. And it took various meandering paths. Things were contributed back. So it&#8217;s sort of this group effort that gets you to an end result, over decades in the case of Linux.</p><p>Now, [AI] models are still younger, so it&#8217;s usually a little bit easier to trace their history. But what happens very often is that you start with an open source model that gets trained for a certain number of operations. And then a startup takes that and further fine-tunes it. So fine-tuning is basically you take a model and you train it for a specific domain area. So if I say I want to create a model for lawyers, I might start with an open source model. And then I basically adapt it to be particularly good at understanding legal texts. </p><p>So basically, then the question is, well, this derived model, how many training operations went into that? Is that just the operations that I invested? Is that the operations that the original trainer invested?
Is it the sum of both operations? If I&#8217;m starting with a good open source model, which we want companies to do, then probably the combined thing is above many thresholds for frontier models. So again, we&#8217;re running into this thing. It&#8217;s like there&#8217;s no clear lines you can draw here about what&#8217;s inside one company versus another company, just like in the open source case.</p><p>Matt Perault (13:24)</p><p>So we&#8217;ve talked about compute power as one type of threshold. You&#8217;re also sort of alluding here to a second type of threshold that we&#8217;ve seen. There&#8217;s currently one significant bill pending in New York that uses a training cost threshold. So it&#8217;s sort of a complicated mixture of things, but basically there&#8217;s a per-model training cost threshold, which I think is $5 million. And then there&#8217;s an aggregate threshold. So models are captured if they&#8217;ve spent $100 million in aggregate training the model. </p><p>Can you talk a little bit about these two components? One is why a training cost threshold, in your view, also doesn&#8217;t separate out little and big. And then, when you hear a number like a hundred million dollars in aggregate training costs, I think most people assume those are [large companies] spending that kind of money, so I&#8217;m also interested in your assessment of that specific number.</p><p>Guido Appenzeller (14:16)</p><p>Yeah, $100 million is a lot of money, I think by any metric. That said, we have startups with fairly small teams that raise more. But I think the real problem is this thing that, let&#8217;s assume I want to, say, build a particularly good model to analyze legal data. I might start with an open source model. Does this open source model tell me how much it was trained for? No, it doesn&#8217;t, right? I can get the weights, but this may come from a different country or a different state. There&#8217;s no information about how much they trained for. How do I even estimate my aggregate training cost in this case? I have an open source component, and I know what my own training cost is, but I don&#8217;t know what the baseline is. And whoever trained this may also not know what the baseline is, because they may have used a base model themselves, they may have used training data collections that were generated by others, and today we&#8217;re seeing things such as fine-tuning and reinforcement learning, where basically I can use a model to generate training data and then use that training data to train a new model.</p><p>So how would you account for this? Would you only count what it takes to generate the training data or not, right? And again, the training data may be open source, so I may just grab it from somewhere. So I end up with this thing where, for me as a startup, it&#8217;s very hard to manage this risk. I have a lot of unknowns here. Somebody gives me this number, and I can&#8217;t calculate that number. So what do I do? Do I say, I&#8217;m not going to offer it in the state? But maybe the state will pursue me wherever I am. Do I say, I&#8217;m just not going to train a model at all? But then I have a competitive disadvantage.</p><p>Guido Appenzeller (15:52)</p><p>And what makes all of this crazy is we&#8217;re now seeing, in some areas, actually the best open source models coming from China. There, we see very rapid innovation.
In the United States, innovation in open source has slowed down in part because companies are starting to be worried about liability. If I&#8217;m releasing an open source model here, can somebody sue me for it? What are the possible outcomes? And it&#8217;s a very unhealthy dynamic, I think, for innovation.</p><p>Matt Perault (16:16)</p><p>One thing that we&#8217;ve seen in training cost thresholds is that some of them don&#8217;t specify the number of years that the training would have to occur over. So it&#8217;s one thing to say, if you spend a hundred million dollars next year, you&#8217;re liable. It&#8217;s a different thing to say, if you in aggregate spend a hundred million dollars training a model, you will be liable, because that means every startup will be liable at some point in its history; the question is, are they liable in a year, or five, or 10, or 20, or 50, or 100? I&#8217;m interested in your take on what that timeline looks like. At this point, when you&#8217;re advising companies, and let&#8217;s just assume they&#8217;re spending a lot of the cost themselves, they&#8217;re not able to utilize underlying models to address a lot of their training costs, when are they hitting $100 million?</p><p>Guido Appenzeller (16:47)</p><p>We certainly had startups that hit that in their first 12 months of existence.</p><p>Matt Perault (17:10)</p><p>And if you&#8217;re aiming to build a competitive model, you&#8217;re going to be hitting that in the course of a number of years.</p><p>Guido Appenzeller (17:14)</p><p>Yeah, it depends a bit what kind of model you want to build, right? For example, if you want to build a new competitive large language model, 100%, yes, you&#8217;ll be there. If you want to build, say, a music model, that will be the other end of the spectrum; it&#8217;s a lot more relaxed, right? There, $100 million in training cost is a lot.</p><p>Matt Perault (17:32)</p><p>But this is where I&#8217;m really interested in your assessment as an investor, because you want to be able to invest in a whole range of competitive markets, right? You want the frontier to be competitive. You want more niche applications to be competitive. You&#8217;re not looking for a single vertical for investment. You&#8217;re looking at as many as possible. And so the fact that in some verticals companies might not be likely to hit a hundred million dollars for several years, I would assume, is relatively immaterial to your overall investing [picture].</p><p>Guido Appenzeller (18:09)</p><p>[There&#8217;s also] a bad correlation here, where the most competitive, most interesting areas are the ones where you probably need the largest investment. There&#8217;s one other thing about these sorts of total amounts for the company, [which is there&#8217;s lots of questions] we haven&#8217;t seen getting resolved yet. And I&#8217;m not sure that lawmakers thought about, like, for example, what happens in the case of a merger or acquisition, right? I have two companies that both spent $55 million.</p><p>What was your threshold, a hundred million? So they both spent $55 million, which together puts them over. Can they now not merge until their models are certified, or how does it work?</p><p>Matt Perault (18:37)</p><p>Yeah, presumably once they&#8217;ve merged they&#8217;re subject to whatever the obligations are.</p><p>Guido Appenzeller (18:47)</p><p>Exactly, right.
So these things, so you&#8217;re just introducing a lot of complexity for these companies to manage, ultimately for, I think, in many cases, fairly unclear payback.</p><p>Matt Perault (18:59)</p><p>Let&#8217;s talk about that specifically, because again, if I can just try to channel a policymaker perspective, I think what they would say is, yes, we&#8217;re talking about obligations, but we&#8217;re talking about the obligations in the interest of safety. So yes, we may capture some small companies. Yes, we might make it somewhat harder for them to compete. But what we&#8217;re trying to do is come up with a proxy for what is a set of companies that have some heightened safety and security risk in the ecosystem generally, and then for companies that have that risk, they should bear more obligations. </p><p>What&#8217;s your sense in terms of how these thresholds correlate to safety in the area of model development?</p><p>Guido Appenzeller (19:34)</p><p>It&#8217;s a great question. [When] people talk about safety, there&#8217;s sort of, unfortunately, no clear threat model. Different people have very different opinions of what is dangerous. So let&#8217;s walk through a couple of ones I&#8217;ve heard. </p><p>So some people are like, well, AI will take over the world and kill us all. If you&#8217;re using these models today, this is actually kind of funny, right? I mean, you have to work incredibly hard to get them to run. It&#8217;s very difficult to build the infrastructure. It&#8217;s very difficult to get them to function in practice. They make mistakes all the time. They rat-hole all the time. You have to dig them back out. They&#8217;re very, very high maintenance, right? It&#8217;s this very complex thing that you constantly have to babysit and constantly have to tweak to just get something working, right? So we&#8217;re light years away from this, you know, a model being able to do something on its own, right? Currently they can maybe operate for a couple of minutes or a couple of hours on very simple tasks, but on any more complex tasks, they still fall flat on their face, right? So that&#8217;s [a threat] I don&#8217;t quite understand. It seems to be a little bit drummed up on that one. </p><p>[Some people] are worried about a model potentially helping people to do bad things, right? Can you use this to design a bio-weapon or something like that? And I think the important thing to understand is these models can&#8217;t really innovate. What a model does is, I think Andrej Karpathy recently called it, they&#8217;re more like ghosts. They&#8217;re echoes of the past. We&#8217;re basically taking lots of data from humans, and these models are very effective in playing us back that data. So if I have a question about something they saw in their training corpus, then they can answer that question based on the data that they&#8217;ve seen.</p><p>But this means that essentially there&#8217;s nothing new here. Any information they have comes from what these models train on, and all of them typically train on the internet. So if the model is able to give me a clear answer to something, it typically means that information is somewhere on the internet and I could find it with a Google search. So that&#8217;s also a threat model I have a hard time following.</p><p>Matt Perault (21:22)</p><p>I think you&#8217;re raising understandable skepticism about the threats, which is one way to think of this issue.
The second part that I&#8217;m curious about is this: let&#8217;s just take the alleged safety and security risks as a given. Let&#8217;s just assume that that&#8217;s the case. What do you think of compute and training cost thresholds as a way to essentially draw a circle around where we think the most acute safety and security risk will come from?</p><p>Guido Appenzeller (21:50)</p><p>I mean, it comes back to this question of whether a more capable model is necessarily more dangerous. I have a hard time making a correlation between large and unsafe, if that makes sense. </p><p>Matt Perault (22:02)</p><p>Where large means either compute power or the number of dollars that I put into the training process. </p><p>Guido Appenzeller (22:05)</p><p>Yeah, either or. The reason is that basically we know today, if I want to solve a specialized task, I can train a very small model on that specialized task. [If you want a] model that&#8217;s incredibly wide and knows everything about the world, and is very deep, then you sort of need a large model. [With] a smaller model, I can either go not particularly smart and wide, or I can go very narrow, but then I can go deeper, if that makes sense. So let&#8217;s say, I don&#8217;t know, I want to build a model that can help me, tell me something about rockets. I could basically take a fairly small model and just fine-tune it on that particular topic with that particular training data set. And then I could actually get something that&#8217;s fairly good at that. So I think size is not necessarily a good proxy for risk here. Again, it&#8217;s a little bit hard to argue about this without a clear threat model. It&#8217;s not clear that you could come up with a particular threat where the large models will be more dangerous than a small specialized model.</p><p>Matt Perault (23:03)</p><p>I also think the point that you made previously about remixing model development throws a lot of this into disarray, right? Because you can imagine a world where a model is not all that powerful or is not all that risky, even though the company has spent X number of dollars on it, because maybe that&#8217;s all spent in-house on a truly proprietary model. And in other circumstances, it sounds like the training costs may be relatively small because they&#8217;ve used a remix approach, building off a model that actually is already pretty powerful on its own, so the marginal compute and the marginal training costs are relatively small.</p><p>Guido Appenzeller (23:26)</p><p>That&#8217;s right. The other question is, look, I can get some very large, very sophisticated models from other countries at this point, right? There&#8217;s some Chinese open source models that are currently very widely used just because they&#8217;re easy to fine-tune, right? They&#8217;re readily available. We have some good models from Europe with Mistral. OK, so if we&#8217;re blocking US companies from offering these models, I still can easily obtain them internationally. Bad guys don&#8217;t tend to follow rules, so if something&#8217;s trivially available to me to use, then what&#8217;s really the benefit of making it difficult for somebody in the United States to develop that?
It&#8217;s very hard to really understand the threat model here that you&#8217;re trying to guard against.</p><p>Matt Perault (24:16)</p><p>And then the framing that you&#8217;re talking about now, I think, gets back to kind of what the core of our policy argument is here, which is focused on regulating harmful use. Don&#8217;t focus on regulating the underlying development. If you focus on development, you&#8217;ll do things like penalize US companies without necessarily actually even addressing potential harmful applications. Instead, put the emphasis on targeting that harmful use. So that actually tracks, I think.</p><p>Guido Appenzeller (24:26)</p><p>That makes a lot of sense to me. [We] have a rich history of regulating the use of technology. And I think everybody agrees that that&#8217;s an important thing to do. And I think models are not any different from any other technology. And in fact, in many cases, we already have regulation on the books that allows us to regulate these models.</p><p>If my image model generates child pornography, it&#8217;s very clear that there are laws that people will come after me with. So I can&#8217;t do this. And guess what? Pretty much every image generation site today has a filter after running their model to make sure that doesn&#8217;t happen. And there may be flaws, but I think everybody buys into the concept and everybody tries their best to make that happen. </p><p>I think the challenge is when you say, people who develop a core technology that could potentially be used to do something bad, we want to regulate them. In the past, we haven&#8217;t really done that. We&#8217;ve said, you can develop a good database, a good web server, and there are no restrictions on what you can do there. If you then serve particular websites, well, there are certain restrictions on what you can do. But that&#8217;s the use of that technology. To some degree, just following the pattern we&#8217;ve used in the past here, honestly, to me, is the most pragmatic and I think the most straightforward approach.</p><p>Matt Perault (25:47)</p><p>So this tracks with the distinction that we tried to make in this piece between thresholds that are a little bit better and thresholds that are a little bit worse. When you&#8217;re imposing a threshold to carve out certain companies, essentially what you&#8217;re saying is that the way you&#8217;re regulating [is not] targeting the entirety of the concept in the way that you might want to. So when we released this piece, I sent it around to people and I got some feedback on it. The most consistent piece of feedback I got, which I agreed with and think was a fair criticism, is: why would you look to exempt companies from bad public policy? Why not get good public policy passed in the first place? And I think we agree, we want to see policies focused on regulating harmful use, not regulating development, and for the most part, I think use-oriented public policy is not gonna require carve-outs for small companies. Small companies should be required to comply with the law, and they can be enforced against when they do things that use AI to violate consumer protection law or other existing laws. </p><p>Instead of focusing on thresholds that are oriented around inputs, like training cost and compute, we should focus instead on thresholds that are oriented around impacts and what your presence is like in the market. So what does it look like when your product is in the market?
And the thing that you and I started talking about as the more palatable, more desirable threshold, the one that in our view would separate little and big, is revenue. At some level of revenue, and we should talk a little bit about what the right level might be, a company has launched a product, it is succeeding in some form in the market, and it&#8217;s taking in money that it could divert to compliance. And so I think people would say, at some point, whatever that is, your revenue levels would be so significant that you should hire a general counsel and you should hire a head of policy, and you can hire a communications person and you can hire lobbying firms to support you and firms to conduct impact assessments on your behalf.</p><p>So Guido, from your perspective, can you give us a sense of how you see this evolution? Because you are dealing with companies that are maybe in some cases quite literally in the garage. They are at the very early stage. But there are also companies in our portfolio that at some point have crossed a revenue threshold where it&#8217;s appropriate for them to spend some money on compliance. And so what does that transition look like, and can you in your own words talk about why you think revenue might be a better way to set a threshold in AI regulation?</p><p>Guido Appenzeller (28:09)</p><p>Yeah, first of all, I completely agree with you, right? Don&#8217;t regulate the scientists and researchers, right? Let them innovate, let the innovation thrive. When there&#8217;s actually a product that hits the market and you can talk about use, that&#8217;s usually where regulation makes the most sense. </p><p>All that said, a revenue-based threshold is a lot better, because if you&#8217;ve just raised a bunch of money and you&#8217;re a bunch of PhDs sitting around a table, literally, that is what our typical startup looks like, at that point it&#8217;s very hard to deal with a complex compliance regime that can shut you down or just drastically delay what you&#8217;re doing.</p><p>If you&#8217;re at a point where you have half a billion dollars in revenue, it looks different. At that point, you need all the legal and accounting overhead. You probably have lawyers that negotiate your agreements. And you&#8217;ve created a little bit of an organization. There&#8217;s actually some exceptions. We&#8217;ve seen companies that basically get to half a billion in revenue just by accepting credit cards on their web page. And it&#8217;s still just a bunch of engineers. But in most cases, somewhere around that threshold, you really start seeing the company getting more professional, [becoming a] larger organization, and then it&#8217;s much easier for them to cope with these things. So it might still be a speed bump, but it&#8217;s at least not something that kills them anymore.</p><p>Matt Perault (29:17)</p><p>Yeah, I think that makes a lot of sense, and you&#8217;re picking a number that&#8217;s not totally arbitrary, I think, because that is the revenue-based threshold in California&#8217;s new disclosure statute, SB 53.</p><p>Okay, so you&#8217;ve talked a little bit, hypothetically, about how you can have small teams building AI applications that will hit this number quickly.
Without naming names, are there any examples that you can give that are more specific about when a company might hit a $500 million threshold?</p><p>Guido Appenzeller (29:44)</p><p>Yeah, we&#8217;ve had one company that has publicly said it reached the $500 million annual run rate after about 15 months of selling a product. And I don&#8217;t know the exact number, but there were tens of people at that point. So this is not a large corporation; this is still basically a bunch of engineers.</p><p>Matt Perault (30:01)</p><p>So still a tiny company. And then I think the important thing to recognize is how a $500 million threshold would work. It would mean that that company, with somewhere under a hundred people, significantly under a hundred people, is in the same compliance bucket as Google, with over a hundred thousand employees, and Microsoft, Meta, OpenAI, Anthropic. </p><p>Just to be fair, I think there are policymakers who would say, and that&#8217;s exactly right. That&#8217;s what we&#8217;re trying to achieve. You&#8217;ve hit that level of revenue, so one of your tens of people should include a general counsel, and with $500 million in revenue your tens of people can afford to hire outside firms to support your work. And I think that&#8217;s fair, but I think it&#8217;s also important to note that those are exactly the kinds of companies we want to be able to compete with the larger companies, and they&#8217;re likely not to have the same kind of capacity to manage a compliance framework that those much larger companies are going to have.</p><p>Guido Appenzeller (30:54)</p><p>It&#8217;s those fast-growing companies that may be the [future market leaders for US tech leadership]. That&#8217;s really what we want to build for the next generation here.</p><p>Matt Perault (31:05)</p><p>So one thing that you expressed concern about with a compute power or training cost threshold is its durability. Like how quickly are companies going to get to those numbers, and then in 10 years, the numbers are going to seem...</p><p>Guido Appenzeller (31:13)</p><p>Yep, absolutely. Any threshold in 10 years will probably be meaningless.</p><p>Matt Perault (31:19)</p><p>Yes, it will seem out of date. So what do you think about a revenue threshold? Like, SB 53 in California has a $500 million revenue threshold. How does that look 10 years from now?</p><p>Guido Appenzeller (31:30)</p><p>Look, we all understand inflation. So over time, we have the same problem there. But it&#8217;s much, much slower. Before a dollar today is worth only a thousandth of its value, it will take a lot longer than 10 years. So it&#8217;s not perfect, but it&#8217;s at least a step in the right direction.</p><p>Matt Perault (31:48)</p><p>I&#8217;m curious about your conversations with founders on these kinds of issues. What are the ways that these topics are seeping into their head space, and how do you talk to them about these issues and how to think about them as they&#8217;re building their companies?</p><p>Guido Appenzeller (32:04)</p><p>Yeah, it depends. One thing I&#8217;m hearing in some cases is that they&#8217;re saying, you know what, let&#8217;s not expand to Europe for now, it&#8217;s too complicated. So I think that&#8217;s just a good case study of what happens if you overregulate. We now have open source models that are no longer available in Europe, right? The license literally says
you can use this freely anywhere in the world, except in the European Union, or something like that.</p><p>If you&#8217;re trying to build a startup over there, that really hurts. You suddenly have an important piece of technology that you no longer have access to. The other thing is, in some cases, if this is the main thing that you&#8217;re doing, you&#8217;re going to go for it. You&#8217;re going to take risks. You&#8217;re just going to try to muddle through somehow. And if you get slowed down, you&#8217;re going to need to take the hit. If there&#8217;s an adjacency, where you went in one direction and could go into a second one, but the second area is tightly regulated, you may not do that, and then you give up on that part of the market. It just means things take longer, things take more money, you lose competitiveness. These startups are often a foot race where it&#8217;s about executing faster than other companies in other countries or other companies in the same country. And whoever&#8217;s the fastest wins. So I think this is really where it comes in: the competitive dynamics.</p><p>Matt Perault (33:17)</p><p>The other sort of bizarre thing about thresholds is sometimes, and you&#8217;ve raised this point in our conversations that we&#8217;ve had about it, they&#8217;re almost made to be gamed. And so you gave a mergers and acquisitions example: if there&#8217;s a hundred million dollar training cost threshold, and one company is at $49 million and merging with another company at $49 million, they sort of hold off on investing additional money in training to try to avoid the threshold. Is that kind of gaming something that you&#8217;re starting to hear people talking about?</p><p>Guido Appenzeller (33:46)</p><p>I think the gaming will come from the large companies, not the small ones. There are so many things you can do. You can take some of the training job and, instead of doing it in-house, give it to another company, maybe abroad, let them train something, and then take back the open source model. Now you can claim you don&#8217;t know how much training effort went into this thing. That&#8217;s probably true. You can potentially distill down models. There&#8217;s many, many ways to do this.</p><p>Look, at the core, regulating this kind of technology is incredibly complex. If you asked me, Guido, can you write a perfect regulation for the R&amp;D aspect of it, the honest answer would probably be, I can&#8217;t. I have absolute sympathy for any lawmaker that feels that this is difficult, because I think it really, really is. But I think, to some degree, that&#8217;s why in the past we focused on the use cases for the regulation, because those things are much easier: when a particular model does something with market applications, we all understand how this thing should be behaving, what is good and what is bad. It&#8217;s much harder if I have a bunch of researchers in a data center staring at very large matrix multiplications and I&#8217;m trying to regulate what they&#8217;re doing.</p><p>Matt Perault (34:54)</p><p>Guido, this was super fun. Thanks so much for doing this.</p><p>Guido Appenzeller (34:56)</p><p>Of course. Thanks, Matt.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice.
Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.  Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately. </em></p>]]></content:encoded></item><item><title><![CDATA[AI and the First Amendment]]></title><description><![CDATA[Part Three of an AI Policy Legal Primer]]></description><link>https://a16zpolicy.substack.com/p/ai-and-the-first-amendment</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/ai-and-the-first-amendment</guid><dc:creator><![CDATA[a16z AI Policy Brief]]></dc:creator><pubDate>Wed, 26 Nov 2025 15:00:42 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179786970/e88f36df6d87f7881bf2e72cd25a29b1.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><em>Welcome to part three of our AI Policy Legal Primer. In this series, we ask a panel of leading appellate lawyers to help define the constitutional boundaries shaping how federal and state governments can govern AI. If you missed earlier installments, check out <a href="https://a16zpolicy.substack.com/p/preemption-explained">Preemption, Explained</a> and <a href="https://a16zpolicy.substack.com/p/the-dormant-commerce-clause-explained?r=1838e4&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true">The Dormant Commerce Clause, Explained.</a></em> </p><div><hr></div><p>As lawmakers consider requiring companies to make disclosures about their AI models&#8212;such as risk reports, impact assessments, or content warnings&#8212;questions arise about whether those mandates could run afoul of the First Amendment.</p><p>The First Amendment protects against both restrictions on speech and laws that compel speech. For certain types of speech, courts generally permit <a href="https://a16z.com/ai-model-facts-transparency-that-works-for-little-tech/">factual, noncontroversial disclosures</a>, but are more likely to find a disclosure mandate to be unconstitutional if it compels expression on contested topics or forces companies to adopt messages they would not otherwise share. These types of mandates may also be found to be unconstitutional if they create undue burdens, such as by requiring companies to bear costs that make it harder for them to compete with their larger competitors.</p><p>In part three of our AI Policy Legal Primer, leading appellate lawyers <a href="https://www.arnoldporter.com/en/people/k/kedem-allon">Allon Kedem</a>, <a href="https://www.kslaw.com/people/paul-mezzina">Paul Mezzina</a>, and <a href="https://www.goodwinlaw.com/en/people/j/jay-william">William Jay</a> join <a href="https://x.com/MattPerault">Matt Perault</a>, head of AI policy at a16z, to explore how these principles apply to AI. 
They discuss recent disclosure laws, the line between constitutional and unconstitutional compelled speech, and emerging questions about whether model developers&#8217; design choices could themselves count as expressive acts protected under the First Amendment.</p><p>Understanding these constitutional limits will be essential for lawmakers seeking to craft durable, lawful frameworks for AI governance that enable small developers to compete on a level playing field with larger companies. </p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;1a594c8f-30f8-44c3-943b-83e83e03d801&quot;,&quot;duration&quot;:null}"></div><p><em>This transcript has been edited lightly for readability.</em></p><p>Matt Perault (00:00)</p><p>We&#8217;ve talked a lot about dormant Commerce Clause analysis. I&#8217;d love to get your take briefly, in the time we have left, on another potential concern that might be raised by some of these state laws. A lot of them have disclosure-related provisions as a part of them in some form. And so that raises possible First Amendment considerations. </p><p>Paul, starting with you, can you walk us through that concept? Why would the First Amendment apply to a situation where the government&#8217;s mandating certain disclosures?</p><p>Paul Mezzina (00:29)</p><p>Sure. So the First Amendment obviously protects free speech [and it prevents] laws that restrict speech, but it also prevents laws that compel speech. So laws that compel people to engage in speech they would not otherwise want to engage in always trigger some level of First Amendment scrutiny. How much scrutiny depends on what kind of speech is being compelled and in what context.</p><p>There&#8217;s a lot of nuance to this, but at a high level, the way courts think about it is if the compelled speech is purely factual and non-controversial, then it gets a much lower level of scrutiny and it&#8217;s generally going to survive under the First Amendment. So you think about all kinds of factual labeling requirements, nutritional labels on food, things like that. But beyond that, if the speech that&#8217;s being compelled by the government goes beyond the purely factual and non-controversial, it gets a higher level of scrutiny, and those kinds of compelled requirements often end up being struck down. So there have been a number of cases recently, almost all of them out of California, at least the ones I&#8217;m thinking of, where the state has required companies in different industries to prepare different kinds of reports.</p><p>There was a law, for example, that required social media companies to prepare reports discussing risks to children from their services and their data practices. There was a law requiring drug companies to prepare reports that explained their pricing decisions. And there was also a law that actually was partially enjoined just today by the Ninth Circuit that required companies to prepare reports about climate-related risks from their operations. So in all of these contexts, parties have challenged the laws and said, you, the state, by requiring these reports, are forcing us to engage in speech that we don&#8217;t want to engage in. And it&#8217;s not just purely factual and non-controversial. It&#8217;s actually requiring us to express opinions on some pretty controversial subjects. Some of those laws have been struck down. Some have survived in the Ninth Circuit.
Some are still being challenged or likely to be challenged in the Supreme Court. So this is very much a developing area of the law.</p><p>Matt Perault (02:39)</p><p>And could we look a little at how the doctrine might map onto some of the things that we&#8217;re seeing in AI?</p><p>So we&#8217;re seeing, I think, a few different types of things.</p><p>One would be that the company conducts impact assessments or those kinds of assessments of its own potential security risks.</p><p>Another would be making disclosures to the government. So that could be things like certain kinds of prompts that you receive or certain kinds of content that you produce. It could also mean requiring a company to say something to the government about its safety practices.</p><p>The third bucket would be warning labels, things that say things like, this content is generated using AI, or things that say, warning, you&#8217;ve been using the service for X amount of time; that is, requirements that a provider disclose to a user that the user has been using a service for a specific period of time.</p><p>So Willy, how would a court evaluate those various different kinds of transparency requirements under the First Amendment?</p><p>Willy Jay (03:36)</p><p>Yeah, I&#8217;ll actually add just one more, which would be saying that anything that children can access, you know, the AI has to generate, whether it&#8217;s images or text or whatever, with the possibility that minors are reading it kept firmly in mind.</p><p>[Some] of the examples that you gave, like the simplest examples, are more like a government warning label, like the requirement that you tell people you&#8217;ve been using the AI for this amount of time. And it&#8217;s just like the Surgeon General&#8217;s warning on cigarettes: those warnings have been challenged over the years as compelled speech, and various graphic pictures of diseased lungs have been struck down, but factual statements about smoking being hazardous to your health are still on the cigarette pack.</p><p>So I think that type of government warning or disclosure is likely to face the weakest type of First Amendment challenge. Things that basically don&#8217;t require the AI company or the model itself to mouth things that they might disagree with. So it&#8217;s clearly a message from the government, and it&#8217;s clearly something that is not kind of hijacking the message or content generated by the AI itself.</p><p>Matt Perault (05:00)</p><p>And Allon, maybe for you, what about the kinds of things the government might require around disclosing information about safety practices?</p><p>Allon Kedem (05:08)</p><p>So those were some of the laws out of California that Paul was talking about earlier, where the intention was to force social media companies and others in that area to articulate what their own policies were on harm and restricting speech and the type of threats to children or others, certain types of hate speech. And one of the reasons that those laws failed First Amendment scrutiny is that just trying to come up with a clear definition of what counts as harmful speech or hate speech is usually not a neutral endeavor.
And so actually the very fact of forcing a company to sort of articulate its terms in the language that the government identified was itself a change of their message.</p><p>And then you&#8217;ve got sort of even more extreme cases, like the Supreme Court case out of Florida and Texas, where those states were passing laws that were trying to essentially alter what speech the social media companies would make available and push to their users. So if you used a particular type of algorithm that screened out speech that seemed like it was hate speech, that might run afoul of the Texas or Florida law. And what the Supreme Court held is that those companies had a First Amendment right, or at least in that posture could make an argument that they had a First Amendment right, to sort of choose for themselves what sort of censorship to engage in, what speech to allow users to put on the platform, and what speech not to.</p><p>There was a really interesting concurrence by Justice Barrett, who said, in this instance, it seems fairly clear that the companies have made a very deliberate choice, which the First Amendment protects, not to allow certain forms of speech which they&#8217;ve decided are harmful. But you could have an instance in which artificial intelligence is used essentially to give the user whatever it is that the user wants. And it&#8217;s not totally clear, she said, in that instance that there would be any sort of decision, speech-based decision, content-based decision that the First Amendment would protect, because it would essentially be just a sort of automatic consequence of an algorithm that was generated, not for First Amendment reasons, but just for, essentially, you know, consumer preference reasons.</p><p>Matt Perault (07:33)</p><p>Paul, how do you think about that last point that Allon made? Like what [does] editorial discretion look like in an AI context?</p><p>Paul Mezzina (07:40)</p><p>Yeah, I think it&#8217;s really interesting. Justice Barrett&#8217;s opinion raises some really interesting questions. [I think] she&#8217;s right to say that what the First Amendment ultimately protects is human expression and expressive choices where people are trying to make decisions about the ideas they want to communicate. But where you locate that expression, and whether you can have expression in the form of an algorithm, is a really interesting question.</p><p>Justice Barrett sort of suggests, and this is an argument we&#8217;ve seen plaintiffs make in litigation over social media, that if a platform is using an algorithm to try to serve content that the users want, that that is somehow not expressive. And I wonder if they would apply that same thinking to sort of more rudimentary algorithms that companies have used for a long time.</p><p>Think about something like a radio station that plays the top 40, right? The producers at that radio station are not going out and choosing their favorite music to play to listeners. They&#8217;re using top 40 as an algorithm. It&#8217;s not a very complicated algorithm, but it&#8217;s an algorithm that looks at what music is popular and tries to serve that music to listeners. Now, suppose the government came into a top 40 station and said, we&#8217;re going to require you to play a certain amount of classical music every hour, and the reason we&#8217;re allowed to do that is because we don&#8217;t think you&#8217;re engaged in expression at all; you&#8217;re just using your top 40 algorithm. I think courts would probably have a problem with that.
I think they very well might say that the decision to play top 40 music is itself an expressive choice.</p><p>[In terms] of how that maps onto AI, courts, when they confront new technology, like to reason by analogy. They like to try to figure out: what things that I&#8217;m familiar with, that courts have dealt with before, does this look and feel like? What is it similar to in relevant ways? And I think developers who want to claim protection for certain aspects of model development are going to want to draw these analogies where they say, this is a conduit for my speech. Decisions that I am making in model development are expressive choices where I&#8217;m trying to influence or control the speech that is output by this model.</p><p>On the other hand, the argument on the other side is going to be, there&#8217;s no real expressive choice happening. This is all basically the construction of a machine. And I think the challenge for developers is going to be articulating how there are expressive choices being made as part of the development process.</p><p>Allon Kedem (10:19)</p><p>One of the sort of interesting features of certain large language models is that when you ask the people who develop these models how it is that the model came to think that writing a particular response in a particular way made sense, they will often say, look, it&#8217;s a black box. We don&#8217;t actually know what it is that the model used in order to decide that this word comes most appropriately after that word.</p><p>And so in some sense, you might say, well, you&#8217;re not really making a choice that the First Amendment should respect. On the other hand, they also do usually impose constraints, like they won&#8217;t give you an answer that violates copyright law. They&#8217;re not going to tell you the lyrics of a song, or the first or last page of a book, just because you ask. And that&#8217;s a deliberate choice that&#8217;s made in the product.</p><p>Matt Perault (11:09)</p><p>This is a helpful set of tools that we can use in evaluating all the legislation we&#8217;re likely to see ahead at the federal and state level.</p><p>Allon, Paul, Willy, thank you so much.</p><p>Paul Mezzina (11:19)</p><p>It&#8217;s a pleasure, thank you.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.  Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. 
You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately.</em> </p>]]></content:encoded></item><item><title><![CDATA[The Dormant Commerce Clause, Explained]]></title><description><![CDATA[Listen now | Part Two of an AI Policy Legal Primer]]></description><link>https://a16zpolicy.substack.com/p/the-dormant-commerce-clause-explained</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/the-dormant-commerce-clause-explained</guid><dc:creator><![CDATA[a16z AI Policy Brief]]></dc:creator><pubDate>Tue, 25 Nov 2025 15:00:52 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179786870/ae7921189bd7fcd7a71d88df3fa4367c.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><em>Welcome to part two of our AI Policy Legal Primer. In this series, we ask a panel of leading appellate lawyers to assess the state of the law in determining the respective, lawful roles of the federal and state governments in regulating AI. If you missed part one, <a href="https://a16zpolicy.substack.com/p/preemption-explained">Preemption, Explained</a>, check it out for a quick and helpful overview.</em></p><div><hr></div><p>The dormant Commerce Clause has been anything but dormant in the last couple of weeks. With Congress and the administration actively debating the proper roles of the federal and state governments in regulating AI, the dormant Commerce Clause has emerged as an important topic of debate. </p><p>The <a href="https://constitution.congress.gov/browse/essay/artI-S8-C3-7-1/ALDE_00013307/">dormant Commerce Clause</a> is a doctrine that prohibits states from passing some laws that unduly burden interstate commerce or that reach outside their borders, even in the absence of federal legislation. Although it leaves significant room for states to regulate, it does impose some limits on state governance, helping to ensure national markets remain unified, rather than fragmented by burdensome, extraterritorial, or discriminatory state rules.</p><p>We <a href="https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures/">first wrote</a> about the dormant Commerce Clause amidst numerous state AI proposals focused on regulating how AI models are built. In that piece, we reiterated what the Constitution outlines as the <a href="https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/">roles of the federal and state governments</a> in regulating AI: Congress should govern the national AI market, and states should police harmful uses of AI within their borders. The dormant Commerce Clause isn&#8217;t an obstacle to all state AI regulation, but a guidepost for effective AI governance that respects the important roles of both Congress and the states. </p><p>In part two of our AI Policy Legal Primer, leading appellate lawyers <a href="https://www.arnoldporter.com/en/people/k/kedem-allon">Allon Kedem</a>, <a href="https://www.kslaw.com/people/paul-mezzina">Paul Mezzina</a>, and <a href="https://www.goodwinlaw.com/en/people/j/jay-william">William Jay</a> are back to explain the dormant Commerce Clause and how it intersects with AI policy today. 
They discuss how courts use principles like extraterritoriality and tests like Pike balancing to weigh challenges, and examine what those frameworks could mean for recently enacted or pending state AI laws, including California&#8217;s SB 53, Colorado&#8217;s SB 205, and New York&#8217;s RAISE Act.</p><p>If you missed part one, check out <a href="https://a16zpolicy.substack.com/p/preemption-explained">Preemption, Explained</a>. And stay tuned for a conversation on AI and the First Amendment. </p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;7d3e6ebd-d504-4e00-8b50-42480e6a1772&quot;,&quot;duration&quot;:null}"></div><p></p><p><em>This transcript has been edited lightly for readability.</em></p><p>Matt Perault (00:05)</p><p>So we&#8217;ve talked about how Congress can act to preempt. We&#8217;ve talked about limits of Congress&#8217;s ability to preempt. Paul, can we bring you in here? How does the dormant Commerce Clause figure into this?</p><p>Paul Mezzina (00:16)</p><p>Sure, the dormant Commerce Clause is a limit on the states. And it&#8217;s called dormant because it&#8217;s something that the courts have taken by implication from the Commerce Clause. </p><p>The Commerce Clause is in Article 1 of the Constitution. It&#8217;s where Congress&#8217;s powers are listed, and it&#8217;s an affirmative grant of power to Congress. It says Congress shall have the power to regulate commerce with foreign nations and among the states and with the Indian tribes. So it&#8217;s written in terms of what Congress can do, not what states can&#8217;t do. But despite that, for a long time, going really back to the early 1800s, the Supreme Court has read the Commerce Clause as implicitly limiting state power. So the idea is when the framers authorized Congress to regulate interstate commerce, what they were trying to do was create a national marketplace where trade could flow freely across state lines and prevent states from interfering with that national market.</p><p>They intended for Congress&#8217;s power over national commerce to be exclusive. You can see this going back to a case in the early 1800s called Gibbons versus Ogden, where New York was trying to create a monopoly on steamboat traffic. And that law was actually struck down as preempted. But in that case, Chief Justice Marshall said that the law might also violate the Commerce Clause because it interfered with Congress&#8217;s exclusive power to regulate interstate commerce.</p><p>Matt Perault (01:42)</p><p>Can you talk us through recent developments of the dormant Commerce Clause? I think, just in terms of how people are generally perceiving it, if you&#8217;re paying a little bit of attention to recent Supreme Court cases but not necessarily following them line by line, there&#8217;s sort of the perception that the dormant Commerce Clause is becoming increasingly dormant. Is that accurate?</p><p>Paul Mezzina (02:03)</p><p>Well, it&#8217;s certainly accurate that it has been a source of controversy and it&#8217;s been in flux a little bit. And I&#8217;m sure we can talk about the pork producers case, but a big case that the Supreme Court decided a couple of years ago was a case where California had banned the sale within California of pork that came from other states that was from pigs that were confined and raised in ways that California defined as inhumane. 
And that law was challenged by pork producers from other states who said that California was interfering with interstate commerce, the national market for pork. And the Supreme Court ultimately upheld the California law, but they did so in a very splintered decision. And that&#8217;s created, I think, a lot of confusion about exactly what the state of the law is on the Commerce Clause. </p><p>There are clearly different groups of justices, and they overlap in sometimes confusing ways, who have different views about what the role of the courts should be in reviewing state law under this dormant Commerce Clause doctrine, and also different views on various aspects of the doctrine and how it should apply.</p><p>Matt Perault (03:15)</p><p>If there was a state law passed and someone was challenging it on dormant Commerce Clause grounds, what&#8217;s your best guess at how a judge would review it? Like what are the various different tests they might use to evaluate the law&#8217;s constitutionality?</p><p>Paul Mezzina (03:30)</p><p>Yeah, sure. So I think the doctrine has basically three branches, speaking broadly. </p><p>The first branch is a rule against discrimination. And this is something that basically all the justices agree on. States cannot pass laws that discriminate against interstate commerce. Some examples of that come from historic cases. There was a case where New Jersey banned out-of-state garbage from its landfills and said our landfills are only going to service New Jersey garbage. There&#8217;s a case where Hawaii gave tax breaks to liquor only if it was made from produce that was grown in Hawaii. Those kinds of discriminatory laws are invalid. </p><p>The second branch is one that is concerned with states regulating extraterritorially. The idea is that a state can only regulate things that happen within its borders or that have a sufficient connection to the state. The state can&#8217;t reach out and regulate what&#8217;s going on in other states. And that&#8217;s a branch of the doctrine that has been thrown a little bit into confusion after the pork producers decision. I think it still has some force where states are really reaching out and regulating outside of their borders, but exactly how much force that doctrine has after pork producers is a source of debate. </p><p>And then the third branch is what&#8217;s called Pike balancing. This comes from a case called Pike versus Bruce Church. And it&#8217;s basically a test where the court asks, is this state law imposing an undue burden on interstate commerce? And the court is supposed to weigh the burden that the law places on interstate commerce against the localized benefits of the law. And if the burdens are clearly excessive in comparison to the benefits, then the court is supposed to strike down the law.</p><p>Matt Perault (05:15)</p><p>I want to get into the controversy around extraterritoriality in a second. I would love for all of you to weigh in on that. But before we go there, can we just talk a little bit about Pike balancing? [Judges] do balancing tests all the time. When you hear about a cost-benefit analysis, I sort of think of that in economist terms; I&#8217;m not an economist, but I think about it as an economics exercise. How would a judge go about trying to do that in a disciplined way?</p><p>Paul Mezzina (05:42)</p><p>Yeah, it&#8217;s very difficult. 
And this is why, as we saw in the pork producers case, there is a distinct minority of justices on the Supreme Court who think that courts shouldn&#8217;t be doing this at all. They say this is really a balancing exercise that requires judges to compare incommensurable values, to weigh things that can&#8217;t really be weighed against each other in any legal way, and that requires courts to make value judgments. </p><p>For example, if the burden on interstate commerce is that it&#8217;s harder, it&#8217;s more expensive for businesses to engage in trade and commerce across state lines, but the localized benefits are potentially greater safety for state residents or maybe even serving the moral values of state residents, as in the case of that California law where California had a moral objection to the way certain pigs were being confined.</p><p>It&#8217;s a really hard question how judges are supposed to determine whether, for example, California&#8217;s moral interest is outweighed by the economic burdens on commerce. We can talk about different ways that courts have approached that, but I think one way to think about it is that courts are really looking for red flags. They&#8217;re looking for cases where either the burdens on commerce are especially severe and disruptive [or where] the benefits to the local market are really speculative. </p><p>So cases where states are imposing really severe burdens for reasons that seem far-fetched or maybe even potentially pretextual. And just to take an example, there are some cases, historically, where states have imposed really problematic restrictions on traffic through the state. </p><p>There&#8217;s a case where Iowa prohibited trucks from traveling on Iowa roads with 65-foot trailers, and 65-foot trailers were basically the standard in all of the other states in the Midwest. But if you wanted to drive your trailer through Iowa, you had to either not do it, navigate around and go through other states, or transfer everything to a smaller truck while you went through Iowa. And Iowa said this has benefits in terms of local safety. And the court said, first of all, this is really burdensome. This is going to be really disruptive to commercial traffic throughout the Midwest. And also, we just don&#8217;t believe you about the safety benefits. We&#8217;ve looked at this. We think this is really far-fetched. There&#8217;s really no clear benefit. 65-foot trailers aren&#8217;t any more dangerous than 55-foot trailers, which Iowa permitted. And so those kinds of laws, where the balance really seems out of whack, are the ones that courts tend to be comfortable striking down.</p><p>Matt Perault (08:26)</p><p>I think your truck example is making me want to get to an application to AI. How do we think about these laws in an AI context? But before we get there, Paul, you did allude to this controversy around how much weight to put on the extraterritorial prong versus potentially on others. We wrote a piece on the dormant Commerce Clause where we focused primarily on Pike balancing, and we looked a little bit at extraterritorial effects, but within the Pike balancing context.</p><p>Can you talk a little bit about how these two different prongs might be weighted? 
Another law professor, Kevin Frazier, at the University of Texas, has put a lot of weight on the extraterritoriality prong, drawing not just from the dormant Commerce Clause, but from other constitutional principles. So how much of that survives, and how would that be relevant to thinking about the analysis in the AI context?</p><p>Allon Kedem (09:10)</p><p>So it&#8217;s a little bit unclear how much survives and therefore exactly how it would apply. When we&#8217;re talking about extraterritoriality, we&#8217;re talking about one state&#8217;s attempt to essentially regulate conduct or commerce or something that takes place in another state. And it&#8217;s often not something that can sort of cleanly be divided into what happens in one state versus another. So you could take the example of a transaction that involves, let&#8217;s say, someone in Iowa selling a good made in Minnesota to someone in Maine. And the question is, what state does that transaction take place in? It&#8217;s a little bit hard to know. Or in some instances, you could have the use of a product in one state that has spillover effects into another. And so it&#8217;s not actually clear in that instance that you would necessarily be regulating the out-of-state conduct as opposed to the causing of effects within the state.</p><p>And for that reason, I think that doomed the argument of the pork producer challengers in California&#8217;s case, because really what the California law applied to was the sale within California of meat that had certain characteristics. </p><p>Now, it&#8217;s true that in order to comply with the law, if you were an out-of-state producer, you might have to take certain steps outside the state. But nevertheless, the thing that actually puts you on the wrong side of the law happened wholly within the state of California. </p><p>You can contrast that to an instance where a state passed a law that says essentially you can&#8217;t take some step outside the state. And let&#8217;s say they tried to use as an in-state hook the fact that the company just is registered to do business within the state or has employees within the state. But nevertheless, the thing that is being regulated is the out-of-state conduct. And in that instance, I think you would still have some vulnerability under the anti-extraterritoriality principle. That said, the majority in pork producers suggested, although didn&#8217;t squarely hold, that the only types of laws that are per se invalid under the principle are laws that essentially tie the price of goods in one state to the price of goods in another state. </p><p>So there were a number of laws, primarily in the area of liquor sales, where one state would essentially force a liquor seller to affirm that the prices made available within the state are no higher than those made available in other neighboring states. And the Supreme Court struck those down. And those rulings are still good law, but the pork producers majority suggested that it&#8217;s possible that other laws that don&#8217;t have to do expressly with tying might not get the same treatment. </p><p>But one caveat is, at the same time, it said, even if those laws aren&#8217;t invalid under the dormant Commerce Clause, they still nevertheless might run afoul of other constitutional principles, including the separation of powers and the sort of inherent limitations that each state&#8217;s sovereignty imposes on the sovereignty of a sister state. 
And so it would infringe the sovereignty of Iowa, for instance, if Florida were to enact a law saying that people are not allowed to engage in certain conduct within Iowa, because that would be an instance of Florida projecting its own sovereign authority outside its own state.</p><p>Paul Mezzina (12:30)</p><p>I&#8217;ll say the way I think about it, and I think there are different views on this, and it&#8217;s not clear yet where courts are going to come out. I think one clear takeaway from the pork producers case is that a state law is not going to be invalid just because it has practical effects outside of the state. </p><p>So in pork producers, what California was regulating was the sale within the state of pork that had certain characteristics, that came from pigs that were treated in certain ways. But the object of regulation was the in-state sale. And the farmers were saying, yeah, you&#8217;re regulating the in-state sale, but that&#8217;s going to have a whole lot of effects on our business in Iowa and North Carolina and other states. And the court said those kinds of out-of-state practical effects just aren&#8217;t cognizable under the Commerce Clause. We&#8217;re not concerned about that. </p><p>I think, on the other hand, what is still open after pork producers is when a state is actually directly regulating what happens outside of the state. So for example, if you imagine sort of a tweak on the law in the pork producers case: instead of California saying it is illegal to sell pork in California if it comes from pigs that were treated in a certain way, suppose California said it is illegal to raise pigs in an inhumane way if that pork later makes its way into California.</p><p>In that case, the object of the regulation is the way the pigs are being raised in, for example, Iowa. And it still has a connection to California. But I think that&#8217;s a fundamentally different mode of regulation. And that, I think, still raises some serious questions under the Commerce Clause.</p><p>Allon Kedem (14:17)</p><p>One other thing that the Supreme Court emphasized in the pork producers case is that if there is a way to essentially change your method of production or your way of doing business in order to comply with an enacting state&#8217;s law, then it severely undercuts, and in some instances renders impossible, a dormant Commerce Clause argument. And in that case, one of the things the court pointed out is that you could simply have a different method of production for California or just choose not to sell in the state whatsoever. And therefore, it was essentially up to the pork producers themselves to figure out how to comply with the law, and engaging in commerce in a particular way, in a particular preferred manner, is not something that the dormant Commerce Clause protects.</p><p>Paul Mezzina (15:04)</p><p>Well, just to pick up on that, because I&#8217;m interested in my fellow panelists&#8217; views on this, I think the point that Allon just made, as I understand it, is a view that was only taken by four justices and was disagreed with by five justices. 
But despite that, under what&#8217;s called the Marks principle, which is supposed to help you figure out, when the justices are divided, what is the controlling rationale that lower courts are supposed to follow, there&#8217;s an argument that the rationale that said there&#8217;s no burden on interstate commerce if you can change your method of doing business is controlling, even though a majority of the court disagreed with it. </p><p>So it puts you in a tricky position if you&#8217;re a lower court judge. Am I supposed to follow this minority view because it was technically controlling, or should I take into account the fact that five justices seem to have disagreed with it?</p><p>William Jay (16:02)</p><p>Just one thing to observe is that some justices really don&#8217;t like the dormant Commerce Clause, but support for the idea that one state can&#8217;t regulate things in other states might not be limited to the dormant Commerce Clause. There are some lower court cases, and even some older Supreme Court cases, that spin out alternative theories, like under the Due Process Clause. We&#8217;re going to spend most of the day talking about the dormant Commerce Clause, but there are some alternatives that a litigant would probably keep in their back pocket.</p><p>Matt Perault (16:31)</p><p>Can you say a little bit more about that, the Due Process Clause? Can you explain the rationale there, and then [what other] hooks there are to draw on beyond the Due Process Clause?</p><p>William Jay (16:40)</p><p>The Due Process Clause, I think, is the main one. Whatever Justice Gorsuch meant when he wrote in the pork producers case about the horizontal separation of powers, that&#8217;s not a specific provision of the Constitution. It might be a reference to a whole bunch of different aspects of the Constitution that keep states in their own lanes. But the one on which there is the most case law is the Due Process Clause, which is the idea that you can&#8217;t be deprived of life, liberty, and property without due process, including having a state punish you, taking away your liberty or property, for something that the state has no valid connection with. If you&#8217;ve never been to Idaho, but Idaho passes a law that says anyone who goes from Virginia to England and drinks a pint and eats stinky cheese shall pay a fine to the state of Idaho, Idaho&#8217;s got no connection with that travel, and it would violate due process to impose that kind of punishment. There have been a handful of examples, often involving some international angle and not just an interstate angle, in the past. The Supreme Court hasn&#8217;t had to reach that in a long time.</p><p>Matt Perault (17:59)</p><p>So I think it&#8217;s good we spent as long as we have on the foundation, because it kind of gives us the analytical lens to now look at state AI laws. So I&#8217;d love to get your thoughts on some of the ones that we&#8217;ve seen. </p><p>I&#8217;ll list three; two of them have been passed, one of them is pending. And Willy, maybe we can start with you to give some thoughts on what are the kinds of elements of these laws that raise concerns, and which are the things that are more likely to survive a potential challenge? </p><p>So one is California&#8217;s SB 53, which is a set of disclosure requirements for model developers. 
There&#8217;s Colorado&#8217;s SB 205, which was passed a couple of years ago and is now set to go into effect, I believe, this summer. And the legislature has been actively considering whether to revise it in some way. There&#8217;s been pressure from the governor and the attorney general and others to reconsider it in some form. That law is primarily focused on algorithmic discrimination. Neither the California law nor the Colorado law includes any provision limiting the scope of the law to development or deployment within the state. So I&#8217;m curious how you guys would assess that. And then the final one is the RAISE Act in New York, which hasn&#8217;t been signed into law, but passed the legislature. It&#8217;s on the governor&#8217;s desk for signature. And that would require developers to implement safety and security protocols, to describe in detail the testing procedure they use to assess a model&#8217;s potential for harm, and then to implement safeguards to prevent unreasonable risk of critical harm.</p><p>Willy, starting with you, what are the things in these laws that give rise to more of the kinds of concerns that might be taken into account in a dormant Commerce Clause challenge?</p><p>William Jay (19:36)</p><p>There are some examples in, I think, more than one of these laws that say, for example, until you have filed a set of disclosures with our state, you may not market a model that has been developed or deployed in certain ways. In other words, it purports to just be a disclosure requirement, but the teeth in the disclosure requirement are that you&#8217;re not supposed to be deploying the model anywhere, including outside the state that enacted the law, until you&#8217;ve filed the paperwork in the state. </p><p>And while that may not seem like a very large burden, and if we were just balancing benefits and burdens, that might not be a huge obstacle, it&#8217;s still sort of the camel&#8217;s nose into the tent: that you need New York or California&#8217;s permission to go and do things in the interstate or even international market.</p><p>Allon Kedem (20:44)</p><p>And Matt, one of the things to pay attention to with these laws is whether they specifically articulate a geographic reach for the law. And if they don&#8217;t, how those states&#8217; courts would interpret the laws, because a lot of times you&#8217;ll have a law that&#8217;s sort of stated in general terms: no one is allowed to engage in some activity. But nevertheless, in order to avoid dormant Commerce Clause and other constitutional concerns, the law might be interpreted as applying only to the product as deployed or used or sold within the state, as opposed to its sale, deployment, or development outside the state. And so there&#8217;s a question, I think, for at least some of those state laws, whether they are genuinely intended to reach out-of-state conduct, even if it&#8217;s not connected to any sort of direct in-state commerce.</p><p>Matt Perault (21:37)</p><p>I think this raises a really good question, because there&#8217;s the question of what&#8217;s written in the law, and then the realities of AI development and deployment, and how those two intersect. So if you take New York, I think the language is roughly developed or deployed in whole or in part within New York. So that is clearly a provision that limits the scope of the legislation. </p><p>But I&#8217;m curious how you think that would work with the realities of AI development. 
So you might have an open source developer in California, for instance, [who&#8217;s going to develop] the technology in California, but then have no control, sort of by definition, over its deployment, including its potential deployment in New York. You can have the reverse as well. How would you think about the realities of AI development against a jurisdictional provision in a bill like RAISE?</p><p>Allon Kedem (22:28)</p><p>And I think this goes back to the point we were just discussing in the pork producers case about whether you can segregate your production into different markets. So in the pork producers instance, at least theoretically, it was possible for pork producers to say here is meat that is destined for the California market that satisfies California&#8217;s standards. And then here is a stream of production for all the other states. And there&#8217;s a question in the instance of AI technology whether you can have that same sort of segregation of markets. </p><p>In some instances, it might be possible through geofencing or other technologies to say you just can&#8217;t access the website if you&#8217;re trying to do it from an IP location in a state that has a ban on it. In other instances, it might not. If you&#8217;re talking about open source technology that essentially can be used and deployed by anyone, redeployed, reused, incorporated into technology downstream, it may be that the AI developer loses any ability to segregate into different geographical streams.</p><p>Matt Perault (23:31)</p><p>Paul, how do you see it?</p><p>Paul Mezzina (23:34)</p><p>Yeah, I agree with that. I think a really important question that courts ask when they&#8217;re thinking about the Commerce Clause, and also when they&#8217;re thinking just about personal jurisdiction, is: can you be essentially found in violation of New York law without having directed your conduct toward New York? </p><p>So if you look at how courts have thought about state regulation of the internet over time, in the early days of the internet, there were some cases where courts said Vermont cannot regulate what&#8217;s on a blog or a webpage, because that just goes out to the whole world, and if we let Vermont regulate it, Vermont will be controlling conduct in all of the other states. And then Allon mentioned geofencing. As geofencing technology became better, you started to see courts saying, for example, there was a case involving a California law that required CNN to put closed captions on videos on its website. And CNN challenged that under the Commerce Clause, and California said, it&#8217;s fine, because you can use technology to identify when someone from California is accessing your website, and you can just provide closed captions to those people; you don&#8217;t have to provide closed captions in other states. </p><p>So I think that is going to be a really important part of how courts look at AI regulation: they&#8217;re going to ask, is this state law only going to apply to AI developers or deployers who are actively engaged in conduct within the state or directing their conduct to the state? Or is it potentially going to reach out and affect open source developers and developers who are not actually directing their conduct toward the regulating state and are just being pulled in because their product, through the stream of commerce, somehow made it to the state. 
</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.  Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately.</em> </p>]]></content:encoded></item><item><title><![CDATA[Preemption, Explained]]></title><description><![CDATA[Part One of an AI Policy Legal Primer]]></description><link>https://a16zpolicy.substack.com/p/preemption-explained</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/preemption-explained</guid><dc:creator><![CDATA[a16z AI Policy Brief]]></dc:creator><pubDate>Mon, 24 Nov 2025 16:01:54 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179605430/eef9e4467109b169db3e46aca1ef3ac3.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>There may be no more important debate in AI policy right now than how power to regulate AI should be divided between the federal and state governments.</p><p>We <a href="https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/">first wrote</a> about the respective <a href="https://a16zpolicy.substack.com/p/who-regulates-ai">roles of Congress and state governments</a> in early 2025, as we saw <a href="https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation">states throughout the country introducing bills</a> that would regulate how AI models are built. A patchwork of state laws poses a threat to Little Tech: small companies are at a competitive disadvantage with larger platforms when states like Texas, California, Florida, and New York impose different, and potentially conflicting, legal requirements. Startups don&#8217;t have the deep pockets and large legal teams that their competitors have, so they suffer when they are forced to navigate this type of regulatory landscape.</p><p>As we wrote then, there is an important role for both states and the federal government in governing AI. The starting point should be that Congress and state governments each play the role the Constitution assigns to them, with the federal government leading the regulation of interstate commerce and states policing harmful conduct within their borders. </p><p>These core constitutional principles have not historically broken down on party lines. 
While current AI preemption proposals have been primarily advanced by Republicans, bipartisan coalitions <a href="https://energycommerce.house.gov/posts/committee-chairs-rodgers-cantwell-unveil-historic-draft-comprehensive-data-privacy-legislation">have introduced similar concepts </a>in the past, and the Biden administration&#8217;s Justice Department <a href="https://www.supremecourt.gov/DocketPDF/21/21-468/228387/20220617195711500_No.%2021-468%20Natl%20Pork%20Producers%20v.%20Ross%20Final.pdf">raised concerns </a>about the extraterritorial effects and out-of-state burdens of California laws. It emphasized that &#8220;[t]he combined effect of those regulations [on pork production] would be to effectively force the industry to &#8216;conform&#8217; to whatever State (with market power) is the greatest outlier.&#8221;</p><p>Now, with speculation about potential Congressional action to clarify the role of the federal government in regulating AI, the same questions proliferate: What can Congress regulate? What can states regulate? And where are the constitutional limits on their respective governing powers?</p><p>We asked a panel of leading appellate lawyers to explain the state of the law on these questions. </p><p><a href="https://www.arnoldporter.com/en/people/k/kedem-allon">Allon Kedem</a>, partner, Appellate and Supreme Court practice, Arnold &amp; Porter, <a href="https://www.kslaw.com/people/paul-mezzina">Paul Mezzina</a>, partner, Appellate, Constitutional and Administrative Law practice, King &amp; Spalding, and <a href="https://www.goodwinlaw.com/en/people/j/jay-william">William Jay</a>, partner, Appellate and Supreme Court Litigation practice, Goodwin, join <a href="https://x.com/MattPerault">Matt Perault</a>, head of AI policy, a16z, to explain how federal preemption, the dormant Commerce Clause, and the First Amendment intersect with AI policy in this moment. </p><p>This is part one, focused on preemption. 
Stay tuned for upcoming conversations on the Commerce Clause and the First Amendment.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Rh4L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0848a39-6e5a-4b42-8dc6-440f14ef9bbc_1080x1035.jpeg" width="1080" height="1035" alt=""></figure></div>
class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;88901655-c53e-428f-93c5-4a493f870b97&quot;,&quot;duration&quot;:null}"></div><p></p><p><em>This transcript has been edited lightly for readability.</em></p><p>Matt Perault (00:00)</p><p>So I think there probably is no debate in AI policy that&#8217;s more important right now than what the role is that states should play in AI governance and what the role is of the federal government. We saw this play out in a really brilliant bright way over the summer with the debates around the AI moratorium, where Congress introduced a proposal to preempt some aspects of state policy. And there were lots of questions about what was lawful and what wasn&#8217;t. </p><p>So what is Congress permitted to do and where are the bounds on congressional action? What are states permitted to do and what are the bounds on state action? So we thought it would be really helpful to gather a panel of esteemed lawyers to provide us with some guidance on those questions and understand more deeply what the legal bounds are for how we think about AI governance. So maybe we can kick it off, Allon starting with you and talk a little bit about preemption just to understand exactly what the term means and then what are the legal guardrails around what is permissible for Congress to do when it&#8217;s attempting to preempt state authority.</p><p>Allon Kedem (01:02)</p><p>Sure. So preemption is a legal principle that flows as a consequence of the Constitution Supremacy Clause, which says that the Constitution and laws made under it, in other words, federal laws, are supreme, notwithstanding anything under state law. And that means that when there&#8217;s a conflict between federal law and state law, federal law needs to prevail. And there can be several different types of conflicts and thus different types of preemption. </p><p>One, pretty straightforward one is express preemption where Congress says that states are prohibited from enacting certain types of laws or they say that federal law predominates over state law in a particular area. 
There&#8217;s also a form of preemption called conflict preemption, where sometimes it&#8217;s impossible for federal law and state law to be complied with at the same time, in which case federal law again prevails. That could be an instance where, for instance, federal law tells a regulated party to do something and state law says that they shouldn&#8217;t do it, or vice versa. There can also be a conflict even at a sort of higher level of generality, where federal law embodies certain policies or values that state law directly conflicts with and undermines. </p><p>And then the final area is something called field preemption, which is usually where there is some subject that is regulated by the federal government in such a complete and pervasive way that it just leaves no room for states to enact statutes in the same area, even if the state statutes aren&#8217;t directly in conflict with federal law. So any one of those could lead to a finding of preemption, in which case federal law would prevail over state law.</p><p>Matt Perault (02:42)</p><p>One thing that would be really helpful for you to clarify is whether it&#8217;s necessary for Congress to take action in a specific area in order to preempt an equivalent area in state policy. So we hear all the time things like: in order to preempt state laws on transparency or state laws on liability or state laws on national security issues related to AI, Congress needs to take action in the equivalent area. So you need congressional action on transparency in order to preempt state activity on transparency. Is that the case?</p><p>Allon Kedem (03:13)</p><p>Not necessarily. So you do need some affirmative source of federal law for there to be a conflict with state law, but it doesn&#8217;t have to match the state law precisely. So it doesn&#8217;t have to be the case, for instance, that there&#8217;s a federal transparency law and therefore states can&#8217;t enact laws in the same area of transparency. It could be a federal law dealing with one subject in a very general way that has knock-on effects for state law. An example of that could be something like Section 230, which was a law that Congress passed at the dawn of the mass internet age that had all sorts of preemptive effects on state tort law, for instance, or in some instances on state contract law, even though Section 230 itself is not a tort law or a contract law.</p><p>Matt Perault (04:00)</p><p>I think some people have in their heads the idea that if Congress is going to act on preemption, we need to see a federal AI framework, a law that&#8217;s AI related and has lots of provisions related to lots of different AI-related issues. It sounds like what you&#8217;re suggesting is that&#8217;s one vehicle, but it wouldn&#8217;t be the only way, at least legally, to accomplish preemption if that&#8217;s what Congress was trying to do.</p><p>Allon Kedem (04:25)</p><p>That&#8217;s correct. 
Although, you know, one feature of preemption is that because there is a sort of default presumption that states are independent sovereigns who get to enact their own laws, it is usually advisable, if Congress does want to preempt state law, for it to be a little bit explicit about it, or at least to make its wishes in the area known with sufficient clarity, because otherwise you run the risk that whatever Congress has in mind is not going to get carried out.</p><p>Matt Perault (04:55)</p><p>So Willy, that&#8217;s a good opportunity to segue to you. Could you talk a little bit about where the limits of federal authority would be? If Congress decides that it wants to preempt some state activity, is it able to do that however it wants, or are there some limits to its authority to enact something that preempts?</p><p>William Jay (05:15)</p><p>The main limits come from the Constitution, and they can come from basically three things. One is the extent of Congress&#8217;s power to regulate. So for example, Congress has granted power to regulate interstate commerce, but there are some things that historically the Supreme Court has considered to be too local and too non-commercial to fall within the commerce power.</p><p>Some have to do with the states themselves as sovereigns, under the sovereignty that states retain under the 10th Amendment. And some have to do with the individual liberty that the Bill of Rights and other provisions of the Constitution guarantee. So obviously Congress can&#8217;t create a federal regulatory scheme that violates one of the protections of the First Amendment or the Fifth Amendment, for example. But for a long time, Congress&#8217;s power seemed to be basically unlimited, except where it stepped on one of those individual guarantees. </p><p>In the mid-1990s, the Supreme Court started putting more teeth back into the limits of Congress&#8217;s power under the enumerated powers doctrine, the idea that our national government has only the powers that the Constitution grants it and no more. But still, in the technology space, almost everything intersects with commerce, the channels of commerce, the way that people do business. The commerce power is still very, very broad, even if there have been a handful of guardrails put around the commerce power.</p><p>Matt Perault (06:57)</p><p>Can you talk a little bit about how those principles might map onto something specific that we would see in AI? You could use the moratorium from the summer as one example, but that doesn&#8217;t have to be the only one. What are the kinds of things that federal lawmakers should consider when they&#8217;re thinking about whether legislation is going to run into 10th Amendment concerns?</p><p>William Jay (07:16)</p><p>Well, one thing that Congress often does is it puts something jurisdictionally connected into its regulatory statutes. Even for things that don&#8217;t sound like commerce, like whether there should be guns near schools, Congress can put something into a statute that says in or around a program that is funded with federal dollars, or things like that. </p><p>So in the context of the moratorium, for example, you might think about what are the areas that are traditionally the states&#8217; to regulate that a blanket moratorium might step on and that might wind up being a subject of challenge. So for example, states usually regulate the practice of medicine. 
And so I could see some argument that a very broad but non-specific federal law shouldn&#8217;t reach the actual standards for medical malpractice, including whether doctors should or should not use AI for diagnosis.</p><p>Matt Perault (08:16)</p><p>And what about commandeering principles? If you&#8217;re saying no state shall, is that essentially a dictate? Like Congress dictating to state lawmakers that they&#8217;re not able to act?</p><p>William Jay (08:27)</p><p>So there&#8217;s a difference between sort of the positive and the negative. In other words, Congress can stop states from doing things that violate federal law. Federal law is supreme; state law is subordinate in that sense. But what Congress can&#8217;t do is enlist the states basically as its cat&#8217;s paws. </p><p>So you can&#8217;t say in a federal law, the states shall pass their own laws that do X. You can say no state laws that look like X, but you can&#8217;t say the states shall pass laws that do X. Nor can you enlist state officials, which could include everything from sheriffs to boards of education to boards of regents, in doing the hard work of enforcing federal regulation if they don&#8217;t want to. </p><p>And this has come up in the immigration context quite a bit recently, where states and localities have said, our cops will not enforce federal immigration law. The federal government doesn&#8217;t like that, and there&#8217;s been pushback about that. </p><p>But the basic principle is what you alluded to, Matt: the idea that the federal government can&#8217;t commandeer the states into doing the national government&#8217;s work. It has to do that itself.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.  Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately. 
</em></p>]]></content:encoded></item><item><title><![CDATA[Who Regulates AI?]]></title><description><![CDATA[Watch now | The balance of power between Washington and the states could define America&#8217;s AI future]]></description><link>https://a16zpolicy.substack.com/p/who-regulates-ai</link><guid isPermaLink="false">https://a16zpolicy.substack.com/p/who-regulates-ai</guid><dc:creator><![CDATA[a16z AI Policy Brief]]></dc:creator><pubDate>Thu, 20 Nov 2025 14:01:55 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179330362/dd7b5a763aa7324c101e1415e865870d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><em>Welcome to the first in a series of conversations from the a16z AI Policy Brief featuring policy experts, researchers, and builders on the forefront of AI development. Each discussion explores ideas and debates shaping AI public policy today. </em></p><div><hr></div><p>This question has been at the center of policy debates since AI adoption began to soar in the United States and around the world. States have introduced <a href="https://www.multistate.ai/updates/vol-71#:~:text=By%20our%20count%2C%20this%20year,full%20edition%20of%20this%20update.">more than 1,000 AI bills</a> this year alone, spanning Alaska to Florida.</p><p>Many of these state laws would impose paperwork-heavy compliance regimes aimed at how AI models are built, rather than targeting the harmful uses of the technology. As <a href="https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/">we&#8217;ve written previously</a>, a patchwork of state AI laws could fracture the national market, hit startups with burdensome compliance costs that make it harder for them to keep pace with larger companies, and undermine US competitiveness in the global AI race.</p><p>The <a href="https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures/">Constitution divides power</a> between the federal and state governments for exactly this reason. States have authority to police harmful conduct within their borders. In AI policy, this means that states should regulate harmful in-state uses of AI, like fraud and consumer protection. But Congress governs the national market and interstate commerce. When states attempt to set rules for how AI models are built, they risk exceeding their constitutional authority and setting national standards that bind the entire industry. A state like California, New York, Texas, or Florida could set the rules for the nation.</p><p>In this conversation, Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, joins Jai Ramaswamy, chief legal and policy officer, and Matt Perault, head of AI policy at a16z, to discuss why the question of <em>who</em> regulates AI is just as important as <em>how</em> we regulate it. </p><p>They explore the constitutional guardrails that keep states and the federal government in their respective roles, the real-world risks of a fragmented AI market, and why keeping rules for AI development consistent across 50 states is critical to America&#8217;s ability to lead in AI. It&#8217;s a dynamic conversation that ranges from today&#8217;s AI bills to the Framers&#8217; debates about federalism. 
If you&#8217;ve ever found yourself in an animated dinner party discussion about the Articles of Confederation, this one&#8217;s for you.</p><p>Key takeaways from the conversation:</p><blockquote><h2>Good intentions, wrong target</h2><p><em>&#8220;...all of this amounts to state legislators who pride themselves on being very close to their constituents and who really want to be responsive to public policy concerns feeling as though this is a moment where regulating first and asking questions later is a wise strategy, which isn&#8217;t necessarily the way good public policy gets made&#8230;&#8221;</em></p></blockquote><p>State lawmakers may want to be responsive to concerns from their constituents about the impacts of AI. These valid questions from voters deserve attention. But sweeping bills that aim to set national standards for how AI is built rather than how it&#8217;s used within a state&#8217;s borders cross the line of state authority and may not actually make residents any safer. The group discusses recommendations for how states can target harmful local uses of AI.</p><blockquote><h2>Regulating how models are built reshapes the national market</h2><p><em>&#8220;But the issue arises when states start to go up the tech stack, when they&#8217;re no longer dealing just with AI deployers and instead are trying to regulate the developers themselves, when they&#8217;re trying to interfere with the actual technology, how AI is being trained, what thresholds have to be crossed before it&#8217;s deployed, all of these questions that I think a lot of folks would agree amount to national questions.&#8221;</em></p></blockquote><p>States have long had the power to govern the use of any technology within their borders&#8212;for example, deciding how technologies show up in schools, hospitals, or public spaces. But when they start setting rules for how AI models are built, they&#8217;re effectively defining the underlying technology. Particularly considering the way AI models are developed today, using open-source and remixed models, that becomes a national question. Regulating use protects people; regulating the technology itself reshapes the national market.</p><blockquote><h2>National technologies need national rules</h2><p><em>&#8220;We are not going to survive as a nation if we persist under our current fragmented economic approach&#8230; This needed to be reserved to the national government because to have a fragmented market system was going to undermine the viability of the country itself.&#8221;</em></p></blockquote><p>The group traces today&#8217;s AI debate back to the founding era. Under the Articles of Confederation, fragmented state policies threatened to split the nation apart. The Constitution&#8217;s Commerce Clause solved that by giving Congress&#8212;not individual states&#8212;the power to regulate interstate commerce and preserve a unified national market. That same principle applies today: AI models move across state lines, and regulating these systems as a piecemeal patchwork risks repeating the very division the founders warned against. </p><blockquote><h2>Local stakes, national benefits</h2><p><em>&#8220;These are questions of profound significance to voters right now. If you go and ask a voter, what is the most important thing you&#8217;re looking for in 2026? It&#8217;s affordability. And what is the thing that could drive down the cost of healthcare? AI. What is the thing that can drive down the cost of education? AI. 
What&#8217;s the thing that can start to improve our transportation systems, our energy systems? I can keep going&#8230;&#8221;</em></p></blockquote><p>Kevin reminds us that decisions about how we regulate AI will affect how we live: they will have significant implications for the cost of living, productivity, and societal progress. If we put it to use in the right ways, AI can lower prices and improve accessibility for everyday services, from healthcare to education to transportation. Overly broad state laws that slow AI development ultimately make these benefits harder to reach. Getting this balance right impacts real economic outcomes for the people lawmakers serve.</p><blockquote><h2>Global competition raises the stakes</h2><p><em>&#8220;&#8230;you&#8217;re destroying any hope of competition in this realm, of creating a national market where small and big players can compete on a level playing field.&#8221;</em></p></blockquote><p>While US developers navigate conflicting state laws, competitors abroad, including China, are moving fast to standardize and scale. As states make it harder for startups to keep pace with their deep-pocketed competitors, China is fostering a dynamic innovation ecosystem that enables new companies to emerge. Fragmentation at home means losing ground abroad. To stay competitive, the US needs a consistent national framework that lets builders innovate at speed and scale.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://a16zpolicy.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://a16zpolicy.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;0a6180b8-28e0-40a0-90b4-830143610aa1&quot;,&quot;duration&quot;:null}"></div><p><em>This transcript has been edited lightly for readability.</em></p><p>Kevin Frazier (00:00)</p><p>Montanans didn&#8217;t like California, Californians don&#8217;t like Floridians, Texans don&#8217;t like anyone. I&#8217;ve lived in all of these places and no one wants to be under the thumb of any other state.</p><p>Kevin Frazier (00:12)</p><p>If we let California redesign the fundamental engine of all of our cars, for example, that&#8217;s gonna lead to nationwide chaos.</p><p>Jai Ramaswamy (00:19)</p><p>And this gets us to the question of who regulates AI, which really becomes as important as how we regulate AI, because in some senses, the who determines the appropriate how.</p><p>Kevin Frazier (00:29)</p><p>As soon as you are admitting to trying to take the role of Congress, whether or not you think Congress should or should not be doing something, you are exceeding the authority of the state.</p><p>If you go and ask a voter, what is the most important thing you&#8217;re looking for in 2026? It&#8217;s affordability. And what is the thing that could drive down the cost of healthcare? AI. What is the thing that can drive down the cost of education? AI. What&#8217;s the thing that can start to improve our transportation systems, our energy systems? 
I can keep going, but we only have so much time.</p><p>Matt Perault (01:05)</p><p>Kevin, Jai, thanks for joining this conversation today.</p><p>Kevin Frazier (01:08)</p><p>Thanks for having us, Matt.</p><p>Jai Ramaswamy (01:08)</p><p>It&#8217;s great to be here.</p><p>Matt Perault (01:10)</p><p>So the focus of our conversation is state AI policy. And I think that&#8217;s the focus of the conversation because states are really at the forefront of AI regulation. We have seen very few AI bills at the federal level, but lots and lots at the state level. So Kevin, can we start with you? Can you give us an overview of what we&#8217;re seeing, both in terms of volume and kind?</p><p>Kevin Frazier (01:31)</p><p>Yeah, so if we just look at the sheer volume of state legislation, you&#8217;re going to need to clear off your bookshelf, because there are approximately 1,100 bills pending before state legislatures in 2025 alone, which is just insane. And we could debate the definition of AI-related bills for a long time. But the fact of the matter is we&#8217;re seeing folks from Alaska all the way to Florida debating how to regulate AI. And I think a lot of this just has to do with the immense uncertainty that people perceive existing in the AI space, right? They read headlines about job displacement occurring in tech, occurring in the creative industries, occurring for just about any profession that has to do with technology. And they&#8217;re fearful: what am I going to do for my constituents in this regard? They&#8217;re hearing from constituents about the energy usage of data centers and the water usage of data centers. </p><p>And there are obviously very coordinated constituencies that have strong environmentalist interests. And then of course you&#8217;re hearing from folks about some of the child safety issues that are arising as we see AI companions become more and more ubiquitous, depending on who you ask, which I&#8217;m sure we may get into. But all of this amounts to state legislators who pride themselves on being very close to their constituents and who really want to be responsive to public policy concerns feeling as though this is a moment where regulating first and asking questions later is a wise strategy, which isn&#8217;t necessarily the way good public policy gets made, but in terms of talking points when you&#8217;re running for reelection in 2026, saying you did something about AI is a pretty good message to share. </p><p>And so I think what we&#8217;re seeing is just a natural response of a group of state legislators who want to be known as the AI person in their community, who are trying to stake out some sort of territory for taking an affirmative response in opposition to what a lot of people I think would say we did with respect to social media. So I often say that this is sort of the social media hangover phase where everyone&#8217;s saying, all right, we will not get tech wrong again. So the best thing to do is to just jump on it and hope for the best. And that&#8217;s playing out right now.</p><p>Matt Perault (03:49)</p><p>So obviously 1,100 bills is a large number, and there are a range of different types of bills included in that number. Can you give a sense, on one side of the spectrum, of what the bills are that you see as positive or benign, and then what are the kinds of bills that you&#8217;re tracking that you think raise more concerns?</p><p>Kevin Frazier (04:05)</p><p>So I think that the legislation that errs on the side of sort of &#8220;let&#8217;s do it&#8221; is positive. 
I think it&#8217;s responsive to the proper role that states are supposed to play in these sensitive use cases. So when we&#8217;re talking about, for example, how should a doctor use AI in a medical setting in a specific community? That is very much a state question: what sort of medical services do we want provided? How do we want to make sure people are receiving care?</p><p>That&#8217;s something that I think is naturally within the ambit of states. Same goes for a lot of these educational questions. When do you want to see an AI tool being deployed in a K through 12 education setting? There&#8217;s no right answer to that question. There&#8217;s still a lot of unsettled debates about when and how to introduce kids to AI in an educational context. So states trying to figure that out and mapping requirements onto school districts, for example, makes a heck of a lot of sense. And we&#8217;re seeing these bills pop up across the country. </p><p>But the issue arises when states start to go up the tech stack, when they&#8217;re no longer dealing just with AI deployers and instead are trying to regulate the developers themselves, when they&#8217;re trying to interfere with the actual technology, how AI is being trained, what thresholds have to be crossed before it&#8217;s deployed, all of these questions that I think a lot of folks would agree amount to national questions. Because what we&#8217;re fundamentally tinkering with here, with respect to bills like SB 205 in Colorado and, depending on who you ask, SB 53 in California and some of these more onerous pieces of state legislation, is changing the fundamental way that AI is going to develop. And that&#8217;s a national question in my opinion, because it&#8217;s somewhat akin to saying that one state has the authority to redesign the engine.</p><p>If we let California redesign the fundamental engine of all of our cars, for example, that&#8217;s gonna lead to nationwide chaos. If you want to instead change the speed limits in California or change the threshold before you can have a driver&#8217;s license in California, that&#8217;s fine. You&#8217;re not changing the underlying technology itself.</p><p>When states start to regulate how the technology itself is developed, then I think we see states interfering with what ultimately is a question that should be left for Congress.</p><p>Jai Ramaswamy (06:35)</p><p>Yeah, I think that&#8217;s a really good way of framing the question. I think the real issue is to what extent the activity that we&#8217;re seeing reflects a genuine concern about protecting the citizens and residents of a particular state versus something more, which is like, look, the federal government hasn&#8217;t acted. And so we are going to step into the shoes of the federal government to push them along, maybe be the first in the marketplace of ideas, whatever the motivation is. The former bucket seems to me to be exactly the type of things that states should be doing. And the latter really starts kind of getting into impermissible areas. </p><p>I guess, Kevin, the only thing that I would add to what you said, which I thought was right on point, is this, stepping back for a second and putting on my historical hat: in a previous vocation, I was a historian of political thought. And now I do that as an avocation, not as a vocation. But I think people forget sometimes that the federal government is a government of enumerated powers. 
And what that means is that the powers that it can exercise are enumerated in the Constitution or are direct implications of what&#8217;s enumerated in the Constitution. Whereas states have historically been seen as exercising plenary police powers, meaning that they have a broader range of things that they can legislate and regulate in terms of human conduct within their borders. And I think this is where the debate gets super interesting when you map on that kind of history. We should realize that that division was put in place for a very specific reason, which is that in the early days of the Republic, when we had the Articles of Confederation, you had states that could have gone to war with each other because some were blockading other states, and there were actual commercial restrictions being put on each other that could have led to the dissolution of the union. And that&#8217;s what gave rise to the Constitution itself. So this is actually a foundational issue, I think, within the larger structure of our government. And I think that the real question is, to what extent are the issues that the states are concerned about really about their own citizens&#8217; concerns, and to what extent are they impinging on national questions? Do you have a sense? I guess it&#8217;s kind of an unfair question, but of the 1,100 bills that we see, how many of them are really motivated by genuine concerns of exercising plenary state power under the 10th Amendment versus really the enumerated powers under the&#8230;Constitution? What would you say, if you had to sort of game it out? Probably an unfair question for you, but...</p><p>Kevin Frazier (09:23)</p><p>Oof. You know, yeah,</p><p>I don&#8217;t know if I&#8217;ll be able to be as precise. I tend to avoid the P-Doom-like guesstimates of where my thoughts land on certain things. I do think that the vast majority of those bills are narrow, are directed towards police power type matters, are trying to be responsive to what are regarded as local concerns. And I think as you pointed out, Jai, that&#8217;s within the ambit of the state to make sure that they are responsive to truly local concerns. </p><p>But Jai, as you teed up, not enough people are paying attention to the fact that some of the sponsors of these bills are explicitly stating, we think we need to act on behalf of the American people and pass this legislation to protect them from X, Y or Z. And that is fundamentally not the authority of the states, right? As soon as you are admitting to trying to take the role of Congress, whether or not you think Congress should or should not be doing something, you are exceeding the authority of the state. And to admit so blatantly that you think your state should be the one that acts as this sort of national protector is just wild, because I&#8217;ve lived in at least seven states. I actually lose count of how many states I&#8217;ve lived in. And I can tell you that each one of those states would hate to adhere to the laws of another state. Montanans didn&#8217;t like California, Californians don&#8217;t like Floridians, Texans don&#8217;t like anyone. I&#8217;ve lived in all of these places and no one wants to be under the thumb of any other state. And yet we&#8217;re seeing these apparently wise and beneficent state legislators saying, don&#8217;t worry, rest of the country, we&#8217;ll do it on your behalf. 
</p><p>And Jai, I appreciate you bringing up the Articles of Confederation because I&#8217;m explicitly banned from being the first to bring it up. My wife says I just have to stop talking about the Articles of Confederation. But you are so right that we have lost sight of what actually motivated the founders to move away from the Articles of Confederation. Now, I&#8217;m going to raise you one. You mentioned the Articles. I want to raise you the Annapolis Convention, right? This took place before the Constitutional Convention. We had just a handful of states get together, and these were the biggest states, mind you. Virginia was there, right? Pennsylvania was there, New York was there. And they realized our economic state is deplorable. We are not going to survive as a nation if we persist under our current fragmented economic approach. And that&#8217;s what teed up the Constitutional Convention. And so folks need to realize that when we&#8217;re talking about the Commerce Clause, as I know we&#8217;re going to dive into, the founders were very, very intentional that this needed to be a congressional power. This needed to be reserved to the national government, because to have a fragmented market system was going to undermine the viability of the country itself. So having that context is important. </p><p>But I also just want to add one other thing, which is a quick quote from a North Carolina legislator, just to frame the popular understanding of how the founders regarded state governments themselves. This is from James Iredell of North Carolina, who said that the North Carolina state legislature of which he was a part passed, quote, the vilest collection of trash ever formed by a legislative body. Yeah, you think we got trash. </p><p>Jai Ramaswamy (13:00)</p><p>And we think it&#8217;s bad now, right? We think our politics are bad now.</p><p>Kevin Frazier (13:05)</p><p>They got bags full of trash, streets littered with it. It&#8217;s not like they said, you know, &#8220;state governments, they&#8217;re super on the up and up; let&#8217;s defer to their wisdom on national matters of economic concern.&#8221; And I&#8217;m not saying that state legislators aren&#8217;t hardworking, that they&#8217;re not attentive to important considerations, but there&#8217;s a reason why the founders said we need to cabin off and clearly designate who&#8217;s responsible for what.</p><p>Jai Ramaswamy (13:31)</p><p>The only thing I would add there, and I think this is lost, is that it&#8217;s fair to say the founders believed that local and state governments were, in a sense, the place where the people would be represented the most. Their interests would be more closely represented, and the federal government would, in a sense, have a harder time being close to the will and wishes of the people. Where I think that the federal government steps in, and this is key because we&#8217;re going to be talking about the global implications of this in a minute, is here:</p><p>Back then, the issue was that if we started to separate on commercial terms, it may very well be that some of the smaller states would gravitate to Great Britain and the sort of commercial empire that Britain had built, and destroy the promise of a unified national state. And so the reason for the federal government having this kind of lock on interstate commerce was as much an international issue as it was a domestic issue. 
It was that, to present ourselves as a united set of colonies, and now independent states, as a single national entity, the national government had to be able to set policies that would, in a sense, drive international commerce. That was sort of the key. But Matt, I think it&#8217;s fair to say that&#8217;s kind of the concern here as well, right? I mean, we&#8217;re worried about the global implications of falling behind in the race for AI.</p><p>Matt Perault (15:00)</p><p>Yeah, I think that seems exactly right. I mean, I&#8217;m trying to put on my state lawmaker hat, and it&#8217;s not a role I do easily. And it seems like there are two senses of duty that a state lawmaker might have when they&#8217;re thinking about AI policy. The first, which I really understand, is: I was elected to office. I had a policy agenda or set of values in the world that I wanted to project once I got into office. And I want to legislate and then I want to see those laws enforced. And I think when we saw the moratorium debate over the summer, and we supported the moratorium, I also understood state lawmakers hearing the term moratorium and feeling like it undermines that duty that they have: I was elected and I want to do something good on the policy side. The second duty I think state lawmakers are feeling in AI now is one that Kevin alluded to, which is Congress hasn&#8217;t acted, and so therefore we need to act in place of Congress. And I think that&#8217;s what we&#8217;re talking about now, the sort of muddled terrain between what is the role of a state lawmaker and what really should be the domain of Congress. And as we&#8217;ll discuss, the fact that Congress has or hasn&#8217;t acted doesn&#8217;t really change the remit of a state.</p><p>Jai Ramaswamy (16:09)</p><p>Yeah, I think that&#8217;s right. And this gets us to the question of who regulates AI, which really becomes as important as how we regulate AI, because in some senses, the who determines the appropriate how. I think it&#8217;s fair to say that states have historically, in the realm of technology, in the realm of commerce, regulated misuses of various technologies, of various instrumentalities. Whereas the federal government really has a bigger role to play when we&#8217;re talking about setting national standards. To your point, Kevin, about cars: an underlying technology would be very difficult for 50 states to regulate consistently while creating a national market, as opposed to the federal government doing it. So I think that is part of it. And this may be a good way to segue into kind of all the nerdy terms, you know, the preemption debates and the dormant Commerce Clause. Because I think, as Matt pointed out, the issue that raised a lot of hackles with respect to the moratorium was that it was called a moratorium, which seemed new and novel and something that we don&#8217;t do in the United States. We have preemption, but, you know, what&#8217;s a moratorium?</p><p>On the other hand, I think the other thing was a bit of, I don&#8217;t want to call it misinformation, because I don&#8217;t know if it was intentional, but a misunderstanding of even that legislation. 
Because even under that legislation, I think there was a role for states; it didn&#8217;t purport to put a moratorium on all forms of regulation, only on certain forms of state regulation.</p><p>So Kevin, it would be great if you could kind of give the audience maybe a, I don&#8217;t know, 101 on preemption and the dormant Commerce Clause. That&#8217;s what we&#8217;re talking about here. For the legal nerds, that&#8217;s what we&#8217;re talking about. But it would be great for you to do that.</p><p>Kevin Frazier (18:13)</p><p>Yeah, I&#8217;d happily do that. And I welcome you all to fill in the blanks as well, because it is a really complex topic. And I think part of what adds to the complexity is that I think of our interpretation of the Commerce Clause as the worst game of telephone that&#8217;s ever been played. One justice whispered to the next, like, this is how you interpret it. And then, you know, the next justice and so on and so on. And so our Commerce Clause jurisprudence is so muddied and so murky that folks just aren&#8217;t sure how to actually interpret it. Under the Commerce Clause, Congress has the ability to regulate interstate commerce. And that has been interpreted in myriad ways since the founding. We&#8217;ve seen, for example, some formalistic tests saying, okay, what qualifies as commerce? And there are whole debates about how to define commerce and what actually qualifies as a commercial activity.</p><p>We&#8217;ve also had debates about what it means to regulate commerce among the several states. You can find whole law review articles, and I wish I was kidding, just analyzing what the word among means and why among is different from between. And you can lose your mind in that debate. Then there&#8217;s the really tricky issue of what happens if we afford Congress the ability to regulate interstate commerce.</p><p>What does that leave for states with respect to interstate commerce? And there are kind of two main ways of thinking about this. One way of interpreting the Commerce Clause is to say that Congress alone has the authority to regulate interstate commerce. So if something qualifies as interstate commerce, that is exclusively the domain of Congress. Now, another interpretation would say that so long as Congress has not affirmatively acted to regulate in some way, states may regulate interstate commerce so long as they are not violating any other constitutional principle. This has also been reinforced over time in various Supreme Court opinions that have allowed Congress to basically bless state intervention into interstate commerce. So basically saying, look, we may not want to affirmatively act on a nationwide scale on some matter of interstate commerce, but we&#8217;re going to grant, essentially extend, our authority to a state to do so. And so over time, this blend of whether the Commerce Clause power is exclusive to Congress or concurrent has made for a very tricky question. Now, the dormant Commerce Clause is the understanding that even if Congress has not acted, there may be judicial authority to strike down state laws that would interfere with that realm of what we think should be left to Congress alone. 
So for example, we have seen laws early on in the founding passed by state legislatures that courts regarded as regulating interstate commerce, and courts struck down those laws because they interfered with a domain that they interpreted as being left to Congress itself.</p><p>Now, traditionally, we think of a couple key categories of what sort of laws may violate the dormant Commerce Clause. The first is all about protectionism, right? We very much don&#8217;t want to see states exclusively favor in-state interests over out-of-state interests. If we see that sort of protectionist legislation, the Supreme Court has been very clear in saying that&#8217;s going to interfere with the sort of national market we were talking about earlier.</p><p>Jai Ramaswamy (21:57)</p><p>And that would be a clear case, I assume, of, like, I don&#8217;t know, Kentucky saying, we&#8217;re just not gonna allow out-of-state pork in Kentucky. Like that&#8217;s just not gonna be allowed.</p><p>Kevin Frazier (22:11)</p><p>So it depends, right? If we see that in-state and out-of-state producers are placed on uneven ground, or are treated differently, that&#8217;s really where I&#8217;d say we start to see the difference that raises constitutional flags under the dormant Commerce Clause.</p><p>Jai Ramaswamy (22:27)</p><p>In other words, it doesn&#8217;t have to be explicit discrimination against out-of-state interests. It can also be discrimination in effect.</p><p>Kevin Frazier (22:33)</p><p>So if it is facial discrimination, that&#8217;s almost certainly going to be struck down under the dormant Commerce Clause. For those facially neutral laws that just tend to have the effect of favoring in-state interests over out-of-state interests, then we run into a very tricky question of whether the burden imposed on interstate commerce is tolerable with respect to the local gains we&#8217;re seeing as a result of that burden.</p><p>And we can dive more into that inquiry in a second. So just to pause for a moment: we&#8217;ve got our Commerce Clause; we&#8217;ve got our theories of, okay, is this exclusive or is this concurrent? We&#8217;ve got our dormant Commerce Clause, where we&#8217;re concerned about favoring in-state interests over out-of-state interests and therefore disrupting interstate commerce and the national economy.</p><p>And then we have this murky kind of third category that&#8217;s hanging out there, which is extraterritoriality. And this is where we see one state explicitly try to regulate commerce that occurs entirely in another jurisdiction. And this has also been declared unconstitutional. Some people would cabin that under the dormant Commerce Clause theory.</p><p>Other folks would say it exists both under the dormant Commerce Clause and as an amalgamation of the Full Faith and Credit Clause and the Due Process Clause. You can even say the Guarantee Clause, right? It&#8217;s just protected by a whole hodgepodge of things. So across all of those domains, we are left with a very muddied picture that unfortunately the court has, if anything, made even less clear over time. And we are in a period of significant debate about when and how states can regulate with respect to interstate commerce.</p><p>Jai Ramaswamy (24:26)</p><p>That&#8217;s interesting. Matt, I know you and I have talked a lot about these different prongs. 
When we see AI legislation, where do you think it falls in this hodgepodge of different theories?</p><p>Matt Perault (24:40)</p><p>So in our piece, we focused on Pike balancing, which is the excessive burden prong. So that&#8217;s the idea that if the out-of-state costs substantially outweigh the local and state benefits, then a law is unconstitutional. I think it&#8217;s interesting that Kevin, who&#8217;s written extensively about this issue, has focused on the extraterritoriality prong, which, as you said, is either part of the dormant Commerce Clause analysis or separate. So I&#8217;m curious, Kevin, how do you think about that working in practice? Because, like, economies are connected now; it&#8217;s likely that anything that a state is going to do is going to affect out-of-state commerce in some way. How do you think about the extraterritoriality prong being actionable in practice?</p><p>Kevin Frazier (25:16)</p><p>Yeah. So for me, I focus most of the extraterritoriality analysis less on the economic questions of being able to parse out exactly when a state is wholly regulating commerce in another jurisdiction. Because as you pointed out, Matt, if we just look at how data is stored and transferred and used across basically every state today, the argument could be made that any law that deals with data is extraterritorial, because at some point that data is going to live on a server that&#8217;s in another jurisdiction, and state A is regulating how that data is processed or stored or managed in state B. Now we can see that across a lot of questions, whether it&#8217;s pollution, for example, and some of the environmental laws, or whether it is workplace specifications that are often really hard to cabin to one jurisdiction.</p><p>Extraterritoriality can get really murky if you focus mainly on the economic question of trying to segment out whether something is in one jurisdiction or in another. For me, extraterritoriality is most strongly based in the Guarantee Clause and in the Due Process Clause and the Full Faith and Credit Clause. But I want to focus mainly on the Guarantee Clause. Now, for folks who are not steeped in constitutional history and not as nerdy as the three of us are, the Guarantee Clause basically says that the U.S. government, and it&#8217;s really important to specify this, the government, it doesn&#8217;t say the judiciary, it doesn&#8217;t say Congress, it doesn&#8217;t say the executive, which is very bizarre, right? Anyone who&#8217;s a constitutional scholar or anyone who&#8217;s just answered a multiple choice question on the U.S. Constitution generally thinks of us cabining each power to each specific branch.</p><p>But the Guarantee Clause says the government will ensure to every resident a republican form of government. Now again, to go back to how the founders thought about the government and how they thought about the proper relationship between an individual and their government, they weren&#8217;t big on kind of virtual representation of, don&#8217;t worry, colonists, we&#8217;ll represent your interests here in the UK. Trust us, you know. Taxes, I know the Stamp Act sounds really bad, but it&#8217;s in your best interest. We&#8217;ll represent you over here. Just trust us. We fundamentally broke up with that concept. 
And if you do not appreciate the transition from our status as a colony to the Articles of Confederation to the Constitution as being so fundamentally grounded in republican governance, then I think you may have missed the first class of AP U.S. history.</p><p>And maybe you need to go back and take that one, because the core of our document is to say that we, the people, can hold the people who have power over us accountable. And yet, Jai, you alone among us three get to vote on what California is doing. Meanwhile, Matt and I are just like, all right, if Gavin Newsom wakes up one day and wants to be really aggressive on AI policy, great, we&#8217;re going to have to live with those consequences. And if he happens to veto a couple of bills, great, we didn&#8217;t have any say on that either. And now we&#8217;re seeing something really kind of radical, and we may get there or we may not, but I would encourage everyone to check it out: the California AI Ballot Initiative, where Californians themselves may soon opt to pass pretty expansive and onerous AI regulations that will again impact me and Matt. And we have no authority, no ability to go into that state and try to be a part of that process. So that to me is the greatest focus for extraterritoriality and the sort of strongest argument. That becomes of more concern the greater the ramifications are going to be for the technology itself. And so that&#8217;s where I think state laws that impact the fundamental technology itself are depriving people around the country of a voice in the direction of what can be, and what I think is going to be, an incredibly transformative technology.</p><p>Matt Perault (29:44)</p><p>I do think it&#8217;s really interesting how the text of the legislation interplays with the technology. So several of the California bills haven&#8217;t had any jurisdictional limitation. So if you said, how do we know that California lawmakers are aiming for a national standard? There&#8217;s nothing in the text that says limited to development in California or residents in California or companies headquartered in California. Nothing related to deployment in California or specific effect on California users. The text is just wide open and, at least in its terms, is not limited to anything related to California. That&#8217;s on the text of the legislation side. And then the nature of the technology, and this has been a thing that I&#8217;ve been learning from our technical teams, is moving in a direction of more and more remixed models. So a developer is building off of open source software that might be developed in another state. They&#8217;re taking little bits of that. They&#8217;re combining it with other models that might be developed in one place and deployed in another. A deployer might pick up a developer&#8217;s model built elsewhere. And so there are all these cross-border dynamics around AI construction, the development process itself, and then the deployment process as well. And those two things, the legislative text and the nature of how the technology is developed, seem to combine in a way that almost, like in bright lights, invites a discussion of this doctrine.</p><p>Kevin Frazier (31:01)</p><p>Well, I just have to reference one other point, because I&#8217;m sure there are some listeners who are saying right now, all right, they&#8217;re fascinating, they&#8217;re good looking, they&#8217;re having a really interesting conversation, whatever, that&#8217;s great. 
I still have no clarity around what extraterritoriality actually means, what the Commerce Clause means, what the dormant Commerce Clause means. What the heck? This is clearly not justiciable. There&#8217;s no way that a court can sit down and actually try to parse these out.</p><p>And I&#8217;m gonna push back on that by going again into full history mode. There&#8217;s a forgotten thing that a lot of people don&#8217;t pay attention to about James Madison. James Madison proposed giving Congress an absolute veto over state laws. This was an actual proposal to say that any state law that gets passed has to then go through Congress, which would determine not whether it&#8217;s constitutional, but just whether it aligns with the national interest. This was a real thing that James Madison, one of the founders we all talk about, actually proposed. The pushback, and this is one of the most hilarious things in the AI debate, was Jefferson retorting, hey, bro, this seems a little extreme. States don&#8217;t pass laws that implicate the national interest. This just doesn&#8217;t happen. He guessed that only about one out of every 100 bills passed out of state legislatures would actually deal with the national government. So they said, look, this isn&#8217;t gonna be a huge issue. We&#8217;re going to leave it to the courts to strike down those sorts of laws. And so when we hear justices say, this doesn&#8217;t seem like the sort of issue the Supreme Court should answer, this seems like something that should be left to the political branches:</p><p>Go back and read your Madison, go back and read your Jefferson. The courts were explicitly designed to fill this function and can&#8217;t punt on these issues.</p><p>Jai Ramaswamy (33:04)</p><p>Yeah, that&#8217;s a super interesting observation. And I think it raises this question of how you think of the appropriate scope of state and federal legislation. To your point, Kevin, we&#8217;ve now been talking in generalities. We have the history. I believe we&#8217;ve convinced all of our listeners about this structure and that it&#8217;s the right thing to do. But now the question becomes, you know, brass tacks.</p><p>It&#8217;s great that you guys are talking about all this history, but at the end of the day, we&#8217;ve got a powerful technology. It&#8217;s got uses, it&#8217;s got misuses. It has great opportunities and great risks. And so it will be regulated in some way, shape or form. And so how do you think about that real distinction between what the federal government has sort of exclusive authority and power over and what the state governments have a purview over and are legitimately regulating?</p><p>Kevin Frazier (34:03)</p><p>&#8230;The distinction I&#8217;ll add is a focus on alternative mechanisms. I&#8217;ll get there in a second. So first, I think all three of us agree on the distinguishing factor: are you governing the deployment of the technology? Right? Going back to our car analogy, are you trying to regulate the engine or are you instead regulating use?</p><p>You know, how does a driver get their license? What speed can you go on certain roads? Even designing the roads: all left to the state. And so if states want to regulate that end-user deployment, how are you going to use AI and in what context? What sort of transparency requirements do you want to provide to your customer? For example, what sort of training do you want to provide to doctors, to educators, to lawyers? 
None of those things involve a blatant and obvious attempt to regulate what is a national question. And we can very much see that there are also clear limitations on the likelihood of that law exceeding the jurisdiction of that state. Now, I get that there are some use cases where it&#8217;s harder to map on. As Matt was noting, there&#8217;s, you know, mixing going on now; there are open source models being poured into other open source models; there&#8217;s fine tuning; there are all of these things that blur that picture. Which is why I think it&#8217;s also important that we start to hold legislators accountable for exploring alternative mechanisms that are not grounded in burdening interstate commerce. </p><p>So I just want to focus explicitly on the mental health issues. I&#8217;ve been outspoken that I experienced mental health issues as a child. I think that AI companions and AI therapists, if properly trained, have an incredible role to play in reaching kids who otherwise would not receive care. And that&#8217;s important to point out. We have a therapist shortage. It&#8217;s not like if we woke up tomorrow and just said, we want all kids to have therapists, they could suddenly find their therapist. There&#8217;s a huge shortage. So why aren&#8217;t state legislators making it easier for more students, for more children, to access human therapists? That, to me, is just an obvious less burdensome alternative to trying to tinker with a technology that&#8217;s still developing. And so that&#8217;s the two-part framework that I would encourage state legislators to think through. First, use versus development, or deployment versus development. And second, what is the actual problem we&#8217;re trying to solve? Because I don&#8217;t think enough state legislators are honest in saying, okay, if the problem is children experiencing mental health issues, then I could list out for you like 50 other things you could do besides banning AI companions that would be more efficacious for actually helping children with those mental health issues.</p><p>Matt Perault (37:08)</p><p>That delineation between use and development is something that we&#8217;ve long focused on, for a variety of different reasons. One is the benefits, as you&#8217;re describing, of regulating harmful use: if you really are concerned about harmful use, you should target it directly. And second is the implications of regulating development for not just innovation, but really for the part of the innovation pipeline that we&#8217;re most focused on, which is startups, Little Tech companies. Because if you&#8217;re regulating development, those costs tend to be disproportionately borne by startups. They have smaller legal teams or non-existent legal teams, smaller policy teams or non-existent policy teams. And so the administrative and regulatory burdens end up making it harder for them to compete with larger platforms. I think the funny thing about that policy framework, and then what we&#8217;re focused on today, is that the state patchwork is also really hard for Little Tech, right? So when we&#8217;re talking about state versus federal rules, we&#8217;re not just talking about sort of an abstract constitutional principle from our standpoint. 
The reason that this is something we care so much about is that a startup without a legal team, trying to figure out how do I offer one tool in California and a different tool in New York and a different tool in Florida, is going to be really burdened, and it&#8217;s going to be harder for them to compete with large tech companies.</p><p>Jai Ramaswamy (38:18)</p><p>Yeah. And the other thing, you know, we&#8217;re sort of mentioning, but I think it&#8217;s worth just being explicit about it: the thing we&#8217;re most worried about at the state level is simply model-level regulation. And the reason for that is, I think, pretty simple, which is that at the core, these models, and when I say these models, I&#8217;m talking about the newest version of generative AI, the LLMs that everybody&#8217;s focused on now, but other models as well, are really just math and statistics at the end of the day, right? It&#8217;s like taking a bunch of data and using statistical methods, using vector math, to slice and dice the data into different categories and then make sense of that data. That&#8217;s really what these AI models at core are doing. And to do that, we call it training: you just train on a bunch of the data, the models are developed, and then they have a bunch of data that passes through them that continually makes those models better. And I think our big concern is that maybe there&#8217;s a world in which even a big company could train for 50 different standards, right? So I train to Kentucky&#8217;s standards and then I train to California&#8217;s standards. But really that isn&#8217;t feasible. The training takes months and months. It takes, as we are seeing, enormous costs in terms of computing, in terms of money. And so it&#8217;s not feasible to have models trained to different standards in different states. And so, de facto, it ends up being a national standard when a state declares that they&#8217;re going to regulate at the model level and effectively regulate how these models are trained. And that&#8217;s our biggest concern: there isn&#8217;t a good way to do state-level model regulation.</p><p>That feels like the kind of thing that is so inherently federal in nature. And yes, it will definitely harm startups, because even if somebody with adequate resources could do this, which I actually am skeptical of, a startup certainly can&#8217;t do it. And so you&#8217;re destroying any hope of competition in this realm, of creating a national market where small and big players can compete on a level playing field.</p><p>Kevin Frazier (40:42)</p><p>And I&#8217;ll note that Representative Lieu, a Democrat from California, made that exact point. Everyone can go find it on C-SPAN. He directly asked a witness, hey, tell me, do you think a lab can comply with even two different state training requirements? And the witness kind of shrugged and looked the other way. But clearly, if Representative Lieu, a very AI-savvy person, is aware of this, then this is not necessarily a matter of politics, or it shouldn&#8217;t be, to your point, Jai. This is a technical question. And I&#8217;ll also raise that, outside of just the sheer economic questions, this to me is truly a matter of national consequence when we think about pushing the frontier of AI. This isn&#8217;t just for fun in terms of I can&#8217;t wait to see if the US beats China on the next benchmark. 
These are questions of profound significance to voters right now. If you go and ask a voter, what is the most important thing you&#8217;re looking for in 2026? It&#8217;s affordability. And what is the thing that could drive down the cost of healthcare? AI. What is the thing that can drive down the cost of education? AI. What&#8217;s the thing that can start to improve our transportation systems, our energy systems? I can keep going, but we only have so much time. AI really can be the biggest driver of a lot of human flourishing across so many domains. So for any state to interfere with our actual progression along that effort to lead on the AI frontier is fundamentally at odds with the sort of national unity that the founders aspired to.</p><p>Matt Perault (42:19)</p><p>Kevin, you were in DC a couple of months ago testifying at a hearing called &#8220;AI at a Crossroads: A Nationwide Strategy or Californication?&#8221; I&#8217;m curious what you heard from lawmakers in DC about how they&#8217;re thinking about this issue and how the state dynamics are affecting it.</p><p>Kevin Frazier (42:35)</p><p>Yeah, so the committee was really engaged in a way that I think the rest of the panelists didn&#8217;t necessarily anticipate, myself included. Lawmakers really were hungry for strong information, reliable information, on this very question of what is the proper role of the states, what&#8217;s the proper role of the federal government, and how do we map that onto the technology itself. So one of the biggest takeaways for me was that so much of the AI policy discourse right now is grounded in vibes: people heard something at some point, at some time, from some person, and they&#8217;ve anchored on that position. So for example, the ranking member of that committee was very worried that the moratorium, and anyone loosely affiliated with being supportive of the moratorium, wanted to eliminate the ability of states to continue to enforce the common law and to allow courts to adjudicate things like torts, negligence, just basic liability frameworks, which was never on the table. But to hear those sorts of talking points emerge months later during a committee meeting on AI was indicative of the fact that a lot of this discourse is just being swayed by whatever podcast you listen to, whoever was in your office last, and perhaps how you felt about the technology even way back in 2023. </p><p>Now, I&#8217;ll also say that I was impressed by the rigor of the questions that committee members were asking. To me, it really showed that they were hungry for information. And that&#8217;s suggestive of the fact that we need more even-keeled, thorough, objective analysis of AI and of the relevant laws so that lawmakers can make informed decisions. Because I think that the failure to appreciate, for example, what it means to interfere with development, and the ramifications that that can have on national interests, just hasn&#8217;t been shared thoroughly enough with a lot of lawmakers who may otherwise be skeptical of reserving that power to the federal government. Now, one of the biggest takeaways, of course, from that conversation was that even among folks who may be somewhat willing to consider leaving this issue up to Congress and the national government, there is a feeling that there has to be an affirmative policy response to deal with this AI question. 
So one of the biggest pushbacks on the moratorium was that it was banning state AI legislation entirely, which again, as we discussed, wasn&#8217;t the case, and replacing it with nothing.</p><p>And there is a real sense, ask any behavioral economist, that we all suffer from action bias. We just want to see something get done. Something that is happening, even if it&#8217;s bad, is often perceived as being better than doing nothing, just because we feel like we have to respond in some fashion. And lawmakers are hungry for what that should be. And I would love to talk with you all about that ad nauseam for a future podcast down the road, but minimally, I think that there&#8217;s so much we can be doing on how to increase AI adoption. I would love to see a 21st century version of the Rural Electrification Administration, which went county by county teaching people how to use electricity. Why don&#8217;t we have an office of AI adoption? Right? Let&#8217;s go send our startups, our researchers into communities and help people learn how to use AI.</p><p>We also need to be attentive to the very real concerns about economic displacement. If people feel like they&#8217;re going to live in a future in which they don&#8217;t have a job, they don&#8217;t know how they&#8217;re going to provide for their family, it&#8217;s hard to say, yeah, I&#8217;m really pro-AI. I can&#8217;t wait to lose my job. No one has that button. That bumper sticker will not be seen on any cars. So the federal government has a chance to rethink: how can we make benefits, for example, more flexible? How can we help people transition to new opportunities, to start those new AI companies? Engine.org is doing fantastic work in this space, recommending, for example, how we can rework the definition of investor just to make sure that more people can access startup funds, and things like that. It was a great experience. I would, of course, welcome the call again, although I think after the fact, they realized they invited the wrong Kevin Frazier. If you Google me, you&#8217;ll find the more popular Kevin Frazier, from Entertainment Tonight; he would have been a way better witness, way more exciting.</p><p>Matt Perault (47:17)</p><p>Kevin, you mentioned ramifications that are on the minds of lawmakers in DC. Jai, that&#8217;s something that we&#8217;ve talked about a lot at our firm: the ramifications, not just in terms of California versus Florida or New York versus Texas, but ramifications beyond the borders of the United States. So what is important about getting this issue right for what happens outside the United States?</p><p>Jai Ramaswamy (47:38)</p><p>Yeah, I&#8217;ll turn to that in a second, but something Kevin said really resonated with me, and then I&#8217;ll go to the question you posed, Matt. The thing that resonated with me was that, and I don&#8217;t know if the specific policy is the right one in terms of rural electrification, but the notion that this is really game-changing for any society is, I think, underappreciated. And the way I think about it is, if you think about what AI really is, it&#8217;s commodified intelligence. It&#8217;s commodified and commoditized inference. And if you look around the world at the lack of opportunity that people have, the issue is not that there&#8217;s too much inference and intelligence. It&#8217;s the lack of access to the tools that allow you to develop that intelligence. 
And so I don&#8217;t see a world in which commoditized inference and intelligence is a bad thing. Like, I struggle to understand what that world would be. It&#8217;s not that it won&#8217;t be misused, et cetera, but at core, I think it&#8217;s really getting people to understand that this is what it is. It&#8217;s a tool for them. It&#8217;s sort of Steve Jobs&#8217; bicycle for the mind on steroids, right? That&#8217;s what this is. And that&#8217;s really the potential that&#8217;s here. And that morphs into the broader conversation, Matt, that you were talking about, which is this:</p><p>This technology really is, and I think this is the way we talk to lawmakers about it, a shift. It&#8217;s more than an industry, it&#8217;s more than a new technology, it&#8217;s a change in the way that the control layer of computing operates. It operates through natural language, so we&#8217;re talking to computers now rather than coding. I mean, there is coding obviously as well, but with the rise of LLMs, it&#8217;s very clear that the way that most people will now interact with computers is by using their own native language. And the way that computers will respond to us will not be sort of deterministic, robotic, where you put in A and you always get B. Rather, like human beings responding to us, the responses are contextual. They depend on the way that the questions are asked. And putting in input A doesn&#8217;t always get you output B. It gets you output B, C, D. And so on many levels, you&#8217;re talking about a new control layer of computing that makes us interact with computers in a profoundly different way than we&#8217;ve been used to. The implications of this are huge, because these systems do have sort of values of openness or closedness, for lack of a better word, built into them, in the same way that the internet could have developed very differently as an open system versus the closed system that many in other countries advocated for and continue to advocate. And I think that&#8217;s the geopolitical thing that the US and other countries will struggle with: the countries that have companies producing the AI that others adopt, the AI that performs that control function of computing, will have a massive benefit, outsized relative to others. It&#8217;s why China is investing a ton of money and effort in computing, in software, and also in hardware, to try to be that control layer. And so I don&#8217;t think we should diminish at all what the stakes are here, which is: what type of computing do we have in the future? A computing platform based on technology that&#8217;s informed by the values of an authoritarian government is going to be very, very different from one informed by, in a sense, the software of open societies. And I say societies because it&#8217;s the US, but it&#8217;s also others within sort of that framework that are developing these technologies. 
And I think we have to be prepared to have the conversation that at this point, you know, one of the most popular models being developed on is Qwen, the Alibaba model.</p><p>You know, we&#8217;re in a neck-and-neck race, whether we like it or not, between technology that&#8217;s being produced by companies in the US and its allies versus companies that are coming out of China, and the implications, I think, from a geopolitical standpoint are huge. In the same way the internet impacted cultural exchange, soft power as well as hard power, that same dynamic is going to play out on steroids here, I think. And so that&#8217;s the underlying thing, Matt, that you were asking about, which I think is really at the forefront of everybody&#8217;s minds in DC as they think about this. And the way that this debate plays into that is that it&#8217;s going to be very hard for the US to compete if the companies that are founded here have to comply with 50 state laws for model development. It&#8217;s just hard to imagine that that ecosystem is going to produce something that can compete with the likes of the massive efforts that are going on in China. So it has to be a national effort. There isn&#8217;t really a choice, I think, once you bring geopolitics into it.</p><p>Matt Perault (52:48)</p><p>Jai, thanks for bringing it full circle. Kevin, Jai, thanks. This was fun.</p><p>Kevin Frazier (52:53)</p><p>Thanks for having us.</p><p>Jai Ramaswamy (52:55)</p><p>Thanks, Kevin. It&#8217;s been a pleasure.</p><div><hr></div><p><em>This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein.  Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at  a16z.com/disclosures. You&#8217;re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately.</em> </p>]]></content:encoded></item></channel></rss>