
AI and the First Amendment

Part Three of an AI Policy Legal Primer

Welcome to part three of our AI Policy Legal Primer. In this series, we ask a panel of leading appellate lawyers to help define the constitutional boundaries shaping how federal and state governments can govern AI. If you missed earlier installments, check out Preemption, Explained and The Dormant Commerce Clause, Explained.


As lawmakers consider requiring companies to make disclosures about their AI models—such as risk reports, impact assessments, or content warnings—questions arise about whether those mandates could run afoul of the First Amendment.

The First Amendment protects against both restrictions on speech and laws that compel speech. Courts generally permit mandates to disclose purely factual, noncontroversial information, but are more likely to find a disclosure mandate unconstitutional if it compels expression on contested topics or forces companies to adopt messages they would not otherwise share. Such mandates may also be unconstitutional if they impose undue burdens, such as compliance costs that make it harder for smaller companies to compete with larger ones.

In part three of our AI Policy Legal Primer, leading appellate lawyers Allon Kedem, Paul Mezzina, and William Jay join Matt Perault, head of AI policy at a16z, to explore how these principles apply to AI. They discuss recent disclosure laws, the line between constitutional and unconstitutional compelled speech, and emerging questions about whether model developers’ design choices could themselves count as expressive acts protected under the First Amendment.

Understanding these constitutional limits will be essential for lawmakers seeking to craft durable, lawful frameworks for AI governance that enable small developers to compete on a level playing field with larger companies.

This transcript has been edited lightly for readability.

Matt Perault (00:00)

We’ve talked a lot about dormant Commerce Clause analysis. In the time we have left, I’d love to briefly get your take on another potential concern that some of these state laws might raise. Many of them include disclosure-related provisions in some form, and that raises possible First Amendment considerations.

Paul, starting with you, can you walk us through that concept? Why would the First Amendment apply to a situation when the government’s mandating certain disclosures?

Paul Mezzina (00:29)

Sure. So the First Amendment obviously protects free speech [and it prevents] laws that restrict speech, but it also prevents laws that compel speech. So laws that compel people to engage in speech they would not otherwise want to engage in always trigger some level of First Amendment scrutiny. How much scrutiny depends on what kind of speech is being compelled and in what context.

There’s a lot of nuance to this, but at a high level, the way courts think about it is: if the compelled speech is purely factual and non-controversial, it gets a much lower level of scrutiny and it’s generally going to survive under the First Amendment. Think about all kinds of factual labeling requirements, nutritional labels on food, things like that. But if the speech being compelled by the government goes beyond the purely factual and non-controversial, it gets a higher level of scrutiny, and those kinds of compelled disclosures often end up being struck down. There have been a number of cases recently, almost all of them out of California, at least the ones I’m thinking of, where the state has required companies in different industries to prepare different kinds of reports.

There was a law, for example, that required social media companies to prepare reports discussing risks to children from their services and their data practices. There was a law requiring drug companies to prepare reports that explained their pricing decisions. And there was also a law, actually partially enjoined just today by the Ninth Circuit, that required companies to prepare reports about climate-related risks from their operations. In all of these contexts, parties have challenged the laws and said: you, the state, by requiring these reports, are forcing us to engage in speech that we don’t want to engage in. And it’s not just purely factual and non-controversial; it’s actually requiring us to express opinions on some pretty controversial subjects. Some of those laws have been struck down. Some have survived in the Ninth Circuit. Some are still being challenged, or likely to be challenged, in the Supreme Court. So this is very much a developing area of the law.

Matt Perault (02:39)

And could we look a little at how the doctrine might map onto some of the things that we’re seeing in AI?

So we’re seeing, I think, a few different types of things.

One would be requiring a company to conduct impact assessments, or those kinds of assessments of its own potential security risks.

Another would be making disclosures to the government. That could be things like reporting certain kinds of prompts that you receive or certain kinds of content that you produce. It could also mean requiring a company to say something to the government about its safety practices.

The third bucket would be warning labels: things that say “this content is generated using AI,” or things that say, “warning, you’ve been using the service for X amount of time,” where a provider needs to disclose to a user that it’s been using the service for a specific period of time.

So Willy, how would a court evaluate those different kinds of transparency requirements under the First Amendment?

Willy Jay (03:36)

Yeah, I’ll actually add just one more, which would be saying that anything children can access, whether it’s images or text or whatever, the AI has to generate with the possibility in mind that minors are reading it.

[Some] of the examples that you gave, like the simplest examples, are more like a government warning label, like the requirement that you tell people you’ve been using the AI for this amount of time. It’s just like the Surgeon General’s warning on cigarettes: those warnings have been challenged over the years as compelled speech, and various graphic pictures of diseased lungs have been struck down, but factual statements about smoking being hazardous to your health are still on the cigarette pack.

So I think that type of government warning or disclosure is likely to draw the weakest kind of First Amendment challenge: things that basically don’t require the AI company or the model itself to mouth things that they might disagree with. It’s clearly a message from the government, and it’s clearly not hijacking the message or content generated by the AI itself.

Matt Perault (05:00)

And Allon, maybe for you, what about the kinds of things the government might require around disclosing information about safety practices?

Allon Kedem (05:08)

So those were some of the laws out of California that Paul was talking about earlier, where the intention was to force social media companies and others in that area to articulate what their own policies were on harm and restricting speech: the types of threats to children or others, certain types of hate speech. And one of the reasons those laws failed First Amendment scrutiny is that just trying to come up with a clear definition of what counts as harmful speech or hate speech is usually not a neutral endeavor. So the very fact of forcing a company to articulate its terms in the language the government identified was itself a change to their message.

And then you’ve got even more extreme cases, like the Supreme Court case out of Florida and Texas, where those states were passing laws that were trying to essentially alter what speech the social media companies would make available and push to their users. So if you used a particular type of algorithm that screened out speech that seemed like hate speech, that might run afoul of the Texas or Florida law. And what the Supreme Court held is that those companies had a First Amendment right, or at least, in that posture, could make an argument that they had a First Amendment right, to choose for themselves what sort of censorship to engage in, what speech to allow users to put on the platform, and what speech not to.

There was a really interesting concurrence by Justice Barrett, who said: in this instance, it seems fairly clear that the companies have made a very deliberate choice, which the First Amendment protects, not to allow certain forms of speech that they’ve decided are harmful. But you could have an instance in which artificial intelligence is used essentially to give the user whatever it is that it wants. And it’s not totally clear, she said, that in that instance there would be any sort of speech-based, content-based decision that the First Amendment would protect, because it would essentially be just an automatic consequence of an algorithm that was generated not for First Amendment reasons, but just for, essentially, consumer preference reasons.

Matt Perault (07:33)

Paul, how do you think about that last point that Allon made? Like what [does] editorial discretion look like in an AI context?

Paul Mezzina (07:40)

Yeah, I think it’s really interesting; Justice Barrett’s opinion raises some hard questions. [I think] she’s right to say that what the First Amendment ultimately protects is human expression and expressive choices, where people are trying to make decisions about the ideas they want to communicate. But where you locate that expression, and whether you can have expression in the form of an algorithm, is a really interesting question.

Justice Barrett sort of suggests, and this is an argument we’ve seen plaintiffs make in litigation over social media, that if a platform is using an algorithm to try to serve content that users want, that is somehow not expressive. And I wonder if they would apply that same thinking to the more rudimentary algorithms that companies have used for a long time.

Think about something like a radio station that plays the top 40, right? The producers at that radio station are not going out and choosing their favorite music to play to listeners. They’re using “top 40” as an algorithm. It’s not a very complicated algorithm, but it’s an algorithm that looks at what music is popular and tries to serve that music to listeners. Now, if the government came to a top 40 station and said, we’re going to require you to play a certain amount of classical music every hour, and the reason we’re allowed to do that is because we don’t think you’re engaged in expression at all, you’re just using your top 40 algorithm, I think courts would probably have a problem with that. They very well might say that the decision to play top 40 music is itself an expressive choice.

[In terms] of how that maps onto AI, courts, when they confront new technology, like to reason by analogy. They like to figure out: among the things I’m familiar with, that courts have dealt with before, what does this look and feel like? What is it similar to in relevant ways? And I think developers who want to claim protection for certain aspects of model development are going to want to draw analogies where they say: this is a conduit for my speech; the decisions I am making in model development are expressive choices where I’m trying to influence or control the speech that is output by this model.

And the argument on the other side is going to be: there’s no real expressive choice happening here; this is all basically the construction of a machine. So I think the challenge for developers is going to be articulating how expressive choices are being made as part of the development process.

Allon Kedem (10:19)

One of the sort of interesting features of certain large language models is that when you ask the people who develop these models how it is that the model came to think that writing a particular response in a particular way made sense, they will often say, look, it’s a black box. We don’t actually know what it is that the model used in order to decide that this word comes most appropriately after that word.

And so in some sense, you might say, well, you’re not really making a choice that the First Amendment should respect. On the other hand, they also do usually impose constraints: they won’t give you an answer that violates copyright law. They’re not going to tell you the lyrics of a song, or the first or last page of a book, just because you ask. And that’s a deliberate choice that’s made in the product.

Matt Perault (11:09)

This is a helpful set of tools for evaluating all the legislation we’re likely to see ahead at the federal and state level.

Allon, Paul, Willy, thank you so much.

Paul Mezzina (11:19)

It’s a pleasure, thank you.


