Yesterday, TechFreedom was joined by a broad coalition of free speech, civil liberties, and tech policy organizations to express concerns about the “No Section 230 Immunity for AI Act” (S. 1993) in advance of Senator Josh Hawley’s planned “hotline” of the bill. As the coalition letter explains, S. 1993 oversimplifies a complex policy issue and poses a threat to free expression, content moderation, and technological innovation.

Joining TechFreedom in the letter were the American Civil Liberties Union, Americans for Prosperity, the Association of Research Libraries, the Center for Democracy and Technology, Chamber of Progress, the Competitive Enterprise Institute, the Copia Institute, the Electronic Frontier Foundation, the Foundation for Individual Rights and Expression, the R Street Institute, the Software & Information Industry Association, and the Taxpayers Protection Alliance.

S. 1993 would gut Section 230, allowing lawsuits and criminal prosecutions against social media platforms, AI companies, or any other online service if the “underlying conduct involves the use or provision” of generative AI.

“S. 1993 draws a line with a backhoe rather than a scalpel, forsaking thoughtful and nuanced policymaking for the illusory comfort of ‘doing something,’” said Ari Cohn, Free Speech Counsel at TechFreedom. “Generative AI is now pervasive, in ways that this bill’s authors do not appear to comprehend. Predictive text, search query suggestions, and even content recommendation and moderation tools increasingly deploy what this bill would define as generative AI. Any platform that doesn’t want to face liability for everything on its service would have to revert entirely to pre-AI technology, reversing America’s position as a technological innovator.”

Notably, S. 1993 would also allow claims arising under state law, which Section 230 purposefully preempts. “Section 230’s purpose was to create a uniform body of liability law for the Internet,” Cohn continued. “By undoing this whenever generative AI is remotely related to content, S. 1993 gives state legislatures—not always the most friendly to free speech—a weapon to aim at content they don’t like, whether it be LGBTQ content, abortion-related content, or ‘hate speech.’ Would-be censors across the country are already trying to restrict free expression online, and this bill will only embolden them.”

“Generative AI is a tool like any other, and no safeguards can possibly prevent its misuse by malicious actors,” Cohn concluded. “But exposing the tool makers or providers to liability for intentional bad acts that others set out to do is a cure worse than the disease. It actually incentivizes the use of generative AI for nefarious ends, as companies are a more attractive target for legal action than those who misuse their services. And it makes it more difficult for online services to find and remove offending content—because using advanced AI content moderation tools only further exposes them to liability. This bill is not a serious, targeted attempt at fixing any particular problem with careful legislation; it is a scattergun that will ultimately do incalculable damage to the online speech ecosystem.”

###

Find this letter and release on our website, and share it on Twitter, Bluesky, Mastodon, Facebook, and LinkedIn. We can be reached for comment at media@techfreedom.org. Read our related work, including:

About TechFreedom:

TechFreedom is a nonprofit, nonpartisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.
