Yesterday, TechFreedom was joined by several distinguished scholars of First Amendment, Internet, and technology law in a letter expressing serious concerns about the NO FAKES Act. The bill’s takedown requirements would make unilateral removal of content far too easy while leaving speakers no recourse besides costly litigation—and no recourse at all if their speech involves an unauthorized replica but is protected by the First Amendment.
“The bill’s notice-and-takedown system is prone to abuse,” said Santana Boulton, TechFreedom Legal Fellow. “This bill would force online services, such as image hosting sites or social media, to remove content after a single report, without adequate safeguards or provisions for restoring improperly removed content, even if the service later concludes that the content isn’t an unauthorized deepfake. By placing the burden of litigation to restore content on speakers, many of whom cannot bear that expense, the NO FAKES Act unacceptably chills free expression and incentivizes bad actors to target critical but legitimate content without risk.”
“The bill’s exceptions are too narrow and inconsistent with free speech law,” Boulton continued. “NO FAKES lays out exclusions under Section 2(C)(4), including a savings clause under 2(C)(4)(A)(ii)(II) and 2(C)(4)(A)(iii), but savings clauses rarely save laws that infringe on the First Amendment. Speakers who have content removed under this bill would likely assume that their content wasn’t protected speech at all (after all, if it were, the platform would be exempt from liability for hosting it) and avoid speaking in similar ways in the future. Those being regulated here are speakers, not scholars of First Amendment law, and individual speakers typically don’t have the resources to go to court and prove their speech falls under an exclusion.”
“Even well-intentioned laws can be weaponized by those who seek to suppress free speech,” Boulton concluded. “This bill can too easily be weaponized against legitimate, constitutionally protected content and leave creators with no recourse.”
###
Find this letter and release on our website, and share them on Twitter and Bluesky. We can be reached for comment at media@techfreedom.org. Read our related work, including:
- Reply comments to the FCC on its NPRM on the use of AI-generated content in political advertising (Oct. 11, 2024)
- Comments to the FCC on its NPRM on the use of AI-generated content in political advertising (Sep. 19, 2024)
- Our testimony before the U.S. Senate on AI and the future of our elections (Sep. 27, 2023)
- Comments to NIST on the draft guidelines and best practices for AI safety and security (Sep. 9, 2024)
- California’s AI Bill Threatens To Derail Open-Source Innovation, Reason (Aug. 13, 2024)
- Public-private partnerships key to AI growth in OC, Orange County Register (July 15, 2024)
- Our letter on the “Protect Elections from Deceptive AI Act” (May 14, 2024)
- Orange County’s untapped AI potential, Orange County Register (Apr. 16, 2024)
- ‘Unregulated AI’ is a myth, Orange County Register (Apr. 1, 2024)
- Startlingly New, City Journal (Mar. 7, 2024)
- How should we regulate generative AI in political ads?, UNC Center on Technology Policy (Feb. 5, 2024)
- A.I. Panic is Causing First Amendment Hallucinations … in Humans, Substack (Jan. 29, 2024)
- Letter on the “No Section 230 Immunity for AI Act” (Dec. 11, 2023)
- Generative AI and The Future of Speech Online, Center for Democracy & Technology (Oct. 5, 2023)
- AI and the Nature of Literary Creativity, The Bulwark (Sep. 27, 2023)
About TechFreedom: TechFreedom is a nonprofit, nonpartisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.