Yesterday, TechFreedom filed comments in response to the Federal Trade Commission’s (FTC) request for public comment on its proposed amendments to the Trade Regulation Rule on Impersonation of Government and Businesses. The rule would hold AI developers responsible for providing “means and instrumentalities” with “reason to know” that their services could be used in unlawful impersonations of ordinary individuals.
“Holding developers of AI systems liable for the impersonations created by users of their systems is barred by Section 230 unless courts find that the developers have materially contributed to the unlawfulness of such content,” said Andy Jung, Associate Counsel at TechFreedom. “There’s an ongoing debate among legal scholars over the extent to which Section 230(c)(1) applies to AI, one that currently focuses on text-based generative AI chatbots like ChatGPT. AI technology, however, has already advanced far beyond chatbots and is outpacing any legal consensus on the topic. Law review articles simply cannot keep up with robots. The FTC ignored this debate, failing even to ask whether Section 230 might apply. The answer likely depends on how each AI system works—whether it is an algorithmic augmentation of third-party information, like a search engine. That question requires further public comment or public hearings.”
“The FTC’s proposed rule would strangle AI development by holding AI systems responsible for any misuse they should have known about,” Jung concluded. “At most, the FTC should hold AI developers responsible only when they have actual knowledge of impersonation fraud, or consciously avoid such knowledge—just as required by the FTC’s telemarketing fraud rule. This standard would allow the FTC to punish truly bad actors while protecting developers of general-purpose AI tools whose usage is overwhelmingly lawful. The FTC should make clear that AI developers have no general monitoring obligation—because such liability would be impossible for all but the largest, best-financed companies to bear and crushing to all other developers. In essence, this could drive the development of AI technologies out of the United States entirely.”
###
Find these comments on our website, and share them on Twitter, Bluesky, LinkedIn, Facebook, and Mastodon. We can be reached for comment at media@techfreedom.org. Read our related work, including:
- Our public remarks on AI and Section 230 at the FTC’s March Open Commission Meeting (Mar. 21, 2024)
- Our letter to the Senate Judiciary Committee for the proposed “No Section 230 Immunity for AI Act” (Dec. 11, 2023)
- OA753: Gonzalez v. Google: The Case That (Didn’t) Break the Internet, Opening Arguments Podcast (Jun. 1, 2023)
- Our press release on the Twitter v. Taamneh and Gonzalez v. Google rulings (May 18, 2023)
- Section 230 Spring Summit, American Enterprise Institute (Apr. 19, 2023)
- Four Things to Watch in Gonzalez v. Google, FedSoc Blog (Mar. 17, 2023)
- Tech Policy Podcast #340: Making Sense of the SCOTUS Internet Speech Cases (Mar. 17, 2023)
- Don’t Repeal the Law That Created the Internet, Ripon Society (Feb. 23, 2023)
- Tech Policy Podcast #338: Gonzalez v. Google (Feb. 14, 2023)
- Our amicus brief in Gonzalez v. Google, U.S. Supreme Court (Jan. 18, 2023)
- Section 230 Heads to the Supreme Court, Reason (Nov. 4, 2022)
- Tech Policy Podcast #331: Section 230’s Long Path to SCOTUS (Oct. 31, 2022)
- Our letter on Section 230 and the American Innovation and Choice Online Act (June 27, 2022)
- What Is Section 230 and How Is It Different Than the First Amendment?, Foundation for Economic Education (May 27, 2022)
About TechFreedom:
TechFreedom is a nonprofit, nonpartisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.