Yesterday, two of TechFreedom’s policy experts delivered remarks at the FTC’s May Open Commission Meeting. Their remarks are presented here, lightly edited for clarity.

Remarks of Berin Szóka, President of TechFreedom:

I’m Berin Szóka, President of TechFreedom. 

The Commission recently proposed to extend its new Trade Regulation Rule on Impersonation of Government and Businesses to include “parties who provide goods and services with . . . reason to know that those goods or services will be used in” unlawful impersonations. 

But essentially all AI developers have “reason to know” that generative AI can be used for illegal impersonations: criminal scams using generative AI are increasingly common and widely publicized. A constructive knowledge standard thus effectively means that every developer and distributor offering generative AI services to the public would be subject to the kind of “Know Your Customer” obligations borne by financial institutions. A higher standard is necessary. 

That standard must be consistent with how courts have interpreted Section 230(c)(1). In Accusearch, the court found that a company had made a “material contribution” to the development of unlawful content because it “knowingly sought to transform virtually unknown information into a publicly available commodity.” No court has denied Section 230(c)(1) protection under any lower knowledge standard. 

Any “means and instrumentalities” rule should be modeled on the willful blindness standard of the Telemarketing Sales Rule: a developer must either know, or consciously avoid knowing, not merely that someone might misuse its tool, but that the particular party to which it provides an AI tool will use it to violate the rule. 

The Commission should clearly state the practical bottom line of this standard by adding a proviso, as a16z proposes: “Nothing in this section shall be interpreted to require a provider of goods or services to conduct prior due diligence on any or all parties that may use the goods or services.” Such disclaimers against general monitoring obligations are commonly used to ensure that intermediary liability laws do not cast too long a shadow over legitimate operators.

Setting the right knowledge standard is key. Only the largest, best-financed companies could bear the burden of the proposed standard; it would crush all other developers. 

Indeed, a constructive knowledge standard could drive the development of AI technologies out of the United States entirely, which would only aggravate impersonation fraud. 

Remarks of Andy Jung, Associate Counsel at TechFreedom:

I’m Andy Jung, Associate Counsel at TechFreedom. 

There is an ongoing debate over the extent to which Section 230(c)(1) applies to AI. Currently, the debate focuses on text-based AI tools like ChatGPT. AI technology, however, has already advanced far beyond chatbots. Law review articles simply cannot keep up with robots. 

The proposed impersonation rule would end the Section 230 debate before it has truly begun by extending liability to AI platforms used to generate infringing content. The Commission, however, has not sought public comment on the Section 230 question. 

Ultimately, whether courts extend Section 230(c)(1) immunity to AI will likely depend on how the specific product or tool at issue functions. For large language models like ChatGPT, the question will turn on whether an AI’s output is an algorithmic augmentation of third-party information, or whether the tool was responsible, in whole or in part, for creating or developing the generated output. The former will likely receive immunity. 

By imposing broad liability on AI developers, the proposed rule presumes that Section 230(c)(1) does not apply to generative AI at all. That presumption has no basis in the record, because the Commission has not sought public comment on the question. The entire “Means and Instrumentalities” provision of the proposed rule glosses over complex questions of fact regarding how particular AI systems work and the nature of their outputs. 

During the recent Magnuson-Moss hearing on the Negative Option Rule, the presiding officer found that the Commission had overlooked several issues of material fact in that rulemaking. Here, the Commission is headed down the same path on AI and Section 230.
