This week, TechFreedom filed comments with the Office of Science and Technology Policy (OSTP) in response to its request for information (RFI) seeking to identify existing federal regulatory structures that unnecessarily hinder the development, deployment, and adoption of artificial intelligence (AI) technologies in the United States. The comments seek to clarify the FCC’s and FTC’s roles in regulating AI and advocate for a federal regulatory sandbox for the technology.

“The FCC lacks the statutory authority to preempt state AI laws,” said James E. Dunstan, TechFreedom’s Senior Counsel. “The AI Action Plan envisions an outsized role for the Federal Communications Commission (FCC) in the future regulatory regime for AI, directing the Commission to ‘evaluate whether state AI regulations interfere with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.’ But efforts to preempt state AI laws that do not directly affect communications networks are unlikely to be upheld by the courts. Preemption of counterproductive state AI laws is necessary, but it must come from Congress, not the FCC.”

“The FTC should provide clearer guidance on unfair or deceptive AI acts or practices,” said Andy Jung, TechFreedom’s Associate Counsel. “The FTC frequently refers to its body of settlements regarding new technologies as its ‘common law,’ but there is an essential difference between this approach and real common law: the involvement of courts in working through questions of doctrine. Because the FTC has settled nearly every tech-related case it has brought, including every AI-related case, it’s hard to know with any certainty how its consumer protection authority applies to AI. The Commission should issue guidance, perhaps a Policy Statement on Unfair or Deceptive AI Acts or Practices, to clarify the fine line between legal and illegal AI use cases. This would implement the AI Action Plan’s directive to avoid ‘theories of liability that unduly burden AI innovation.’”

“The federal government should create a regulatory sandbox for AI,” Jung concluded. “Regulatory sandboxes allow AI developers and users leeway to experiment with technology while retaining safeguards, such as consumer protection laws, to protect the public. Sandboxes encourage experimentation and innovation by reducing legal uncertainty while allowing regulators to gather real-world data and mitigate risk.”

###

Find these comments on our website, and share them on X (formerly Twitter) and Bluesky. We can be reached for comment at media@techfreedom.org. Read our related work.

About TechFreedom:

TechFreedom is a nonprofit, nonpartisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.
