Yesterday, TechFreedom filed comments on the draft guidelines and best practices for AI safety and security issued by the National Institute of Standards and Technology (NIST), in response to Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Also pursuant to the Executive Order, the National Telecommunications and Information Administration (NTIA) released a report on the risks, benefits, implications, and suitable policies for open-source AI models.
“NIST should adopt and apply the marginal risk and benefit analysis framework used by NTIA to assess the overall risk of open-source models,” advised Andy Jung, Associate Counsel at TechFreedom. “But NIST’s draft guidance ignores the benefits of open source altogether. The main provisions direct open-source developers to implement safeguards against misuse of their public models regardless of comparable risks posed by other digital technologies, like closed-source models, or the overall benefit of open source. And many of NIST’s safeguards would hinder or prevent the open-sourcing of AI models entirely.”
“NIST and NTIA are sending mixed messages to open-source developers,” concluded Jung. “By failing to apply marginal risk and benefit analysis to open-source AI development, NIST’s draft guidance targets open-source models with restrictions that are unduly stricter than those for alternative systems posing a similar balance of benefits and risks. At the least, NIST should reissue the current guidance with amendments clarifying which practices apply to open-source models versus closed ones, using the NTIA framework to explain the particular mitigations and safeguards recommended for open models.”
###
Find these comments and this release on our website, and share them on Twitter, Bluesky, Mastodon, Facebook, and LinkedIn. We can be reached for comment at media@techfreedom.org. Read our related work, including:
- California’s AI Bill Threatens To Derail Open-Source Innovation, Reason (Aug. 13, 2024)
- Public-private partnerships key to AI growth in OC, Orange County Register (July 15, 2024)
- Our letter on the “Protect Elections from Deceptive AI Act” (May 14, 2024)
- Orange County’s untapped AI potential, Orange County Register (Apr. 16, 2024)
- ‘Unregulated AI’ is a myth, Orange County Register (Apr. 1, 2024)
- Startlingly New, City Journal (Mar. 7, 2024)
- How should we regulate generative AI in political ads?, UNC Center on Technology Policy (Feb. 5, 2024)
- A.I. Panic is Causing First Amendment Hallucinations … in Humans, Substack (Jan. 29, 2024)
- Our letter on the “No Section 230 Immunity for AI Act” (Dec. 11, 2023)
- Generative AI and The Future of Speech Online, Center for Democracy & Technology (Oct. 5, 2023)
- Our testimony before the U.S. Senate on AI and the future of our elections (Sep. 27, 2023)
- AI and the Nature of Literary Creativity, The Bulwark (Sep. 27, 2023)
- The FTC’s AI Moment, Substack (Sep. 27, 2023)
About TechFreedom: TechFreedom is a nonprofit, nonpartisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.