Yesterday, four of TechFreedom’s policy experts spoke at the FTC’s May Open Commission Meeting. Their oral remarks are presented below, lightly edited for clarity.
Remarks of Berin Szóka, President of TechFreedom:
In 2011, President Barack Obama declared: “Our regulatory system must allow for public participation and an open exchange of ideas.” These are two different things, and the FTC isn’t really doing either.
In 2015, the FTC issued its first policy statement on unfair methods of competition. It recently rescinded and replaced that statement. It never sought public comment, as it has done for its merger guidelines. But it should have. Former Democratic FTC Chair Bob Pitofsky said so in 2008, as did Republican Commissioner Maureen Ohlhausen in 2015.
Open-mic sessions are no substitute for written comments. But comments aren’t enough. The FTC needs to hear a back-and-forth. That’s why the Federal Communications Commission has required reply comments in all rulemakings for 75 years. The FTC itself allowed for the filing of rebuttals before the Magnuson-Moss Act required them. TechFreedom recently requested a rebuttal round in the noncompete rulemaking, the most significant in FTC history. The Commission has ignored us.
Workshops could also facilitate an open exchange of ideas, but only if the Commission gives participants enough time to explore hard issues. The series of 14 workshops organized by my colleague Bilal Sayyed in 2018 and 2019 offers a good model. Most were multiday.
Most critical will be how the FTC conducts the hearings required by the Magnuson-Moss Act in consumer protection rulemakings. The Commission recently released the agenda for the first Mag-Moss hearing held in a new rulemaking in decades. Thirteen speakers get just five minutes each. Claiming that there were no disputed issues of material fact, the Commission authorized no cross-examination. So the hearing officer will be merely a timekeeper. That’s not a hearing; it’s just another open-mic session.
Despite broad consensus on stopping impersonation fraud, hard questions remain on how to craft a rule that won’t affect comedians, actors, or even kids’ Halloween costumes. If the Commission won’t allow a real exchange of ideas even on such an uncontroversial rulemaking, why should anyone expect it to do so in more complex rulemakings, such as commercial surveillance?
The Commission must do more to meet President Obama’s standard for open and participatory government.
Remarks of Andy Jung, Legal Fellow at TechFreedom:
Firms like Alphabet, OpenAI, and Stability AI provide AI tools to the public at no charge. These tools help users accomplish a wide variety of tasks, including writing code, conducting research, and generating images of French bulldogs painted by Rembrandt.
Lawmakers clamor for new laws governing AI. This week, several senators proposed a new regulatory agency. But the notion that “AI is unregulated” is a “myth.” The FTC already oversees AI, as Chair Khan and Commissioner Bedoya have noted.
In April, Chair Khan and officials from the DOJ, CFPB, and EEOC released a joint statement asserting that their “agencies’ enforcement authorities apply to” AI. Specifically, the FTC’s unfair and deceptive trade practice laws apply.
The Commission may initiate enforcement actions against AI companies for deceptive claims and “unfair” acts that substantially injure consumers. Additionally, the Commission may promulgate rules prohibiting specific unfair or deceptive AI practices. Either way, the Commission would have to show that the practice “is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”
AI tools provide a variety of benefits to consumers and competitors in the marketplace. The Commission must weigh these benefits as it continues to probe the depth and breadth of its authority over AI. In that vein, I encourage the Commission to consider establishing a Federal Advisory Committee to inform and advise the agency as it sets its regulatory agenda for this new and innovative technology.
Remarks of Santana Boulton, Legal Fellow at TechFreedom:
The overabundance of data in the modern world, warned Swiss scientist Conrad Gessner, is overwhelming; it is “confusing and harmful” to the mind. Of course, he was talking about the printing press, not artificial intelligence. The Commission has been asked to stop the release of AI tools. But applying the precautionary principle to AI development would have real costs. And Section 5(n) of the FTC Act requires the FTC to weigh “countervailing benefits to consumers or to competition.”
Competition in AI is increasingly global. Cracking down on AI could help America’s global rivals and harm our national security. American companies are already using automatic threat assessment and developing tools to detect malware and data breaches. AI tools can help protect American weapons systems from cyberattack.
Consumers, too, stand to benefit from AI innovation. Consider AI’s medical benefits. New drug development is extremely expensive and time-consuming. If even one treatment is discovered with the help of AI tools, those benefits must be accounted for.
More generally, research suggests that AI tools could help less skilled workers the most, increasing competition and rebuilding the middle class. This Commission cannot afford to discount this new technology’s benefits to low-wage workers.
Like any new technology, but even more so, AI will create a wide variety of both costs and benefits. The FTC can’t explore these tradeoffs through two-minute blocks of prepared remarks. It needs to hold workshops that allow experts from multiple fields and with diverse perspectives to engage in dialogue with one another. And in rulemakings, the FTC will benefit from the open exchange of ideas that is only possible if the Commission allows for reply comments.
Remarks of Bilal Sayyed, Senior Competition Counsel at TechFreedom:
In November 2018, the Office of Policy Planning, under the direction of then-Chairman Simons, and working closely with the Bureaus of Competition, Consumer Protection, and Economics, held a two-day hearing on Algorithms, Artificial Intelligence, and Predictive Analytics.
Those two days of presentations and discussion remain the best public discussion of how AI and related tools will impact the Commission’s mission. The Commission should build on that record.
In 2019, after reviewing and discussing that record with my then-colleagues in OPP, I did not believe we had sufficient expertise in or with these tools to advise the Commission with any depth of sophistication, either on how AI would or should affect the Commission’s law enforcement and policy agenda, or on how that agenda would affect the development of AI and related tools.
As then-Director, I was preparing a recommendation to the Commission that it establish a standing Federal Advisory Committee to inform and advise the Commission staff, the Chairman, and Commissioners on the likely impact of AI on the Commission’s law enforcement and policy agenda, and the impact of the Commission’s agenda on the development of AI.
The Commission’s rules implementing the requirements of the Federal Advisory Committee Act (FACA) require, among other things, that an advisory committee have broad participation, meet in public, and receive comment from the public. The advisory committee can also be charged with answering a series of questions and producing one or more final reports or recommendations to the Commission.
I ultimately made no recommendation to the Commission, but I believe this remains a good idea, and I encourage this Commission to give it serious consideration and to solicit from the public recommendations of persons to participate in such a committee on AI. It is not an alternative to hiring more technologists or other experts, but a complement and supplement that would have limited impact on the Commission’s budget.