TechFreedom President Berin Szoka testified at today’s hearing on the White House Privacy Report. The following statement can be attributed to Szoka:

As valuable as “privacy” can be, its value is not absolute. Privacy advocates and policymakers alike all too often overstate the value of privacy and understate its costs.  We should approach privacy like any form of consumer protection: Weigh harms against benefits and empower consumers to make their own choices wherever possible.  

The White House Report gets the most important question right: government lacks the “flexibility, speed, and decentralization necessary to address Internet policy challenges.”  However laudable the Report’s principles, what matters is pragmatically transposing them into concrete rules that recognize real-world trade-offs with innovation, convenience, and other competing values.

Before legislating, Congress should ask whether the FTC can adequately address substantial harms through its unfairness and deception authority.  The FTC must walk an exceedingly fine line on unfairness.  If used too seldom and defined too narrowly, unfairness will fail to protect consumers from real harm, suggesting legislation is needed when it is not.  But if defined too broadly, unfairness will again make the FTC the “National Nanny,” as the Washington Post dubbed the agency in the 1970s.  This time, the FTC will be micromanaging not children’s advertising and funeral parlors, but the very way we communicate with each other.  At its worst, the unfairness doctrine would likely have banned the camera, that great invader of privacy, back in 1890.  But at its best, unfairness could supplement self-regulation—if the FTC becomes more rigorous in its analysis.

Read Szoka’s full oral remarks below the fold and written testimony here.

Chairman Bono-Mack, Ranking Member Butterfield, and Members of the Subcommittee, thank you for the opportunity to testify at this important hearing.

I commend you, Madam Chairman, for emphasizing the word “balance” in the title of today’s hearing.  As valuable as “privacy” can be, its value is not absolute. Privacy advocates and policymakers alike all too often overstate the value of privacy and understate its costs.  We should approach privacy like any form of consumer protection: Weigh harms against benefits and empower consumers to make their own choices wherever possible.

The White House Report gets the most important question right: government lacks the “flexibility, speed, and decentralization necessary to address Internet policy challenges.”  However laudable the Report’s principles, what matters is pragmatically transposing them into concrete rules that recognize real-world trade-offs with innovation, convenience, and other competing values.  Only a multistakeholder self-regulatory process can do this effectively.

To avoid “failure by design,” that process must be voluntary, as the White House promises.  Consumer advocates can play a vital role in offering constructive, specific contributions in public fora.  They can of course use public pressure to promote compromise within industry.  But as with the DAA process, the difficult work of forging consensus must take place in private, and it must be industry that ultimately votes.

There is more to be praised in White House and FTC Reports.  But the White House’s overarching approach is both, well, unfair and deceptive.  First, while the Report reminds us of the Fourth Amendment’s essential protection against unlawful intrusion, it neglects to mention that the Fourth Amendment protects us against such intrusion by government.  Using the term “Consumer Bill of Rights” just two months after a unanimous Supreme Court denounced excessive government surveillance in its Jones decision is a constitutional sleight-of-hand.  The real Bill of Rights remains in peril.

Second, while the Fair Information Practice Principles play a useful role in conceptualizing consumer privacy protection, they are not enough.  As law professor Fred Cate argues, the FIPPs have ultimately failed to serve consumers. Data protection laws should instead regulate information flows only when necessary to protect individuals from harm, while maximizing the flow of data.  This is precisely why it is so important that both the White House and FTC reports support proper de-identification of data as a way of balancing reasonable risks with the benefits of data-driven research and serendipitous innovation. To quote Cate: “Data protection is not an end in itself, but rather a tool for enhancing individual and societal welfare.”

Indeed, as the FTC declared in its 1980 Policy Statement on Unfairness, “Unjustified consumer injury is the primary focus of the FTC Act.”  The question policymakers should be asking is: What harms should the law remedy?   Where FTC authority has proven inadequate, Congress has passed laws to remedy clear harms, such as the Fair Credit Reporting Act.

But before legislating, Congress should ask whether the FTC can adequately address substantial harms through its unfairness and deception authority.  The FTC must walk an exceedingly fine line on unfairness.  If used too seldom and defined too narrowly, unfairness will fail to protect consumers from real harm, suggesting legislation is needed when it is not.  But if defined too broadly, unfairness will again make the FTC the “National Nanny,” as the Washington Post dubbed the agency in the 1970s.  This time, the FTC will be micromanaging not children’s advertising and funeral parlors, but the very tools by which we communicate with each other.  At its worst, the unfairness doctrine would likely have banned the camera, that great invader of privacy, back in 1890.  But at its best, unfairness could supplement self-regulation—if the FTC becomes more rigorous in its analysis.

Even as the FTC has lamented the inadequacy of its current authority, it has staked out a bold position on the scope of harm covered by unfairness.  While unfairness certainly can cover non-monetary harms, like reputational harms, the unfairness doctrine requires actual harm, not merely the risk of harm.

While the unfairness doctrine should never coerce compliance with self-regulation, it can validly punish laggards that persist in a practice disavowed by most of an industry.  For example, standard industry practice helped the FTC establish that it was unfair for the Frostwire Android app to share every file on users’ mobile phones without disclosure, when users did not expect this setting—and could not change it easily.  Unfairness is intended precisely to discourage such traps—but not to punish innovative new paradigms for sharing information.  If the FTC dictates “fair” product design based on static user expectations, innovations that change our thinking about privacy—like the camera in 1890—will suffer.

The problem with the unfairness doctrine is that the FTC has never had to defend its application to privacy in court, nor been forced to prove that harm is substantial and outweighs benefits.  Given the strong reputational incentives for companies to settle out of court, only Congress can call the agency to account, just as Congress once required the agency to produce its Unfairness and Deception statements. Congress should require the agency to explain how it has applied both doctrines to privacy.

Finally, Congress must ensure the FTC has the technical capacity for effective enforcement that balances harms with benefits.  The right measure is not how many lawsuits the agency brings but whether it effectively deters the occasional abuses of data while enabling and even encouraging the overwhelming benefits created by the steady flow of information.

Again, thank you for inviting me here to testify today.
