As has become customary with just about every new product announcement by Google these days, the company’s introduction on Tuesday of its new “Search, plus Your World” (SPYW) program, which aims to incorporate a user’s Google+ content into her organic search results, has met with cries of antitrust foul play. All the usual blustering and speculation in the latest Google antitrust debate has obscured what should be the two key threshold questions: (1) Did Google violate the antitrust laws by not including data from Facebook, Twitter and other social networks in its new SPYW program alongside Google+ content? And (2) how might antitrust law constrain Google if it conditions participation in the program in the future?

The answer to the first is a clear no. The second is more complicated—but also purely speculative at this point, especially because it’s not even clear that Facebook and Twitter really want to be included, or what their price and conditions for doing so would be. So, in short, it’s hard to see what there is to argue about yet.

Let’s consider both questions in turn.

Should Google Have Included Other Services Prior to SPYW’s Launch?

Google says it’s happy to add non-Google content to SPYW but, as Google Fellow Amit Singhal told Danny Sullivan, a leading search engine journalist:

Facebook and Twitter and other services, basically, their terms of service don’t allow us to crawl them deeply and store things. Google+ is the only [network] that provides such a persistent service,… Of course, going forward, if others were willing to change, we’d look at designing things to see how it would work.

In a follow-up story, Sullivan quotes his interview with Google executive chairman Eric Schmidt about how this would work:

“To start with, we would have a conversation with them,” Schmidt said, about settling any differences.

I replied that with the Google+ suggestions now hitting Google, there was no need to have any discussions or formal deals. Google’s regular crawling, allowed by both Twitter and Facebook, was a form of “automated conversation” giving Google material it could use.

“Anything we do with companies like that, it’s always better to have a conversation,” Schmidt said.

MG Siegler calls this “doublespeak” and seems to think Google violated the antitrust laws by not making SPYW more inclusive right out of the gate. He insists Google didn’t need permission to include public data in SPYW:

Both Twitter and Facebook have data that is available to the public. It’s data that Google crawls. It’s data that Google even has some social context for thanks to older Google Profile features, as Sullivan points out.

It’s not all the data inside the walls of Twitter and Facebook — hence the need for firehose deals. But the data Google can get is more than enough for many of the high level features of Search+ — like the “People and Places” box, for example.

It’s certainly true that if you search Google for “site:twitter.com” or “site:facebook.com,” you’ll get billions of search results from publicly available Facebook and Twitter pages, and that Google already has some friend-connection data via social accounts you might have linked to your Google profile (check out this dashboard), as Sullivan notes. But the public data isn’t available in real time, and the private, social-connection data is limited and available only for users who link their accounts. For Google to access real-time results and full social-connection data would require… you guessed it… permission from Twitter (or Facebook)! As it happens, Twitter and Google had a deal for a “data firehose” that let Google display tweets in real time under “personalized search,” the public social-information program on which SPYW builds. But Twitter ended the deal last May, for reasons neither company has explained.
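To make the public/private distinction concrete, here is a minimal sketch of the kind of site-restricted query Siegler has in mind, issued through Google’s Custom Search JSON API. The API key and engine ID are hypothetical placeholders, and this is only an illustration, not anything Google or Siegler describes: the point is that such results come from Google’s ordinary crawl of public pages, not from any real-time firehose or private, friends-only feed.

```python
# Illustrative sketch only: a site-restricted query ("site:twitter.com ...")
# against Google's Custom Search JSON API. API_KEY and ENGINE_ID are
# hypothetical placeholders you would have to supply yourself.
import requests

API_KEY = "YOUR_API_KEY"        # hypothetical placeholder
ENGINE_ID = "YOUR_ENGINE_ID"    # hypothetical placeholder

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={
        "key": API_KEY,
        "cx": ENGINE_ID,
        "q": "site:twitter.com cats",  # restricts results to public twitter.com pages
        "num": 5,
    },
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    # Each hit is an ordinary crawled web page: public, but not real-time,
    # and with no visibility into friends-only content.
    print(item["title"], "->", item["link"])
```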

At best, therefore, Google might have included public, relatively stale social information from Twitter and Facebook in SPYW—content that is, in any case, already included in basic search results and remains available there. The real question, however, isn’t whether Google could have included this data in SPYW, but whether it needed to. If Google’s engineers and executives decided that incorporating this limited data would produce an inconsistent user experience or otherwise diminish its uniquely new social search experience, it’s hard to fault the company for deciding to exclude it. Moreover, as an antitrust matter, both the economics and the law of anticompetitive product design are uncertain. In general, as with the vertical integration claims against Google, product design that hurts rivals can (it should be self-evident) be quite beneficial for consumers. Here, it’s difficult to see how the exclusion of non-Google+ social media from SPYW could raise rivals’ costs, result in anticompetitive foreclosure, dampen rivals’ incentives to innovate, or otherwise produce the anticompetitive effects required to establish an antitrust claim.

Further, it’s easy to see why Google’s lawyers would prefer express permission from competitors before using their content in this way. After all, Google was denounced last year for “scraping” a different type of social content, user reviews, most notably by Yelp’s CEO at the contentious Senate antitrust hearing in September. Perhaps one could distinguish that situation from this one, but it’s not obvious where to draw the line between content Google has a duty to include without “making excuses” about needing permission and content Google has a duty not to include without express permission. Indeed, this seems like a case of “damned if you do, damned if you don’t.” It seems only natural for Google to be gun-shy about “scraping” other services’ public content for use in its latest search innovation without at least first conducting, as Eric Schmidt puts it, a “conversation.”

And as we noted, integrating non-public content would require not just permission but active coordination about implementation. SPYW displays Google+ content only to users who are logged into their Google+ account. Similarly, to display content shared with a user’s friends (but not the world) on Facebook, or protected tweets, Google would need a feed of that private data and a way of logging the user into his or her account on those sites.
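As a rough sketch of what “logging the user in” entails, the following shows a generic OAuth 2.0 authorization-code flow. The endpoints, client credentials, and scope name are entirely hypothetical and do not describe any actual Facebook or Twitter API; the point is simply that each user would have to grant access, and the network would have to expose a token-protected feed, before friends-only content could appear in a search engine’s results.

```python
# Minimal, hypothetical sketch of per-user authorization for private content.
# All URLs, credentials, and scope names below are made up for illustration.
import urllib.parse
import requests

CLIENT_ID = "search-integration"                    # hypothetical
CLIENT_SECRET = "YOUR_CLIENT_SECRET"                # hypothetical
REDIRECT_URI = "https://search.example/callback"    # hypothetical

# Step 1: send the user to the social network to approve access to private posts.
auth_url = "https://socialnetwork.example/oauth/authorize?" + urllib.parse.urlencode({
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "response_type": "code",
    "scope": "read_private_posts",                  # hypothetical scope
})
print("Send the user to:", auth_url)

# Step 2: after approval, exchange the returned code for an access token.
def exchange_code(code: str) -> str:
    resp = requests.post("https://socialnetwork.example/oauth/token", data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
        "grant_type": "authorization_code",
        "code": code,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# Step 3: only with that per-user token could a search engine fetch the private
# feed, e.g. requests.get("https://socialnetwork.example/feed",
#                         headers={"Authorization": "Bearer " + token})
```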

Now, if Twitter truly wants Google to feature tweets in Google’s personalized search results, why did Twitter end its agreement with Google last year? Google responded to Twitter’s criticism of the SPYW launch last night with a short Google+ statement:

We are a bit surprised by Twitter’s comments about Search plus Your World, because they chose not to renew their agreement with us last summer, and since then we have observed their rel=nofollow instructions [by removing Twitter content from “personalized search” results].
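For readers unfamiliar with the mechanism: rel=nofollow is a per-link hint telling crawlers not to follow (or give credit to) a link. As a rough illustration only, and not a description of Google’s actual crawler, a crawler honoring the hint might filter links like this:

```python
# Illustrative sketch: collect outbound links from a page, skipping any
# marked rel="nofollow". This is generic parsing logic, not Google's crawler.
from html.parser import HTMLParser

class NofollowAwareParser(HTMLParser):
    """Collects hrefs from <a> tags unless they carry rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.followable = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "").lower().split()
        if "nofollow" not in rel:
            self.followable.append(attrs.get("href"))

page = ('<a href="https://example.com/a">follow me</a> '
        '<a rel="nofollow" href="https://example.com/b">skip me</a>')
parser = NofollowAwareParser()
parser.feed(page)
print(parser.followable)   # ['https://example.com/a']
```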

Perhaps Twitter simply got a better deal: Microsoft may have paid Twitter $30 million last year for a similar arrangement allowing Bing users to receive Twitter results. If Twitter really is playing hardball, then Google is not guilty of discriminating against Facebook and Twitter in favor of its own social platform; rather, it’s simply unwilling to pony up the cash that Facebook and Twitter are demanding—and there’s nothing illegal about that.

Indeed, the issue may go beyond a simple pricing dispute. If you were CEO of Twitter or Facebook, would you really think it was a net win if your users could use Google search as an interface for your site? After all, these social networking sites are in an intense war for eyeballs: the more time users spend on Google, the more ads Google can sell, to the detriment of Facebook or Twitter. Facebook probably sees itself increasingly in direct competition with Google as a tool for finding information. Its social network has vastly more users than Google+ (800 million vs. 62 million, with an even larger lead in active users) and, in most respects, more social functionality. The one area where Facebook lags is search functionality. Would Facebook really want to let Google become the tool for searching social networks—one social search engine “to rule them all”? Or would Facebook prefer to continue developing “social search” in partnership with Bing? On Bing, it can control how its content appears—and Facebook sees Microsoft as a partner, not a rival (at least until it can build its own search functionality inside the web’s hottest property).

Adding to this dynamic, and perhaps ultimately fueling some of the fire against SPYW, is the fact that many Google+ users seem to be multi-homing, using both Facebook and Google+ (and other social networks) at the same time, and even using aggregators and syncing tools (Start Google+, for example) to unify their social media streams and share content among them. Before SPYW, this might have seemed like a boon to Facebook, stanching any potential defection from its network to Google+ by keeping users engaged with both, with a kind of “Facebook primacy” ensuring continued eyeball time on its site. But Facebook might see SPYW as a threat to this primacy—in effect, reversing users’ primary “home” as they import their Facebook data into SPYW via their Google+ accounts (through tools like Start Google+). If SPYW can effectively facilitate indirect Google searching of private Facebook content, the fears we suggest above may be realized, and more users may forgo visiting Facebook.com (and seeing its advertisers), accessing much of their Facebook content elsewhere—where Facebook cannot monetize their attention.

Amidst all the antitrust hand-wringing over SPYW and Google’s decision to “go it alone” for now, it’s worth noting that Facebook has remained silent. Even Twitter has said little more than a tweet’s worth about the issue. It’s simply not clear that Google’s rivals would even want to participate in SPYW. This could still be bad for consumers, but in that case, the source of the harm, if any, wouldn’t be Google. If this all sounds speculative, it is—and that’s precisely the point. No one really knows. So, again, what’s to argue about on Day 3 of the new social search paradigm?

The Debate to Come: Conditioning Access to SPYW

While Twitter and Facebook may well prefer that Google not index their content on SPYW—at least, not unless Google is willing to pay up—suppose the social networking firms took Google up on its offer to have a “conversation” about greater cooperation. Google hasn’t made clear on what terms it would include content from other social media platforms. So it’s at least conceivable that, when pressed to make good on its lofty-but-vague offer to include other platforms, Google might insist on unacceptable terms. In principle, there are essentially three possibilities here:

  1. Antitrust law requires nothing because there are pro-consumer benefits for Google to make SPYW exclusive and no clear harm to competition (as distinct from harm to competitors) for doing so, as our colleague Josh Wright argues.
  2. Antitrust law requires Google to grant competitors access to SPYW on commercially reasonable terms.
  3. Antitrust law requires Google to grant such access on terms dictated by its competitors, even if unreasonable to Google.

Door #3 is a legal non-starter. In Aspen Skiing v. Aspen Highlands (1985), the Supreme Court came the closest it has ever come to endorsing the “essential facilities” doctrine, under which a firm controlling an essential facility has a duty to offer access to its competitors. But in Verizon Communications v. Trinko (2004), the Court made clear that even Aspen Skiing is “at or near the outer boundary of § 2 liability.” Part of the basis for the decision in Aspen Skiing was the existence of a prior, profitable relationship between the owner of the “essential facility” in question and the competitor seeking access: the Court seems to have been swayed by the view that the access in question was otherwise profitable for the company denying it, although that assumption is neither warranted nor sufficient (circumstances change, of course, and merely “profitable” is not the same thing as “best available use of a resource”). Trinko limited the reach of the doctrine to the extraordinary circumstances of Aspen Skiing, and thus, as the Court affirmed in Pacific Bell v. LinkLine (2008), there appears to be no antitrust duty for a firm to offer access to a competitor on commercially unreasonable terms (as Geoff Manne discusses at greater length in his chapter on search bias in TechFreedom’s free ebook, The Next Digital Decade).

So Google either has no duty to deal at all, or a duty to deal only on reasonable terms. But what would a competitor have to show to establish such a duty? And how would “reasonableness” be defined?

First, this issue parallels claims made more generally about Google’s supposed “search bias.” As Josh Wright has said about those claims, “[p]roperly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.” Supposing (for the moment) that the second point could be established, it’s hard to see how Facebook or Twitter could really show that being excluded from SPYW—while still having their available content show up as it always has in Google’s “organic” search results—would actually “render their efforts to compete for distribution uneconomical,” which, as Josh explains, antitrust law would require them to show. Google+ is a tiny service compared to Google or Facebook. And even Google itself, for all the awe and loathing it inspires, lags in the critical metric of user engagement, keeping the average user on site for only a quarter as much time as Facebook does.

Moreover, by these same measures, it’s clear that Facebook and Twitter don’t need access to Google search results at all, much less its relatively trivial SPYW results, in order to find, and be found by, users; it’s difficult to identify any even vaguely relevant market from which their absence from SPYW results could foreclose them. Does SPYW potentially help Google+, to Facebook’s detriment? Yes. Just as Facebook’s deal with Microsoft hurts Google. But this is called competition. The world would be a desolate place if antitrust laws effectively prohibited firms from making decisions that helped themselves at their competitors’ expense.

After all, no one seems to be suggesting that Microsoft should be forced to include Google+ results in Bing—and rightly so. Microsoft’s exclusive partnership with Facebook is an important example of how a market leader in one area (Facebook in social) can help a market laggard in another (Microsoft in search) compete more effectively with a common rival (Google). In other words, banning exclusive deals can actually make it more difficult to unseat an incumbent (like Google), especially where the technologies involved are constantly evolving, as here.

Antitrust meddling in such arrangements, particularly in high-risk, dynamic markets where large up-front investments are frequently required (and lost), risks deterring innovation and reducing the very dynamism from which consumers reap such incredible rewards. “Reasonable” is a dangerously slippery concept in such markets, and a recipe for costly errors by the courts asked to define it. We suspect that disputes arising out of these sorts of deals will largely boil down to skirmishes over pricing, financing and marketing—the essential dilemma of new media services whose business models are as much the object of innovation as their technologies. Turning these, by little more than innuendo, into nefarious anticompetitive schemes is extremely—and unnecessarily—risky.

The Fragmentation Claim

For some, the problem isn’t so much about antitrust as about the fragmentation of the web. John Battelle claims that tensions between search engines and social networking platforms threaten our culture, and that we need a “public commons” for social data to set things right. In the abstract (and the real world is never “in the abstract”), the claim has appeal: the Web users of today might, in some sense, be better off if Facebook, Google, Twitter, and Bing could all just “get along” and share social content among themselves seamlessly, so that users could find content from any major social media platform on Google (or Bing, for that matter). Instead of facing a choice among major search engines that each offer only a fragment of potentially relevant social networking content, users in this Social Commons Utopia would choose search engines based on the quality of the algorithm or other features—not on which social networks the search engine indexes. Meanwhile, users active in multiple social networks would enjoy a one-stop shop for searching content shared by their friends.

That all sounds well and good, but it misses the forest for the trees. The question isn’t simply about consumer welfare in a static snapshot of today’s marketplace. From that myopic perspective, commoditizing search might make a lot of sense. But of course, what’s ultimately important is that search keeps evolving to become more social and more… who knows what else the future will bring? Achieving a static “utopia” might end up killing the contentious rivalry that fuels the evolution of the market, in ways that dramatically outweigh any short-term gains for consumers. Incorporating a realistic appreciation of that dynamic into a court-ordered “reasonable” deal is a Sisyphean task—yet another reason why courts are (and should be) likely to err on the side of extreme caution about meddling here.

To be sure, a “public commons” for social data is an interesting idea, and it may well make sense someday. But how would such a regime, if implemented tomorrow, affect social networking firms looking to grow and innovate? Unlike Microsoft and Google, both among the world’s most profitable companies, Facebook and Twitter are still trying to figure out how to effectively monetize their massive user platforms. Inking creative deals to sell access to social data to search engines, or to other entities such as advertisers, is a logical way to generate the income that social networking companies need. This sort of arrangement may offend diehard believers in information commons, but it should seem perfectly natural to those who recognize that, to serve consumers, web companies need to innovate not just in new technologies but in strategies for monetizing those technologies.

Conclusion

Do we really want to live in a world where companies like Google have to wait to launch innovative new features until they’ve worked out how to ensure that their competitors get to participate—on their competitors’ terms? This kind of “open access” requirement would be catastrophic for innovation. Even forcing companies to clearly define their terms of access on day one would be tantamount to requiring them to file a rate tariff, as if they were an old regulated utility—a recipe for stagnation, not innovation. Condemning Google to antitrust purgatory for failing to accept competitors’ offers to participate, when those offers don’t even exist, is nothing if not premature.
