Is it Google "magic" or just user data?
Does user data make a better search engine? Google and the Department of Justice are in stark disagreement over a question that could shape the outcome of the case.
Welcome to Big Tech on Trial. This free newsletter is brought to you by the BIG newsletter with daily coverage on the Google antitrust trial.
Day 41: The Department of Justice calls Professor Douglas Oard to testify about how user-side data makes a better search engine, which is key to the government’s case.
As we round the corner toward the close of testimony in the 10-week Google Search antitrust trial, a lingering dispute remains: What makes Google Search great - data or innovation? Put another way: To what extent does Google’s access to user-side data improve search result accuracy, inform the development of new search features, and help Google protect users from things like spam and deceptive ads? Google has already put forward one test that says user data has a negligible impact on search quality.
Today, we heard the Department of Justice’s rebuttal…
On Day 41, we heard testimony from Douglas Oard, a Professor at the Institute for Advanced Computer Studies at the University of Maryland. The purpose of Oard’s testimony was to respond to an earlier Google expert witness, Edward Fox, who Oard said erred by downplaying the importance of user interaction data to the quality of Google’s search engine. To the contrary, Oard testified that user-side data (which broadly refers to recorded user interactions) “trains” Google’s search algorithm, motivates Google’s development of new search features, and allows Google to manipulate and de-index certain types of search results.
Now I’m going to say something that might sound at odds with how this case has been talked about publicly over the past 10 weeks: whether Google has a superior search engine is not on trial in the case. To the contrary, the Department of Justice alleges that Google does in fact have a better search engine. However - and this is important for today’s summary - the parties disagree as to why Google’s search engine is superior. And, if Google is found liable, this will be a determining factor in how to restore competition in the market for general search.
Google contends that it has a superior search engine because of design and engineering decisions. It’s not really about data, they say. But the Department of Justice alleges that Google’s access to “user-side” or “user interaction” data - like clicks, scrolls, mouse hovers, and duration of site visits - is critical for improving Google’s search engine, and that its default search agreements have excluded rivals from accessing the very data needed to compete along relevant metrics.
This should sound familiar. Earlier in the trial, we heard testimony from former Google engineer Eric Lehman, where the question of what makes Google great was front and center: Is it data or innovation? And of course, we heard the aforementioned expert witness testimony of Virginia Tech Professor Edward Fox, who testified that user data explains only 2.9% of the quality gap between the Google and Bing search engines.
Professor Oard assembled a report in response to Professor Fox’s earlier study of the impact of user data on quality. (We won’t be able to see either of those documents, which is frustrating.) Professor Oard’s expert opinion was that Professor Fox failed to assess the effect of user data on several key components that improve Google’s search engine, including:
Indexing

Spelling corrections

Search features like images and video

Search advertising
Professor Oard walked us through how user data improves each of these components of Google’s search engine. Regarding indexing, Oard explained that when Google “crawls” the web, it isn’t trying to find every page on the web (which he suggested is virtually impossible); instead, it finds as many sites as it can and then relies on user-side data to index what users want to see. To evaluate whether any particular search result was valuable, Google looks at what people clicked on, where their cursors hovered, and how long they actually spent visiting a site before clicking away or returning to Google’s SERP. Links that do not align with Google’s or users’ interests are given a lower ranking or de-indexed.
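To make the idea concrete, here is a minimal sketch of click-and-dwell re-ranking in the spirit Oard described. Everything here - the signal names, the weights, the one-minute dwell cap - is invented for illustration, not Google's actual system:

```python
# Illustrative sketch only: combine click-through rate and dwell time into a
# relevance signal. All names, weights, and thresholds are hypothetical.

def engagement_score(clicks: int, impressions: int, avg_dwell_seconds: float) -> float:
    """Blend click-through rate with dwell time into one score."""
    if impressions == 0:
        return 0.0
    ctr = clicks / impressions
    # Long visits suggest the page satisfied the query; a quick bounce back
    # to the results page ("pogo-sticking") suggests it did not.
    dwell_bonus = min(avg_dwell_seconds / 60.0, 1.0)  # cap credit at one minute
    return 0.7 * ctr + 0.3 * dwell_bonus

results = [
    {"url": "a.example", "clicks": 80, "impressions": 100, "avg_dwell_seconds": 45.0},
    {"url": "b.example", "clicks": 90, "impressions": 100, "avg_dwell_seconds": 5.0},
    {"url": "c.example", "clicks": 2, "impressions": 100, "avg_dwell_seconds": 2.0},
]

# Rank pages by engagement; pages below some floor could be down-ranked or
# de-indexed, per Oard's description.
ranked = sorted(
    results,
    key=lambda r: engagement_score(r["clicks"], r["impressions"], r["avg_dwell_seconds"]),
    reverse=True,
)
```

Note how the heavily clicked but quickly abandoned page (`b.example`) loses ground to the page users actually stayed on - the "pogo-sticking" signal Oard alluded to.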
Oard testified that Google also relies on user-side data to perform spelling corrections (like when Google displays or suggests alternative queries to the actual query entered) and to generate appropriate image responses. Image searches are particularly dependent on user-side data, Oard explained, because of the lack of associated text to train Google’s search algorithm. To determine the relevance of image responses, Google relies on other context clues and user behavior, like how long a mouse hovers over an image, or whether specific images are clicked on.
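A toy version of the spelling-correction idea: learn "did you mean" suggestions by counting how users reformulate their own queries. This is purely illustrative - the data and approach are invented, not testimony about Google's method:

```python
from collections import Counter, defaultdict

# Hypothetical session logs: pairs of (original query, follow-up query)
# observed when a user immediately retypes a search.
reformulations = [
    ("recieve", "receive"),
    ("recieve", "receive"),
    ("recieve", "recipes"),
    ("seperate", "separate"),
]

followups: dict = defaultdict(Counter)
for original, corrected in reformulations:
    followups[original][corrected] += 1

def suggest(query: str):
    """Return the reformulation users most often made after this query, if any."""
    counts = followups.get(query)
    if not counts:
        return None
    best, _ = counts.most_common(1)[0]
    return best
```

With enough sessions, the majority reformulation ("receive") dominates the occasional unrelated follow-up ("recipes") - which is the sense in which more user data yields better corrections.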
One of my favorite notes of the day: Google can tell, on at least some mobile devices, if a user zooms in on an image or changes what part of the page they’re looking at. That’s technically user-side data that Google can use to train its search engine and deliver different results to users.
Oard also described the user-side data benefits to search advertising. One of the primary risks associated with search ads, Oard explained, is that an advertiser’s interests might not match those of a user. In order to estimate the interest of the user, Google can look at the quality of the page that the advertiser would take the user to. They might look at whether other users have gone to that page or how long users spent on those “landing” pages. (We should expect a lot more details about how all of this works and how Google manipulates search ads in the upcoming Google Ad Tech trial.)
Oard repeatedly criticized Fox’s model for using only a “frozen system” analysis to assess the importance of user-side data, instead of analyzing the effects of user data in motion. Oard illustrated this point with an example: a search for “nice photos” (or “Nice photos”) would generate markedly different results the day before and the day after the 2016 terrorist attack in Nice, France. Google’s SERP adapts moment-to-moment based on how users interact with search results.
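The "frozen system" critique can be sketched in a few lines: the same ranking function, fed updated interaction counts, returns different results for the identical query - which a one-time snapshot cannot capture. The page names and click tallies below are invented for illustration:

```python
# Toy illustration of a "live" ranker. Same query, same function, different
# results once user-interaction counts shift after a news event.

def rank(click_counts: dict) -> list:
    """Order page IDs by accumulated clicks, most-clicked first."""
    return sorted(click_counts, key=click_counts.get, reverse=True)

# Hypothetical click tallies for the query "nice photos"
clicks_before = {"travel-guide-nice": 500, "news-nice-attack": 10}
clicks_after = {"travel-guide-nice": 520, "news-nice-attack": 9000}

top_before = rank(clicks_before)[0]  # the travel guide leads beforehand
top_after = rank(clicks_after)[0]    # the news coverage leads afterward
```

Evaluating only `clicks_before` (a frozen snapshot) would miss the feedback loop entirely - which is the gap Oard says Fox's analysis leaves open.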
An additional relevant piece of Oard’s expert opinion: Oard testified that user-side data also informs Google’s strategic development decisions. User interactions drive whether Google develops a new search feature, like pop-out boxes for game scores or stock prices, and how to deploy them. We were reminded of testimony by Apple SVP of Machine Learning John Giannandrea:
Question: “So the more queries a search engine sees, the more opportunities it has to improve in this manner?”
Giannandrea (SVP, Apple): “The more opportunities the engineers have to look for patterns and improve the algorithm, yeah.”
And testimony of Google VP of Search Pandu Nayak:
Question: “One thing that Google might do is look at queries for inspiration on what it might need to improve on. Does that sound familiar?”
Nayak (VP, Google Search): “Yes.”
Professor Fox’s central conclusion was as follows: “A company as efficient as Google but with Microsoft’s scale would not meaningfully benefit from an increase in user interaction data.”
Professor Oard’s response to Professor Fox’s central conclusion: “I disagree and I’m surprised Professor Fox doesn’t disagree with this conclusion.”
On cross-examination by Google, Oard conceded that there are many variables that inform search engine development and quality, but maintained that this was not inconsistent with his determination that user-side data was a significant one of those variables. He was prodded about the diminishing returns of user data beyond a certain inflection point, but insisted that Google had not yet reached the point where the marginal value of additional user data no longer warranted the expense.
Why is it important if Google benefits from user data? Some of this is just a fascinating insight into how search engines work - and how much they know about you. But the reason it matters for purposes of this trial is that Google is paying tens of billions of dollars per year for something. If user data is the foundation of a superior search engine - as Oard and the Department of Justice argue it is - then Google may very well be paying to exclude competitors from access to that data. And if Google is creating a bottleneck that forecloses access to a critical input for competition, Google has some explaining to do…
We expect the testimony portion of the trial to conclude on Day 42, Thursday, November 16, with testimony from familiar witness and MIT economist Michael Whinston. You’ll remember Whinston from his testimony at the close of the DOJ’s prima facie case. I still find the slide presentations accompanying his testimony to be an incredibly useful summary of the government’s case. I expect we’ll be treated to a similarly erudite and well-organized rebuttal summary tomorrow.
We ought to have a better forecast of the coming timeline soon, but right now it looks like Google and the DOJ will be filing post-trial briefs, proposed statements of fact, and proposed conclusions of law in early- to mid-February. Reply briefs in March, and in-court closing arguments some time thereafter. Opinion in… May?
Civil litigation really is a game of hurry up and wait.