Day 7: Is a Recipe Any Good Without Ingredients on Hand?
A meeting of the minds dissolves in the cold light of morning as the parties spar over what DOJ's DFP remedy actually seeks. And a reflection that ties testimony back to the core goals of remedies.
Yesterday, I left off with a cliffhanger. In this edition, we’ll close the loop on that witness, see what his boss and another senior Google engineer had to say, reflect on what the testimony reveals about whether the DOJ’s remedies satisfy its goals, highlight intriguing new thoughts from DOJ’s last witness about the impact of AI on remedies, and learn about the lingering questions on Judge Brinkema’s mind.
But first, time to resolve the cliffhanger. One of Google’s own witnesses, Glenn Berntson, an Engineering Director for AdX and DFP (together called GAM), admitted on day six that Google could provide DFP’s data and decision logic to publishers. Berntson seemed open to giving publishers more information, calling it a “good idea,” and almost seemed on the brink of negotiating with DOJ’s Matthew Huppert the transition of DFP’s decision logic into open source. This would be a huge breakthrough! One of DOJ’s two main structural remedies accomplished!
In the cold morning light, though, the momentum evaporated. Huppert did ask, “Would Google be willing to allow its publisher customers to audit what you call its ‘Final Logic’?” but Berntson demurred. Judge Brinkema then asked him whether he was saying it was technically feasible, and he retreated another step: the problem, he said, is not the code being open source; the problem is having the code and the data in separate places. Somewhere along the way, Brinkema asked whether giving a third party the logic without the data was like giving someone a recipe without ingredients. Berntson said that giving the logic would provide transparency to publishers, which sounds more useful than just giving a recipe. He also acknowledged that publishers would have ingredients (data) from their own customers. DOJ then pressed from a different angle, asking whether DFP’s massive store of historical bid data gives it a competitive advantage over smaller ad servers. Berntson dodged, disagreeing with the characterization. Data is a somewhat tricky subject for Google, as illustrated by another witness discussed below.
There are two different arguments going on about technical feasibility: one revolving more around AdX, and the other around DFP. They overlap and mix together, but they were separately addressed on Tuesday, making them clearer than usual.
Before returning to Berntson’s testimony about DFP, let’s first consider what his boss (who testified after Berntson) said about the technical feasibility questions that relate most to AdX. Noam Wolf is Engineering Lead for GAM. I don’t know how DOJ managed it, but Wolf was atypically forthright, for a Google employee. No agenda to exaggerate the barriers to migration or the complexity of source code was apparent. The questions DOJ posed were mainly on two topics: Google’s principles of good coding, and the timeline for AdX divestiture found in Google’s 2023-24 internal project. Wolf had worked on that project, as had George Levitte, who testified about it on day six. Wolf, too, said it was a “business divestiture” only, but didn’t dispute the plausibility of the timeline. Most of the questions, though, were on the principles for good coding, originally introduced during Jon Weissman’s day four testimony via the Google style guide for training coders.
Wolf did not dissemble, but simply confirmed industry best practices for coding. The principles in the style guide—simplicity, uniformity, flexibility, separation of concerns, and testability—were, of course, practiced by his team; they are practiced by all good coders, and his team, of course, are good coders. This question of the nature of Google’s source code and its dependencies is what I called the second feasibility argument. In my judgment, DOJ has won this argument decisively: everyone considers Google technically superlative, and the attempts by some Google witnesses to present its source code as an impenetrable rat’s nest—while, contradictorily, not insulting either their employer or themselves—have been wildly implausible.
This argument has been mostly associated with the divestiture of AdX. There is much more to technical feasibility, but the dispute has been largely about whether the product being migrated is—except for its size—tractable, normal, well-behaved, the sort of thing capable engineers should know how to handle. And it does seem to be.
The other technical feasibility argument, from Berntson, is specific to the Final Auction Logic, to separating it from the Remainder of DFP and making it open source. So, let’s go back to Berntson’s testimony.
Berntson argued that the Final Auction Logic (as DOJ’s proposal calls it), or Final Logic (as he and Judge Brinkema agreed Monday it was better called), cannot be separated from DFP and moved elsewhere to be administered by some other entity, such as Prebid. The problem, he argued, is sheer size. Not the size of the code, which he seems to accept others besides Google could handle, but the size of the data that the code operates on. Enhanced Dynamic Allocation (EDA), the jewel of Google’s ad tech system, demands such scale. Remember, from day six, that Google’s DFP decision logic processes all its data once a day, a run that takes ten hours on 4,000 computers, and applies the results to match ads to spaces. Berntson argued that if the data sits in one place (Google’s DFP) and the calculations are done in another (say, at Prebid), the back-and-forth communication demands would be enormous and would make the decisions impractically slow.
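To make Berntson’s data-gravity point concrete, here is a purely illustrative back-of-envelope sketch. The data volume and link speed below are my own assumptions, not figures from the testimony; the point is only that moving bulk data to an outside operator can dwarf a ten-hour in-place computation:

```python
# Back-of-envelope: how long would it take just to MOVE the bid data
# to an external operator, before any computation even begins?
# All figures below are illustrative assumptions, not trial evidence.

PETABYTE = 10**15                      # bytes

data_size_bytes = 2 * PETABYTE         # assumed daily bid-data volume
link_bytes_per_sec = 10 * 10**9 / 8    # assumed 10 Gb/s external link

transfer_hours = data_size_bytes / link_bytes_per_sec / 3600
print(f"Transfer alone: about {transfer_hours:,.0f} hours")
# Under these assumptions, the transfer alone takes hundreds of hours,
# versus the ten-hour run when code and data live in the same datacenter.
```

Real systems could shard and stream the data, so this is a caricature, but it captures why Berntson frames co-location of code and data as the crux rather than the openness of the code itself.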
Berntson and Huppert went back and forth quite a lot, but here is the gist (as I understood it). Berntson thinks that DOJ’s proposal requires Google to stop doing the Final Logic calculations on its own machines and instead outsource them to the open-source world. Huppert, to the contrary, thinks the proposal allows Google to do the calculating in-house, as long as it uses the open-source code that has been divested into some outside entity’s care. I would have thought that the answer to which of those is right would be well known, at least to the principals.
On the theory that the DOJ lawyer had better know the proposal inside out, whereas a witness has merely read it (for understanding, but not to live and breathe it), I would expect Huppert to be right here. Furthermore, from my limited understanding of the technology that Michael Racic, President of Prebid, described on day five, Prebid’s software, which DFP’s decision logic would be joining (if Prebid is the administrator), sits on GitHub’s servers. Any publisher is free to download the software to their own server and run it there. That’s why Racic had that big discussion about versions of the software (1.1, 1.2, ... 1.7): Prebid’s committees might be updating the software while older versions are still operating out there on many publishers’ servers.
So, I’m going to assume Huppert is right, and DFP Remainder will be allowed to go get their beloved, divested software and bring it home to be complete once more, except now under court supervision. If this is right, though, the court has to have some way to make sure Google isn’t modifying the code to version 1.G that violates its parole. It probably does have a way, somewhere in the behavioral measures.
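How might a court (or its monitor) actually verify that the code Google runs matches the divested open-source release? One simple mechanism, purely my speculation and not anything proposed at trial, is fingerprinting deployed artifacts against the published release:

```python
# Hypothetical compliance check: compare a cryptographic fingerprint of
# the code Google deploys against the published open-source release.
# The code snippets below are invented stand-ins, not real DFP logic.
import hashlib

def fingerprint(source_code: str) -> str:
    """Return a SHA-256 digest of a code artifact."""
    return hashlib.sha256(source_code.encode("utf-8")).hexdigest()

published_release = "def final_logic(bids): return max(bids)"
deployed_copy     = "def final_logic(bids): return max(bids)"
modified_copy     = "def final_logic(bids): return prefer_adx(bids)"

assert fingerprint(deployed_copy) == fingerprint(published_release)   # compliant
assert fingerprint(modified_copy) != fingerprint(published_release)   # flagged
```

Any byte-level change to the code, a hypothetical “version 1.G,” would produce a different digest, which is why release-pinning and hash verification are standard practice in open-source supply chains; whether the behavioral measures contemplate anything like this, the testimony so far has not said.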
In any event, Arielle Garcia of US v. Google delves deeper into the back and forth on remedy mechanics, for those who’d like to read the tea leaves. But let’s take a step back from the technical weeds. Based on the testimony so far, how well does all this meet the DOJ’s original objectives?
Recall from Julia Tarver Wood’s opening remarks on day one that the four objectives of monopolization remedies that DOJ gleaned from Supreme Court precedent are these:
1) “unfetter a market from anticompetitive conduct,”
2) “terminate the illegal monopoly,”
3) “deny to the defendant the fruits of its statutory violation,” and
4) “ensure that there remain no practices likely to result in monopolization in the future.”
Most of the behavioral remedies under debate seem to be aimed at objective one, unfettering the markets. Objective two seems to be the biggest problem, which is why the structural remedies are the main focus. Even if Google behaves angelically from here on, they still have monopolies unless forced to divest. And in light of the discussion between Berntson and Huppert, it seems that open-sourcing DFP’s auction logic does little to terminate Google’s 91 percent dominance of the ad server market; DFP apparently remains just as it was, but for its code being open source and transparent.
Objective three also seems to get little traction. One product Google gets from the ten hours of crunching data on 4,000 computers is EDA, which Berntson calls “state of the art.” I’m sure he’s right that it is. The data, the computing power, and the state-of-the-art allocation of ads to spaces are all fruits of Google’s violations. The court may not be able to deny Google those fruits if it must preserve state-of-the-art products for Google’s customers. Berntson plainly assumed any viable solution here had to preserve the current caliber of Google’s offerings. But in the Supreme Court precedents Wood cited, I don’t see where the law imposes any such obligation on Judge Brinkema’s court. At the same time, judges have wide discretion in setting remedies, and Brinkema has not clearly shown her cards about how this will play into her thinking.
After Noam Wolf’s testimony, Nirmal Jayaram, Google Senior Director of Engineering, testified for Google. He touched more on buy-side tools than others have, but hit the usual themes: divestiture will hurt publishers’ ROI; open-web ads are trending down anyway; and so on. Like other Google witnesses, he denied that Google uses “first party data” for targeting users in open-web display ads, which struck some observers as inconsistent with what Google tells publishers, leading them to question whether Google is playing word games or misleading the court.
Finally, in the midst of Google’s case in chief, DOJ brought an out-of-order witness who was not available to testify along with the rest of DOJ’s witnesses. Rajeev Goel, CEO of PubMatic, a major ad tech company, testified for DOJ by video for the last four hours of the day. I suppose he was an appropriate final witness in that he covered and reiterated pretty much all the ground that the other business witnesses had covered on days one and two. He, too, hit all the themes for his side: the necessity of structural remedies, the failure of Google to improve its behavior even after its conviction, and so on. He placed far more emphasis on artificial intelligence than we have heard before; his company is already using it heavily. And he gave a new twist to the divestiture argument: AI enables migration to happen several times faster than human engineers could manage alone, and all that work on modular dependencies is just what AI is good at.
At the end of Goel’s testimony, we got another reaction from Judge Brinkema. Goel, like most of DOJ’s witnesses, thought Google could be counted on to misbehave if it could get away with it. Judge Brinkema brought up what she called “the two elephants in the room”: the changed circumstances Google will be facing. First, whatever she decides will be a court order under which Google will have to operate, with the attendant threat of more serious sanctions for violating it; and second, Google is facing a number of private lawsuits over its monopolistic behavior. We have not, she suggested, heard enough about how these factors will change the calculus for Google.
So that presents another cliffhanger: how will Brinkema’s provocations shape the testimony yet to come during Google’s case in chief, and then the DOJ’s rebuttal?
I'm glad that I don't have the task of covering this trial. I'd be overwhelmed by the technical complexities. And yet, following the reporting, I feel like I know the essentials of what's going on.