Game theory explains how algorithms can drive prices up
The original version of this story appeared in Quanta Magazine.
Imagine a city with two widget merchants. Customers prefer cheaper widgets, so the merchants must compete to set the lowest price. Dissatisfied with their meager profits, they meet one night in a smoky pub to hatch a secret plan: if they raise prices together instead of competing, they can both make more money. But this kind of deliberate price fixing, called collusion, has long been illegal. The merchants decide not to risk it, and everyone else enjoys cheap widgets.
For more than a century, US law has followed this basic pattern: Ban those backroom deals, and fair prices should follow. These days, it's not that simple. In many parts of the economy, sellers increasingly rely on computer programs called learning algorithms that repeatedly adjust prices in response to new data about market conditions. These are often much simpler than the "deep learning" algorithms that power modern AI, but they can still be prone to unexpected behavior.
So how can regulators ensure that algorithms set fair prices? Their traditional approach won't work, because it hinges on catching explicit collusion. "Algorithms definitely don't drink with each other," says Aaron Roth, a computer scientist at the University of Pennsylvania.
However, a widely cited 2019 paper showed that algorithms can learn to collude implicitly, even when they are not programmed to do so. A team of researchers pitted two copies of a simple learning algorithm against each other in a simulated market, then let them explore different strategies to increase their profits. Over time, each algorithm learned through trial and error to retaliate when the other cut prices, slashing its own price by a disproportionate amount. The end result was high prices, sustained by the mutual threat of a price war.
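To get a feel for that kind of experiment, here is a minimal sketch in Python of two independent Q-learning agents posting prices in a simulated duopoly. Everything in it, from the five-point price grid to the winner-take-all demand rule and the learning parameters, is an illustrative assumption; it is not the code or the market model used in the 2019 study.

```python
import random

# Two independent Q-learning agents repeatedly post prices in a simulated
# duopoly. Each agent's state is the pair of prices posted last round, so
# it can condition on, and retaliate against, its rival's latest move.

PRICES = [1, 2, 3, 4, 5]                 # hypothetical discrete price grid
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.05   # learning rate, discount, exploration
ROUNDS = 200_000

def profits(p1, p2):
    """Winner-take-all demand: the cheaper seller gets the market; ties split it."""
    if p1 < p2:
        return float(p1), 0.0
    if p2 < p1:
        return 0.0, float(p2)
    return p1 / 2, p2 / 2

Q = [{}, {}]  # one Q-table per agent: state -> {price: estimated value}

def choose(agent, state):
    """Epsilon-greedy action selection over the price grid."""
    if random.random() < EPSILON or state not in Q[agent]:
        return random.choice(PRICES)
    q = Q[agent][state]
    return max(PRICES, key=lambda p: q.get(p, 0.0))

def update(agent, state, price, reward, next_state):
    """Standard one-step Q-learning update."""
    q = Q[agent].setdefault(state, {})
    best_next = max(Q[agent].get(next_state, {}).values(), default=0.0)
    old = q.get(price, 0.0)
    q[price] = old + ALPHA * (reward + GAMMA * best_next - old)

state = (random.choice(PRICES), random.choice(PRICES))
for _ in range(ROUNDS):
    p1, p2 = choose(0, state), choose(1, state)
    r1, r2 = profits(p1, p2)
    next_state = (p1, p2)
    update(0, state, p1, r1, next_state)
    update(1, state, p2, r2, next_state)
    state = next_state

print("final posted prices:", state)
</code>
```

In a market of this shape, undercutting is always profitable, so competition should drive prices toward the bottom of the grid. The striking finding of the research was that independently trained learners can instead settle well above that floor, with sharp price cuts emerging as punishment moves; in toy models like this one, runs often end with both agents posting elevated prices.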
Implicit threats like this underlie many cases of human collusion, too. So if you want to guarantee fair prices, why not force sellers to use algorithms that are inherently incapable of expressing threats?
In a recent paper, Roth and four other computer scientists showed why this may not be enough. They proved that even seemingly benign algorithms, each optimized for its own profit, can sometimes produce bad results for buyers. "You can still get high prices in ways that seem reasonable from the outside," says Natalie Collina, a graduate student working with Roth and a coauthor of the new study.
Not all researchers agree on the implications of this finding; much depends on how you define "reasonable." But it shows how thorny the questions surrounding algorithmic pricing are, and how difficult the practice is to regulate.
