Kleinberg and Raghavan (2021)
Algorithmic monoculture is a growing concern in the use of algorithms for high-stakes screening decisions in areas such as employment and lending. If many firms use the same algorithm, even if it is more accurate than the alternatives, the resulting “monoculture” may be susceptible to correlated failures, much as a monocultural system is in biological settings. To investigate this concern, we develop a model of selection under monoculture. We find that even without any assumption of shocks or correlated failures—i.e., under “normal operations”—the quality of decisions may decrease when multiple firms use the same algorithm. Thus, the introduction of a more accurate algorithm may decrease social welfare—a kind of “Braess’ paradox” for algorithmic decision-making.
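The monoculture effect described in the abstract can be made concrete with a toy simulation. This is a minimal sketch, not the paper's actual model: I assume two firms hiring sequentially from a Gaussian candidate pool, each ranking candidates by a noisy estimate of true quality, and the names `hire_two` and `mean_welfare` are my own. In the "shared" (monoculture) case both firms rank with the same noisy scores, so they also share the same estimation errors; in the "independent" case each firm draws its own noise.

```python
import numpy as np

def hire_two(quality, noise_sd, shared, rng):
    """Two firms hire one candidate each, in sequence.

    Each firm ranks candidates by a noisy estimate of true quality.
    If `shared` is True, both firms use the same noisy scores
    (monoculture); otherwise each firm draws its own noise.
    Returns the total true quality of the two hires.
    """
    n = quality.size
    score1 = quality + rng.normal(0.0, noise_sd, n)
    first = int(np.argmax(score1))
    # Under monoculture, firm 2 inherits firm 1's scores (and its errors).
    score2 = score1 if shared else quality + rng.normal(0.0, noise_sd, n)
    # Firm 2 picks its top-ranked candidate among those remaining.
    remaining = np.delete(np.arange(n), first)
    second = remaining[int(np.argmax(score2[remaining]))]
    return quality[first] + quality[second]

def mean_welfare(noise_sd, shared, trials=20000, n=10, seed=0):
    """Average total hired quality over many independent markets."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        q = rng.normal(0.0, 1.0, n)  # true candidate qualities
        total += hire_two(q, noise_sd, shared, rng)
    return total / trials

print("shared:     ", mean_welfare(1.0, shared=True))
print("independent:", mean_welfare(1.0, shared=False))
```

Comparing the two printed averages at equal noise levels illustrates the diversification intuition under "normal operations": no shocks or correlated failures are simulated, only the fact that a shared ranking repeats the same mistakes for every firm. The paper's stronger claim, that even a *more accurate* shared algorithm can lower welfare, can be probed by giving the shared case a smaller `noise_sd` than the independent one.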
Recommender systems today have two huge problems that are leading companies (sometimes under enormous pressure from the public) to rethink how they're being used: technical bias and business bias.
Pinterest sounds unusual:
Pinterest is not immune to the troubles that have plagued higher-profile social networks: viral misinformation, radicalization, offensive images and memes, spam, and shady sites trying to game the algorithm for profit. It deals with each of these to one degree or another. But the company has taken a different approach than rival platforms: embrace bias, limit virality, and become something of an anti-social network.…
But what if optimizing engagement isn’t your ultimate goal? That’s a question some other social networks, such as Facebook and Twitter, have recently begun to ask, as they toy with more qualitative goals such as “time well spent” and “healthy conversations,” respectively. And it’s one that Seyal, Pinterest’s head of core product, says paved the way for the new feature the company is rolling out this week.
One of Pinterest users’ top complaints for years has been a lack of control over what its algorithm shows them, Seyal says. “You’d click on something, and your whole feed becomes that.” The question was how to solve it without putting the algorithm’s efficacy at risk. “Every person who runs a feed for an online platform will say, ‘Oh, yeah, we tried to make it more controllable. But when we tried to launch it, it dropped top-line engagement.’”
Eventually, Seyal says he decided that was the wrong question altogether. Instead, he told the engineers tasked with addressing the user-control problem that they didn’t have to worry about the effects on engagement. Their only job was to find a fix that would reduce the number of user complaints about the feed overcorrecting in response to their behavior.