Dynamics of recommender systems and other AI social interventions at societal scale

Variational approximations to high modernism

April 26, 2023 — July 3, 2023

classification
collective knowledge
confidentiality
culture
economics
ethics
faster pussycat
game theory
how do science
incentive mechanisms
innovation
language
machine learning
mind
neural nets
NLP
sociology
stringology
technology
UI
wonk
Figure 1: Recommended for you

1 Matthew effects

Long-term Dynamics of Fairness Intervention in Connection Recommender Systems

Kleinberg and Raghavan (2021)

Algorithmic monoculture is a growing concern in the use of algorithms for high-stakes screening decisions in areas such as employment and lending. If many firms use the same algorithm, even if it is more accurate than the alternatives, the resulting “monoculture” may be susceptible to correlated failures, much as a monocultural system is in biological settings. To investigate this concern, we develop a model of selection under monoculture. We find that even without any assumption of shocks or correlated failures—i.e., under “normal operations”—the quality of decisions may decrease when multiple firms use the same algorithm. Thus, the introduction of a more accurate algorithm may decrease social welfare—a kind of “Braess’ paradox” for algorithmic decision-making.
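The correlated-failure intuition in that abstract can be reproduced in a few lines of simulation. The sketch below is a toy version, not Kleinberg and Raghavan's exact model: two firms hire sequentially from one candidate pool, scoring candidates either with a single shared noisy algorithm (monoculture) or with independent, equally noisy algorithms, and we compare the average true quality of the hired pair.

```python
import numpy as np

rng = np.random.default_rng(0)

def hire(true_q, scores_f1, scores_f2):
    """Firm 1 hires its top-scored candidate; firm 2 hires its
    top-scored candidate among those remaining. Returns the sum
    of the hired candidates' true quality."""
    i1 = int(np.argmax(scores_f1))
    s2 = scores_f2.copy()
    s2[i1] = -np.inf  # candidate i1 is already taken
    i2 = int(np.argmax(s2))
    return true_q[i1] + true_q[i2]

n_trials, n_cand, noise = 20_000, 10, 1.0
mono, indep = 0.0, 0.0
for _ in range(n_trials):
    q = rng.normal(size=n_cand)                   # true candidate quality
    shared = q + noise * rng.normal(size=n_cand)  # one shared algorithm
    a = q + noise * rng.normal(size=n_cand)       # firm 1's own algorithm
    b = q + noise * rng.normal(size=n_cand)       # firm 2's own algorithm
    mono += hire(q, shared, shared)
    indep += hire(q, a, b)

print(f"mean hired quality, monoculture: {mono / n_trials:.3f}")
print(f"mean hired quality, independent: {indep / n_trials:.3f}")
```

In runs of this sketch the independent condition tends to come out ahead even though both evaluation regimes are equally accurate, which is the diversification effect the abstract describes; the paper's stronger claim is that monoculture can lower welfare even when the shared algorithm is *more* accurate than the independent alternatives.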

The reign of Big Recsys - by Vicki Boykis

Recommender systems today have two huge problems that are leading companies (sometimes at enormous pressure from the public) to rethink how they’re being used: technical bias, and business bias.

2 Pinterest sounds unusual


How Pinterest Built One of Silicon Valley’s Most Successful Algorithms | by Will Oremus

The troubles that have plagued higher-profile social networks include viral misinformation, radicalization, offensive images and memes, spam, and shady sites trying to game the algorithm for profit; Pinterest deals with all of these to one degree or another. But the company has taken a different approach than rival platforms: embrace bias, limit virality, and become something of an anti-social network.…

But what if optimising engagement isn’t your ultimate goal? That’s a question some other social networks, such as Facebook and Twitter, have recently begun to ask, as they toy with more qualitative goals such as “time well spent” and “healthy conversations,” respectively. And it’s one that Seyal, Pinterest’s head of core product, says paved the way for the new feature the company is rolling out this week.

One of Pinterest users’ top complaints for years has been a lack of control over what its algorithm shows them, Seyal says. “You’d click on something, and your whole feed becomes that.” The question was how to solve it without putting the algorithm’s efficacy at risk. “Every person who runs a feed for an online platform will say, ‘Oh, yeah, we tried to make it more controllable. But when we tried to launch it, it dropped top-line engagement.’”

Eventually, Seyal says he decided that was the wrong question altogether. Instead, he told the engineers tasked with addressing the user-control problem that they didn’t have to worry about the effects on engagement. Their only job was to find a fix that would reduce the number of user complaints about the feed overcorrecting in response to their behaviour.

3 Incoming

4 References

Carroll, Dragan, Russell, et al. 2022. “Estimating and Penalizing Induced Preference Shifts in Recommender Systems.”
Dean, and Morgenstern. 2022. “Preference Dynamics Under Personalized Recommendations.”
Eilat, and Rosenfeld. 2023. “Performative Recommendation: Diversifying Content via Strategic Incentives.”
Hron, Krauth, Jordan, et al. 2022. “Modeling Content Creator Incentives on Algorithm-Curated Platforms.”
Kleinberg, and Raghavan. 2021. “Algorithmic Monoculture and Social Welfare.” Proceedings of the National Academy of Sciences.
Lazar, Thorburn, Jin, et al. 2024. “The Moral Case for Using Language Model Agents for Recommendation.”
Leqi, Hadfield-Menell, and Lipton. 2021. “When Curation Becomes Creation: Algorithms, Microcontent, and the Vanishing Distinction Between Platforms and Creators.” Queue.
O’Neil. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
Raghavan. 2021. “The Societal Impacts of Algorithmic Decision-Making.”
Stray, Halevy, Assar, et al. 2022. “Building Human Values into Recommender Systems: An Interdisciplinary Synthesis.”
Stray, Vendrov, Nixon, et al. 2021. “What Are You Optimizing for? Aligning Recommender Systems with Human Values.”
Teeny, Siev, Briñol, et al. 2021. “A Review and Conceptual Framework for Understanding Personalized Matching Effects in Persuasion.” Journal of Consumer Psychology.
Xu, Ruqing, and Dean. 2023. “Decision-Aid or Controller? Steering Human Decision Makers with Algorithms.”
Xu, Shuyuan, Tan, Fu, et al. 2022. “Dynamic Causal Collaborative Filtering.” In Proceedings of the 31st ACM International Conference on Information & Knowledge Management. CIKM ’22.