
Feedback loops and echo chambers: How algorithms amplify viewpoints

 


Feedback loops in algorithms amplify chosen content, to the exclusion of others. Shutterstock

Courtesy of Swathi Meenakshi Sadagopan, University of Toronto

Whether it’s allegations of ethnic cleansing in Myanmar, anti-Muslim violence in Sri Lanka or the “gilets jaunes” protests in France, it is clear that social media platforms are helping spread divisive messages online at an alarming rate and potentially fueling offline violence.

But the debate is over whether these platforms are an essential cause, without which these events could not have happened, or whether they merely reflect real-world tensions.

Algorithmic amplification occurs when some online content becomes popular at the expense of other viewpoints. It is a reality on many of the platforms we interact with today. The history of our clicks, likes, comments and shares is the data powering the algorithmic engine.

Some believe that the algorithms merely promote divisive behaviours already seen in the physical world. In fact, a 2015 Facebook research study concluded that user self-selection was to blame for the type of content seen in the news feed.

While that’s true, it’s only part of the story. Options can become increasingly narrow, and user choices can be restricted to increasingly extreme content. That is the effect of a phenomenon known as algorithmic confounding, a finding at the heart of the research published in October by researchers Allison Chaney, Brandon Stewart and Barbara Engelhardt at Princeton University.

Training the algorithms

Recommendation algorithms were created by companies such as Facebook, YouTube, Netflix and Amazon to help people make decisions. The algorithm recommends an array of options, the user makes a choice, and that choice is fed back as new knowledge to train the algorithm, without factoring in that the choice was itself an output the algorithm had shown.

This creates a feedback loop, where the output of the algorithm becomes part of its input. As expected, recommendations similar to the choice that was made are shown.
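To make this loop concrete, here is a minimal, hypothetical sketch in Python. It is not the code of any real platform: a toy recommender surfaces the currently most-clicked items and then treats whatever the simulated user clicks as fresh training data.

```python
import random
from collections import Counter

# Toy setup; all numbers are illustrative assumptions, not real platform data.
N_ITEMS, N_ROUNDS, TOP_K = 20, 1000, 3
clicks = Counter({item: 1 for item in range(N_ITEMS)})  # seed every item with one click

for _ in range(N_ROUNDS):
    # The algorithm recommends the currently most-clicked items ...
    recommended = [item for item, _ in clicks.most_common(TOP_K)]
    # ... the simulated user picks one of them at random ...
    choice = random.choice(recommended)
    # ... and that pick is fed straight back in as new training data,
    # without recording that it was itself an output of the algorithm.
    clicks[choice] += 1

print(clicks.most_common(5))  # the few early leaders end up with nearly all the clicks
```

Even though this simulated user has no real preference, the handful of items that happen to lead at the start absorb almost all later clicks: the algorithm’s output has become its own input.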

This leaves us with a chicken-or-egg dilemma: Did you click on something because you were inherently interested in it, or did you click on it because it was recommended to you? The answer, according to Chaney’s research, lies somewhere in between.

But the vast majority of algorithms do not understand the distinction, which results in similar recommendations inadvertently reinforcing the popularity of already-popular content. Gradually, this separates users into filter bubbles or ideological echo chambers where differing viewpoints are discarded.

There are documented cases of users who start with a milder version of a topic and are then recommended progressively more hardcore content. It is like starting on the edge of a spiral and travelling inward toward the core of amplification. Former YouTube engineer Guillaume Chaslot reported this phenomenon during the Brexit campaign and the 2016 U.S. elections.

 

Feedback loops and filter bubbles

As an electrical engineer who now works on big data analytics problems, I was most surprised by how the feedback loop exacerbates the effects of a filter bubble.

“As users within these bubbles interact with the confounded algorithms, they are being encouraged to behave the way the algorithm thinks they will behave, which is similar to those who have behaved like them in the past,” says Chaney. The longer someone’s been active on a platform, the stronger these effects can be.

After simulating the long-term effect, Chaney concludes that “as people within a bubble behave more similarly, the bubbles start to shrink,” limiting the range of users’ behaviour. This is eerily reminiscent of scenes from the 2003 movie The Matrix Reloaded, in which Agent Smith morphs others into clones of himself.

While this worst-case scenario is unlikely, long before it happens we may become suspicious and mistrustful of algorithms, losing the ability to benefit from them.

Algorithmic assumptions

Algorithms are treated as purely engineering challenges rather than the socio-technical problems that they are. As engineers, we are often more concerned with finding the most effective solution than with its societal impact. So, even as companies expanded, the algorithms were updated using the same assumptions that went into building the first iteration of these systems. They were then optimized for more engagement.

How do algorithms work? Mozilla, April 2018.

Algorithm training data come with an inherent set of biases: they reflect existing prejudices or are unrepresentative of the population they serve. When we fail to expose the hidden patterns, associations and relationships in the training data, and to ask how representative they are of the general population, we create systems that propagate these biases and optimize for sameness of outputs.

Despite strong evidence showing that algorithms narrow options, other research studies question these conclusions. Dutch communications expert Judith Möller and others simulated the article recommendations of a Dutch newspaper and reported a diverse set of outputs, comparable to human editors’ picks.

All studies so far are simulations of what we suppose is happening. The only way to know the actual effects is to open the algorithms to expert scrutiny. Additionally, as users, we need to critically examine our own biases and how these are reinforced by the recommended posts we click on.

Exploration or exploitation?

Algorithms don’t have to push us toward more extreme content. New research is being done on engineering diversity, serendipity or novelty into recommendations to improve the range of choices users are shown. This can also help the system better understand user interests, an approach called exploration in machine learning.

In research presented at the Conference on Web Search and Data Mining in February 2018, machine learning researcher Tobias Schnabel and others show that users are resilient to being shown new, possibly uninteresting recommendations, which helps the system learn more about their true preferences.

The researchers liken this to hiring a personal chef who almost always cooks your favourite dishes, yet surprises you with new options occasionally. You may discover that you like some of these new dishes, which then become part of your staple menu.
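In code, that chef-like behaviour resembles a simple “epsilon-greedy” rule: most of the time the system serves its best guess, and a small fraction of the time it deliberately serves something else. The sketch below, with made-up interest scores, only illustrates that general idea; it is not the method used by the researchers cited here or by any particular platform.

```python
import random

def recommend(scores: dict[str, float], epsilon: float = 0.1) -> str:
    """Epsilon-greedy pick: usually exploit the top-scoring topic,
    occasionally explore a random one to learn about wider tastes."""
    if random.random() < epsilon:
        return random.choice(list(scores))  # explore: novelty and serendipity
    return max(scores, key=scores.get)      # exploit: the safest bet

# Hypothetical predicted-interest scores for one user
scores = {"politics": 0.9, "gardening": 0.2, "jazz": 0.1, "chess": 0.05}
print([recommend(scores) for _ in range(10)])  # mostly "politics", with occasional surprises
```

Raising epsilon widens what the user sees at the cost of showing more items they may not care about, which is exactly the exploration-or-exploitation trade-off.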

So far, machine learning algorithms have mastered recommending what’s likely to be most engaging. Breaking the feedback loop might entail mimicking the ways by which humans discover items of interest offline: through friends and family, expert advisers, happy accidents or serendipitous chance.

As artificially intelligent systems draw inspiration from human intelligence, we may end up with more enjoyable and safer social media platforms.

Swathi Meenakshi Sadagopan, Munk Fellow, University of Toronto

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read more: Debate: The ‘gilets jaunes’ movement is not a Facebook revolution

