It’s hard to pinpoint exactly when we lost control of what we see, read—and even think—to the biggest social-media companies.
I put it right around 2016. That was the year Twitter and Instagram joined Facebook and YouTube in the algorithmic future. Ruled by robots programmed to keep our attention as long as possible, they promoted stuff we’d most likely tap, share or heart—and buried everything else.
Bye-bye, feeds that showed everything and everyone we followed in an unending, chronologically ordered river. Hello, high-energy feeds that popped with must-clicks.
At around the same time, Facebook—whose News Feed has been driven by algorithms since 2009—hid the setting to switch back to “Most Recent.”
No big deal, you probably thought, if you thought about it at all. Except these opaque algorithms didn’t only maximize news of T. Swift’s latest album drops. They also maximized the reach of the incendiary—the attacks, the misinformation, the conspiracy theories. They pushed us further into our own hyperpolarized filter bubbles.
A growing body of research suggests that social media is accelerating political polarization, and many political scientists worry it’s tearing our country apart. It isn’t clear how to solve the problem. And new research suggests that one often-proposed solution—exposing users on the platforms to more content from the other side—might actually be making things worse, because of how social media amplifies extreme opinions.
If social media seems particularly infuriating lately, it’s possible that it’s as much about the way it shapes our perception of what’s going on as it is about the reality of the viewpoints and behavior of our fellow Americans.
It’s also possible that highly partisan media—something that was common at the birth of our nation but which the U.S. had a relative respite from during the age of broadcast media—is an unavoidable consequence of America’s foundational right to free expression. Technology only magnifies this natural effect of democracy.
One of the challenges of studying polarization is defining polarization.
There are different kinds. One, known as affective polarization, measures how much people of one party dislike members of the opposite party. Various measures of affective polarization have shown that over the past 60 years, it’s gotten much worse. Another kind, known as ideological polarization, measures how far apart members of each party are on issues such as abortion and gun control. This kind of polarization has, contrary to what you might think, remained relatively stable over time.
In other words, many Americans hate each other more than ever, but they don’t disagree with each other any more than they used to.
Taken as a whole, the literature on whether social media polarizes us is inconclusive, says Chris Bail, a sociologist who directs Duke University’s Polarization Lab, a fact that Facebook itself has highlighted in its past responses to The Wall Street Journal’s coverage of the tech giant’s role in dividing America. Part of the reason it’s so difficult to isolate any one influence on the polarization of Americans, he adds, is that there are so many—from geographic self-sorting to long-term changes in the way political parties organize themselves.
It’s also impossible to do the kind of experiment needed to measure the contribution of any one factor, he says: Imagine switching off Facebook for an entire country, just to see whether that reduced political polarization.
To try to sort out what’s going on, researchers are instead creating mathematical models in which such experiments can be conducted. Like all simulations, these models are limited by the assumptions they make about the real world, yet they are giving rise to a new wave of intuitions and testable hypotheses about how social media affects us.
One such model, just published by researchers at Northwestern University, incorporates recent, and in some ways counterintuitive, findings by political scientists. One, from a 2018 study by Dr. Bail, is that repeatedly exposing people on social media to viewpoints different from their own just makes them dig in their heels, reinforcing their existing views rather than swaying them to the other side. (Dr. Bail’s study was conducted on U.S. users of Twitter, but other studies have begun to replicate it, he adds.)
In the past, social-media giants have been accused of only showing us content that agrees with our preconceptions, creating echo chambers or “filter bubbles.” The proposed solution, trumpeted by pundits of every stripe, was to change the social-media algorithms so that they would show us more content from people who disagree with us.
According to David Sabin-Miller and Daniel Abrams, creators of this latest model, exposing us to viewpoints different from our own, in whatever medium we encounter them, might actually be part of the problem. The reason is probably intuitive for anyone who has the misfortune to spend an unhealthy amount of time on Facebook, Instagram, Twitter, YouTube or even cable news. (During the pandemic, that’s more of us than ever.) Because social media and Balkanized TV networks tend to highlight content with the biggest emotional punch—that is, they operate on the principle that if it’s outrageous, it’s contagious—when we’re exposed to a differing view, it often takes an extreme form, one that seems personally noxious.
Mr. Sabin-Miller and Dr. Abrams, both mathematicians, call this effect “repulsion.” In addition to the “pull” of repeatedly seeing viewpoints that reinforce our own inside our online echo chambers, repulsion provides a “push” away from opposing viewpoints, they argue. Importantly, this repulsion appears to be a more powerful force, psychologically, than attraction to our own side of a debate.
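To make that push-and-pull dynamic concrete, here is a minimal toy sketch of an attraction-repulsion opinion update, written in Python. It illustrates the general idea rather than the equations in the Northwestern paper; the tolerance threshold, the step sizes and the assumption that repulsion outweighs attraction are all illustrative choices, not figures from the study.

```python
import random

# Toy attraction-repulsion opinion update (an illustration, not the
# published Northwestern model). Opinions and content both live on [-1, 1].
TOLERANCE = 0.6   # assumed: content farther away than this feels noxious
PULL = 0.05       # assumed: strength of attraction toward agreeable content
PUSH = 0.08       # assumed: repulsion is modeled as the stronger force

def update_opinion(opinion, content):
    """Nudge an opinion toward nearby content and away from distant content."""
    gap = content - opinion
    if abs(gap) <= TOLERANCE:
        opinion += PULL * gap            # attraction: drift toward the content
    else:
        opinion -= PUSH * gap            # repulsion: drift away from it
    return max(-1.0, min(1.0, opinion))  # stay on the [-1, 1] opinion scale

# A mildly left-leaning user shown a steady stream of far-right content.
opinion = -0.2
for _ in range(50):
    opinion = update_opinion(opinion, random.uniform(0.7, 1.0))
print(round(opinion, 2))  # ends up near -1.0 under these toy parameters
```

Under these toy parameters, a mildly left-leaning user fed a steady diet of far-right content drifts toward the left extreme rather than toward the center, which is the repulsion effect the researchers describe.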
Bad actors on social media—such as Russian agents who have been active in advance of the 2020 election, attempting to divide Americans further—already appear to recognize repulsion as a tool, says Mr. Sabin-Miller. These trolls will assume roles on both sides of an ideological divide, and play dumb to make one side of the debate look foolish, while playing down the extremity of views on the other side.
“A reason we have some confidence in our model is the people who are trying to polarize us are already doing what they should be, by our model, to be optimally effective,” he adds.
Another model, by Vicky Chuqiao Yang, an applied mathematician at the Santa Fe Institute, explored a phenomenon political scientists have previously described: the way political parties have themselves become more polarized over time. Her model buttresses past work suggesting that political parties play to their more extreme constituents because doing so is more strategically advantageous than courting ideological moderates, who often swing to one party or the other.
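A heavily simplified sketch of that logic, built on my own assumptions rather than Dr. Yang’s published model: suppose base voters turn out only when a party’s platform stays close to them, while moderates swing and are only weakly swayed by where the platform sits. Under those assumptions, an extreme platform collects more expected votes than a centrist one.

```python
import random

# Toy illustration (assumed behavior, not Dr. Yang's model): why energizing
# the base can beat chasing swing voters. Voters' ideal points sit on [-1, 1].
random.seed(1)
voters = [random.uniform(-1, 1) for _ in range(100_000)]

def expected_votes(platform):
    """Expected vote count for a right-of-center party at `platform`."""
    votes = 0.0
    for x in voters:
        if x > 0.5:
            # Base voters: turn out only if the platform stays close to them.
            votes += 1.0 if abs(x - platform) < 0.4 else 0.0
        elif x >= 0:
            # Moderates: swing voters, only weakly swayed by platform position.
            votes += 0.5 - 0.2 * abs(x - platform)
        # Voters with x < 0 back the other party.
    return votes

print(expected_votes(0.8))  # play to the base
print(expected_votes(0.2))  # court the moderates: fewer expected votes here
```

Because the moderates in this sketch are up for grabs no matter where the party stands, moving toward them gains little, while moving away from the base costs turnout, so the extreme platform wins.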