The 2020 election on Facebook
The first in a major series of studies into Meta's effects on politics.
Yesterday, Science and Nature published the first four pieces in a planned 16-study series drawing on Meta user data from the 2020 presidential election cycle. CITAP senior researcher Deen Freelon is a member of the Facebook and Instagram Election Study (FIES) research team and a co-author on the papers released yesterday. The project is led by Talia Stroud of the University of Texas and Josh Tucker of New York University.
These studies are a big deal not only for the range of their questions and findings but for how they were conducted. The 17 researchers on the project collaborated directly with Meta to define their research questions and design the studies, making it possible to test the effects of changes to the news feed itself. The researchers’ access to data was limited, however, with Meta employees running analyses on the raw user data. And the project had an independent rapporteur, Michael Wagner of the University of Wisconsin, documenting the process and assessing the independence of the research team in this limited-access arrangement.
So, what are these first four studies, and what did they find?
Asymmetric ideological segregation in exposure to political news on Facebook
This study tracked all political news URLs shared on Facebook through a funnel of engagement: from all content a user could have seen, through the subset displayed on their news feed, to the portion that users engaged with (clicked, reacted, reshared, or commented on). In reviewing that pipeline, the FIES team found that:
Ideological segregation on Facebook is high—liberal and conservative audiences are consuming different political news on the site.
Ideological segregation increases through the funnel of engagement—the political news that users click on, react to, or reshare is even more likely to be ideologically siloed than the content of the news feed itself.
This segregation is asymmetrical, with a conservative news ecosystem consumed exclusively by conservatives that has no liberal equivalent.
Most misinformation is distributed in this conservative news ecosystem.
News sources favored by conservative audiences were more prevalent on Facebook.
Reshares on social media amplify political news but do not detectably affect beliefs or opinions
How does the resharing function affect what information users see on Facebook? This study removed most reshared content from participants’ feeds for three months. The researchers found that this change:
Substantially decreased the amount of political news users see
Reduced overall clicks and reactions, including partisan news clicks
Decreased users’ news knowledge
Did not significantly affect political polarization or individual political attitudes
How do social media feed algorithms affect attitudes and behavior in an election campaign?
Algorithmic feeds have been the target of Congressional hearings and proposed legislation as policymakers seek to shape online speech. In this FIES study, a sample of participants saw a reverse-chronological news feed instead of the algorithmically generated feeds typically used on Facebook and Instagram. The researchers found that a reverse-chronological feed:
Displayed more untrustworthy and political content on both Facebook and Instagram.
Included less content classified as uncivil or slur-containing (on Facebook).
Included more content from moderate contacts and sources with ideologically mixed audiences (on Facebook).
Did not change the reported levels of political knowledge or political attitudes of the users who saw it over the three months of the study.
Like-minded sources on Facebook are prevalent but not polarizing
Finally, the FIES researchers explored the role of “echo chambers” on social media by using Facebook partisanship classifiers for users, groups, and pages to decrease the share of news feed content coming from “like-minded” sources for a sample of study participants over three months. As a baseline, they find that Facebook users do see a majority of content from like-minded sources: 50.4% of the median user’s feed comes from like-minded sources, versus 14.7% from “cross-cutting” (ideologically different) sources, with the remainder from sources whose party identification score was neither liberal nor conservative.
After modifying feeds to reduce like-minded content, the researchers found that doing so:
Increased participants’ exposure to cross-cutting content
Decreased exposure to uncivil language
Did not change participants’ ideological extremity, evaluations of political candidates, or beliefs in false claims
Alongside these four studies, Michael Wagner published a history of the project and his observations of the collaboration process between the researchers and Meta. This analysis considers how independent an industry-supported project like FIES can be:
One shortcoming of industry–academy collaboration research models more generally, which are reflected in these studies, is that they do not deeply engage with how complicated the data architecture and programming code are at corporations such as Meta. Simply put, researchers don't know what they don't know, and the incentives are not clear for industry partners to reveal everything they know about their platforms.
Elsewhere in the piece, he quotes another commenter making this same point:
One social media expert who was not on the team pointedly noted that, “what data is made available shapes what is asked and answered,” an issue for readers to actively consider when, for example, interpreting the degree to which the results reflect independent user behavior as compared to Facebook and Instagram's algorithms and other platform features affecting user behavior.
Wagner’s piece includes a note that these first four studies may not be representative of the full project’s findings, and that maintaining public interest for later papers will be important: “Though speculative, this could mean that this first wave of published articles will get the most public attention—because they are the first papers released in the project—even though other papers may produce results that show larger substantive effects.” He concludes with a call for more robust, defined structures for data-sharing and research that do not require platforms’ permission.
Reading this, I had three immediate thoughts.
First, tweaking a user’s feed on one social media platform for three months doesn’t change their political beliefs—which makes intuitive sense. Even in the context of discussing the potential sorting and siloing power of algorithmic recommendation systems, much of the critique is that they give people more of what they already want, rather than forcing them into new beliefs. And any one platform makes up only a fraction of our overall information ecosystem and social connections. Given what we know about the role of identity in political affiliation, it’s understandable that participants’ attitudes would remain steady even as their news sources shift.
Second, this is far from the last word on the impact of these platforms on our public lives. Later non-experimental papers could reveal far more about how these platforms influence our democracy. There are plenty of other ways to understand the effects of social media platforms that can’t be measured by whether they shift a political polarization score or political attitudes on a survey, and those other effects could still be quite meaningful and important.
Third, we still need major reform to ensure regular research access to platform data without an elaborately negotiated invitation. The research team behind this project is a true dream team—but for meaningful democratic oversight of tech platforms, critical research can’t be limited to elite, US-based researchers. It needs to be the work of many more hands, coming from a much wider range of fields, perspectives, and national contexts. Or, to quote Mike Wagner once again, “In the end, independence by permission is not independent at all.”
Publications and appearances
“Together, posts advance solidarity by naming political commitments, elevating shared stakes in addressing violence against Black and Asian communities, and highlighting how groups’ political futures are deeply intertwined. Importantly, it is not just shared histories of oppression that these organizations make legible on Instagram but shared histories of resistance.” Affiliate Rachel Kuo and Sarah J. Jackson explore how Black and Asian American feminists build political solidarities on Instagram.
“So they kind of get their marching orders from the social media content, from the podcast, from the chatter, and they go, oh, “is this where we…”—it's a way to indicate that you are a member of the team. This is identity politics, white identity politics 101. It's not about the song. It's about indicating that you are a member of the team. And it is much easier to indicate your membership by clicking play over and over again on Spotify. And the team knows that.” Tressie McMillan Cottom discusses country music and racial identity with Sam Sanders on the Into It podcast.
Coming soon
August 30 at APSA: CITAP is cohosting the APSA Pre-Conference in Political Communication: The Age of Misinformation.
September 7 at 3pm: Lee McGuigan will discuss his new book Selling the American People. In-person at the Freedom Forum Conference Center and via livestream. Full details to come shortly.
October 16 at CITAP: Misinformation and Marginalization Symposium. Registration information coming soon!
October 18 at AoIR: Alice Marwick, Yvonne Eadon, and Rachel Kuo are among the co-organizers of an AoIR preconference on the future of conspiracy.
October 22 at the Annenberg Public Policy Center: The Post-API Conference.