Online Platforms & Political Violence
A new report covers what online platforms can do to ensure they do not contribute to election-related violence; plus, the harassment of scientists and the "anti-reality" industry.
Preventing Tech-Fueled Political Violence
On March 26, 2024, a working group of experts convened to discuss how online platforms can prevent election-related political violence. Their goal was to provide actionable recommendations to ensure these platforms do not contribute to violence, particularly around the upcoming U.S. election, though the recommendations have global implications.
Following the working group, authors Justin Hendrix, Yael Eisenstat, and Daniel Kreiss, along with contributors, published “Preventing Tech-Fueled Political Violence: What online platforms can do to ensure they do not contribute to election-related violence” in CITAP’s Bulletin of Technology and Public Life.
The Washington Post’s “Tech Brief” covered the report, noting:
“Relying on online platforms to ‘do the right thing’ without the proper regulatory and business incentives in place may seem increasingly futile,” the authors write. But Kreiss said he’s hopeful that at least some tech companies will take election integrity more seriously as the severity of the threat becomes more apparent. “I am optimistic that they can move in this direction, in part because I think they benefit from operating in stable democracies.”
Here are the seven recommendations from the report:
Prepare for threats and violence: Platforms must establish robust standards for threat assessment and crisis planning, including multidisciplinary expertise, scenario planning, crisis training, and transparent engagement with external stakeholders.
Develop and enforce policies: Platforms should implement clear content moderation policies that address election integrity year-round. This includes tackling election denialism and threats against election workers and systems, with transparent and context-sensitive enforcement.
End exceptions for high-value users: Politicians and influencers should not receive exemptions from content policies. Rules should be uniformly enforced to prevent the monetization of harmful content and the spread of election misinformation.
Resource adequately: Platforms need to scale up teams dedicated to election integrity, content moderation, and countering violent extremism, rather than relying excessively on third-party vendors that may lack the necessary expertise and responsiveness.
Be transparent about content moderation decisions: Platforms must explain significant moderation decisions, especially those involving high-profile accounts. This includes establishing crisis communication strategies and explaining cooperation with fact-checkers.
Collaborate with researchers, civil society, and government: Platforms should work with independent researchers, civil society, and government officials to study and mitigate election misinformation. They should maximize data access and counter false claims about these collaborations.
Develop industry standards: Establish industry standards for speech moderation that prioritize protecting elections. An industry body, similar to the Global Internet Forum to Counter Terrorism (GIFCT), could help develop threat assessment capabilities, enforce consistent policies, and facilitate collaboration against threats to democracy and political violence.
The report emphasizes that while the challenges are significant, platforms have a responsibility to act decisively and transparently to protect the integrity of elections and prevent political violence.
Publications and Appearances
Last week, Nature published a piece on the growing harassment of scientists. Notable figures such as Peter Hotez, a well-known pediatrician and vaccine scientist, have faced harassment severe enough to spill into in-person confrontations. Hotez’s experience is not unique: scientists around the world endure online abuse that can escalate into real-world threats. The problem spans scientific fields and targets researchers at every level of public visibility.
Alice Marwick discussed how harassment falls hardest on researchers who already face systemic disadvantages, and how it can discourage people from pursuing the research they want to do, noting: “People who are first-generation academics, or people of colour, or queer folks or women, these are already people who are bearing enormous burdens... I think that there’s a lot of people who are interested in studying controversial topics who don’t want to study them because they’re worried about this kind of backlash.”
Scientific American spoke with Francesca Tripodi and Daniel Kreiss about the rise of the "anti-reality" industry, which spreads misinformation and disinformation on topics ranging from climate change and COVID-19 to abortion and gender. This movement, backed by deep-pocketed conservative think tanks and lobbying groups, aims to undermine scientific integrity and promote anti-regulatory agendas.
Francesca told Scientific American that one goal of repeating the myths and disinformation is to activate “deep stories” that are told so often that they feel true. In her book, “The Propagandists’ Playbook,” Francesca also described how propagandists further sway the public through an “IKEA effect”:
“Savvy pundits and politicians appropriate or create keywords and phrases—like “woke-ism” and “groomer”—that they tie to false narratives. By widely disseminating the keywords, the storytellers can embed them in search engine results. Researchers have found that the top results for “abortion pill” commonly spread misinformation and disinformation. In other cases, fossil fuel companies have spent heavily on Google ads that resemble search results.... Tripodi said the DIY “discovery” increases the value of the information seeking—like a chair you assembled on your own—and reinforces the story’s ring of truth.”
“It’s not just antiscience, because many of these groups are backing their claims with what seems to be scientific inquiry,” Francesca noted. In a separate interview, Daniel called it a “performance of science in an effort to claim legitimacy”: these groups weaponize the process of academic peer review to raise doubt and delegitimize perceived opponents. “Getting people to talk publicly about what they or loved ones lost after being conned by false narratives can help drive home the very real and often very personal costs of all the lies.”
The article calls on scientists, journalists, and institutions to combat disinformation by unmasking the motives behind false narratives and supporting honest, science-based conversations. Tripodi’s and Kreiss’s insights highlight the importance of understanding and disrupting the anti-reality industry’s strategies in order to protect scientific integrity and public discourse.