In a conversation about how we might shape tech platforms to better uphold democratic values, Tressie McMillan Cottom noted the importance of regulatory oversight:
I don't think that platforms can be counseled out of understanding their role in the reproduction of some of the uglier impulses of public life unless they are made to do so through regulation, governance, and probably fundamentally structuring the economic incentives for them not to do so.
This week, two excellently reported stories underscored the limits of tech companies’ self-regulation of ethical AI.
First, Rachel Metz covered Google’s Ethical AI team in the wake of its leaders’ departures, from the paper on “stochastic parrots” that catalyzed Timnit Gebru’s exit to the firing of her co-lead Margaret Mitchell and the restructuring of ethical AI work within the company:
With very few laws regulating AI in the United States, companies and academic institutions often make their own rules about what is and isn't okay when developing increasingly powerful software. Ethical AI teams, such as the one Gebru co-led at Google, can help with that accountability. But the crisis at Google shows the tensions that can arise when academic research is conducted within a company whose future depends on the same technology that's under examination.
Next, Karen Hao profiled Joaquin Quiñonero Candela of Facebook’s Responsible AI team, highlighting the team’s narrow focus on AI bias to the exclusion of other known, documented problems with the company’s algorithms—problems whose most promising solutions hindered engagement and growth metrics:
Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.
While regulatory approaches to governing artificial intelligence are still in discussion, Daniel Kreiss had one recommendation for where technology workers could focus their internal advocacy.
Recent publications and appearances
📄 New publication: CITAP affiliate Enrique Armijo has an upcoming piece in the Florida Law Review on “Reasonableness as Censorship: Section 230 reform, content moderation, and the First Amendment”
⬛ Want more on the online political advertising ‘black box’ from last week’s newsletter? CITAP affiliate Matt Perault and Scott Babwah Brennan have a follow-up piece in WIRED with recommendations on how to restore transparency.
🦠 “I think that we can confidently tell people that all three vaccines are excellent, that they look equally likely to stave off hospitalization and death—to almost completely, if not completely eliminate the possibility of those fates—and it makes sense to take the first available one… Then I congratulate them.” Zeynep Tufekci continues to translate vaccine efficacy for consumers.
👩🏿‍🎓 “What people tend to want from us, what an audience wants when they want to consume what Black women produce as our intellectual work, is they often want to consume our emotions and our experiences, which is not always the same as respecting our expertise and our intellectual contribution.” In honor of International Women’s Day, the MacArthur Foundation highlighted the voices of ten women within the ranks of their fellowship program, including Tressie McMillan Cottom.
📚 “When I cannot write, I take that to mean that I do not yet have anything to, well, say. It is a novel concept, but I do not write until I have a point of view or an argument. If I do not have either of those things, it means that I am not done reading.”
📰 Francesca Tripodi’s work on conservative evangelical media practices was cited in a Slate interview on the pitfalls of traditional assumptions about media and information literacy.
📺 Dr. Tripodi also appeared in “Media Literacy is Not a Silver Bullet,” saying “Many with[in] the right wing media ecosystem came to power by calling on audiences to engage in ‘media literacy’ and distrust journalists. In an interesting way, conservative media was among the first producers to stress to audiences not to trust what they hear and to ‘dig deeper’ to find the facts themselves.”
🍿 “When we directly measured the browsing habits of a diverse national sample of 915 participants, we found that more than 9 percent viewed at least one YouTube video from a channel that has been identified as extremist or white supremacist; meanwhile, 22 percent viewed one or more videos from ‘alternative’ channels, defined as non-extremist channels that serve as possible gateways to fringe ideas.” Brendan Nyhan cited Zeynep Tufekci’s work in his study of exposure to extremist content on YouTube.
🏖 “Throughout the past year, traditional and social media have been caught up in a cycle of shaming—made worse by being so unscientific and misguided.” One of Zeynep Tufekci’s pieces on pandemic mistakes we keep making was quoted in a Salon article about partisan responses to COVID-19.
📒 “...it’s not in the public interest to prosecute journalists for doing their job.” Faculty research affiliate David Ardia commented on the trial of Andrea Sahouri and the dangerous implications of criminally charging journalists for actions taken while reporting.
Coming soon
On March 17, Tressie McMillan Cottom will give a policy talk at the University of Michigan on modern discourse. Dr. Cottom will also be giving this year’s Ed Mignon Distinguished Lecture at the University of Washington Information School on April 13, and the keynote address for the Association of College & Research Libraries 2021 Conference, taking place April 13-16.
“Informal, Criminalized, Precarious: Sex Workers Organizing Against Barriers” – CITAP is presenting a webinar series with Hacking//Hustling, the Cornell Gender Justice Clinic, and the Berkman Klein Center in April.
…and for reading to the end, an excellent Twitter conversation on “I’m a developer, and I’m here to help” and the many ways it can go wrong.