Distributions of Experiences
Whose deaths (and movements) matter? Big data and ICE; Zuck unbound; and more.
This is civic tech: Nathaniel Manning is leaving Ushahidi after 8 years helping run the Kenya-based civic reporting platform (the last six as CEO), and his reflections on the experience are worth reading. Point one: tech procurement in the development sector is broken; as he writes, “we are incentivizing ‘pilotitis’” rather than supporting long-term tech implementations. Point two: funders have to stop trying to evaluate tech platforms by the same metrics that they use to judge programs that directly save lives or document human rights abuses. And third: “not every solution needs to have a market-based business model. Some software should just be open source and free.”
This Climate Clock, built at the request of Greta Thunberg, is counting down the time we have to cut global carbon emissions enough to give the world a 67% chance of keeping warming to under 1.5 degrees Celsius. And it comes with a widget you can use to add it to your own website.
Whose deaths matter? Do movements change how Americans think and talk about social issues? A new study by Ethan Zuckerman and colleagues from the Center for Civic Media used Media Cloud to examine US media coverage of the deaths of 343 unarmed Black men and women at the hands of police between 2013 and 2016. They found that after Michael Brown’s death in Ferguson, when the #BlackLivesMatter movement really took off, news attention to subsequent killings almost doubled. And while that wave has mostly subsided, news stories are now more likely to mention multiple victims—indicating that the media now treats these killings as part of a pattern.
Say hello to The Forge, a new online publication focusing on organizing strategy and practice, founded by Brian Kettenring of the Center for Popular Democracy.
Attend: TICTeC Local, focusing on the use of civic tech in local community settings, is coming up November 1 in London and the schedule is now posted.
Submit: The 2020 Code for America Summit, which is taking place in Washington DC March 11-13, is now accepting talk and panel proposals.
AI, Vey: In the past year, there’s been a lot of organizing in response to issues arising from the use of harmful AI-related technologies, as this nifty timeline built by Varoon Mathur, a fellow at the AI Now Institute, shows.
Related: An effort by Google to collect more faces of people of color to train a new facial recognition system is using a third-party contractor that is targeting homeless people and unsuspecting college students, offering them a $5 gift card and not telling them that their faces are being recorded, Ginger Adams Otis and Nancy Dillon report for The Daily News.
Training a single AI machine learning model can produce the equivalent carbon emissions of five cars used over their average lifetime, Karen Hao reports for MIT Technology Review.
Life in Facebookistan: A few takeaways from the portions of the leaked transcript of a July all-hands meeting Facebook CEO Mark Zuckerberg had with his employees, that Casey Newton of The Verge released Tuesday:
1. Zuck thinks the possible break-up of Facebook into smaller pieces, one of which would presumably still be called Facebook, is an “existential” threat, one he would “go to the mat” to prevent.
2. He’s not smart enough to realize that attacking presidential candidate Elizabeth Warren by name for wanting to break up big tech companies like his can only help her.
3. He is also promulgating a “too big to break up” argument regarding the global problems of election interference and hate speech, arguing that smaller companies like Twitter can’t devote anywhere close to the resources Facebook has to fighting those problems. By his logic, the best approach to preventing the spread of online misinformation or unacceptable speech would be China’s; after all, its government has even more resources than the largest private tech companies.
4. Olympian doesn’t begin to describe Zuck’s perspective on the lived experience of the people he employs. Asked about the reports of traumatized content moderators who deal with the never-ending flow of toxic content Facebook sends their way, Zuck called those reports “overdramatic,” adding, “Within a population of 30,000 people, there’s going to be a distribution of experiences that people have.”
Within a population of the very, very few people who have ever had near-absolute power over the organization they run, the distribution of experiences we’ve seen is uniformly negative. I’m thinking of Adam Neumann, most recently fallen from WeWork, and Joi Ito, fallen from the MIT Media Lab, and Travis Kalanick, fallen from Uber. See this for details.
New York Times columnist Kara Swisher comments drily, “I would like to see Mr. Zuckerberg do the job of a Facebook content moderator for a day and face a fresh hell every minute and then say” that reports of how traumatizing the work is are “overdramatic.”
Commentator Matthew Yglesias was one of many observers wondering if Zuckerberg would use his platform’s ability to tilt the political playing field against his erstwhile critic Warren, tweeting: “If you work at Facebook and have a conscience, note that when Facebook takes criticism from conservative politicians, Zuckerberg tries to change things up to appease critics whereas when it takes criticism from progressive politicians he vows to ‘go to the mat’ to fight them.” Of course, Facebook would never, ever, twiddle with the experience of its users in ways that might affect their voting behavior and not tell them, would it?
Tech critic Douglas Rushkoff (and Civic Hall member) reminds us, with this oped for CNN, that meaningful regulation—”a functional government that represents the interests of its citizens”—is what Facebook really fears.
The former head of content standards at Facebook, Dave Willner, is not happy with the company’s decision to allow politicians more leeway using offensive speech than it allows regular users, as Steven Levy reports for Wired. Levy writes, “Not only is Facebook avoiding hard choices, Willner says, it is betraying the safety of its users to placate the politicians who have threatened to regulate or even break up the company. ‘Restricting the speech of idiot 14-year-old trolls while allowing the President to say the same thing isn’t virtue. It’s cowardice.’” (It’s worth noting that Willner and his wife were the originators of a highly successful online fundraising campaign for RAICES, a front-line immigrant aid group in Texas, last year.)
Tech and the dark side: With only 6,100 officers in ICE’s Enforcement and Removal Operations Division and more than 10.5 million undocumented immigrants, big data has become the agency’s secret weapon, as this must-read feature by McKenzie Funk in the New York Times Magazine shows: “public records make clear that ICE, like other federal agencies, sucks up terabytes of information from hundreds of disparate computer systems, from state and local governments, from private data brokers and from social networks. It piggybacks on software and sharing agreements originally meant for criminal and counterterrorism investigators, fusing little bits of stray information together into dossiers. The work is regulated by only a set of outdated privacy laws and the limits of the technology.” Funk also shows how this transformation in ICE’s methods began under President Obama, whose administration initiated several key data and analytics contracts with companies like Palantir.
You are reading First Post, a twice-a-week digest of news and analysis of the world of civic tech, brought to you by Civic Hall, NYC’s community center for civic tech. If you are reading this because someone forwarded it to you, please become a subscriber ($10/m) and support our work, or sign up for our newsletter and stay connected with the #CivicTech community.