Scaling Back

Why civic tech sometimes shouldn't scale; how info wars are spreading worldwide; and more.


  • This is civic tech: Say hello to Women in Civic Tech, a new community organized by Phone2Action of “women who are passionate about the power of technology to improve the relationship between people and government.”

  • Here’s a concise and important thread from veteran Bahraini human rights organizer and techie Esra’a Al Shafei on why the notion that good civic tech needs to scale is often wrong. One key point: “When a platform is carefully designed to focus on a very specific need and target community, the potential for impact is far greater than attempting to multiply it in different contexts where these ideas are either unnecessary or irrelevant.”

  • Sarah Jamie Lewis, the executive director of Canada-based Open Privacy, chimes in with her own experience in support, writing “One funder rejected one of our proposals with the sentence ‘we don’t believe your approach of focusing on unrepresented, marginalized communities will scale.’ That was the moment I fully gave up on a large part of the existing ‘human rights’ space.”

  • Adi Eyal, the founder of OpenUp (formerly Code for South Africa), reflects on the lessons he’s learned helming a civic tech organization.

  • Apply: Nesta, Wellcome Trust, the Cloudera Foundation, and Omidyar Network are collaborating to offer grants of up to £30,000 for experiments that generate new knowledge on how to advance collective intelligence to solve social problems.

  • Organizing for democracy: Sarah Jaffe reports for The American Prospect on how the Labour Party’s community organizing unit is trying to rebuild a local base for the party at a time when many newer political formations function more like email lists connected to a charismatic leader.

  • Zahra Hirji reports for BuzzFeed News on how a new generation of teenage climate activists, many of whom are girls, is being harassed online. “They have faced a barrage of daily insults, seemingly coordinated attacks (like the one that targeted Lilly), creepy DMs, doxing, hacked accounts, and death threats. This is the new normal for young climate leaders online, according to BuzzFeed News interviews with nearly a dozen of the kids and their parents.”

  • Information disorder: Some of the top-line findings from a major new report from the Oxford Internet Institute on how countries around the world are using digital disinformation:

    – Organized social media manipulation has more than doubled since 2017, with 70 countries using computational propaganda to manipulate public opinion.

    – In 45 democracies, politicians and political parties have used computational propaganda tools, amassing fake followers or spreading manipulated media to garner voter support.

    – In 26 authoritarian states, government entities have used computational propaganda as a tool of information control to suppress public opinion and press freedom, discredit criticism and oppositional voices, and drown out political dissent.

  • TikTok may be the hot new social media platform, but as Alex Hern reports for The Guardian, it also censors content that might be embarrassing to the Chinese government.

  • At least 38 hate groups and hate figures, or their political campaigns, have paid Facebook nearly $1.6 million to run 4,921 sponsored ads, from May 2018, when Facebook began publishing its archive of political and social advertisements, to September 17, 2019, Alex Kotch reports for Sludge. This is despite Facebook’s community standards banning hate speech.

  • Privacy, shmivacy: Amazon is working on facial recognition regulation guidelines that it hopes federal lawmakers will adopt, Jason Del Rey reports for Vox. “It’s a perfect example of something that has really positive uses, so you don’t want to put the brakes on it,” the company’s CEO Jeff Bezos said. “But, at the same time, there’s also the potential for abuses of that kind of technology, so you do want regulations. It’s a classic dual-use kind of technology.”

  • Correction: In an item about ImageNet in Tuesday’s First Post, I stated that the site had removed about half of its 1.2 million facial images in response to the ImageNet Roulette art project highlighting how people’s faces might be mischaracterized. In fact, ImageNet started removing the images two months before the Roulette site went public. More details here.

You are reading First Post, a twice-a-week digest of news and analysis of the world of civic tech, brought to you by Civic Hall, NYC’s community center for civic tech. If you are reading this because someone forwarded it to you, please become a subscriber ($10/m) and support our work, or sign up for our newsletter and stay connected with the #CivicTech community.