Algorithms and public school bus schedules; Why Facebook is like a nuclear reactor; and more.
This is civic tech: MIT’s Joi Ito offers a cautionary tale in Wired about a seemingly well-intentioned protest he participated in against an algorithmic approach to revising Boston’s public school bus schedules, which turned out to be misinformed. The protestors were nearly all white, the designers of the algorithm had been trying to solve for equity, and city administrators hadn’t done enough to help the public understand the process.
Civic Hall teams up with Cognizant and Per Scholas to pilot an updated model for teaching digital tech skills, as part of our development of Civic Hall @ Union Square, as Crain’s NY Business reports.
Jill Byers and Jason Hibbets report on the Code for America 2018 Brigade Congress that took place last month in Charlotte, North Carolina, and how the local volunteer civic tech movement is continuing to grow.
MuckRock’s Lucas Smolcic Larson reports on how cities across America are responding to the “micro-mobility revolution” with permitting requirements, data-sharing agreements and other regulations as e-scooter companies dump their products on urban sidewalks.
The 7th annual edition of Personal Democracy Forum Central Eastern Europe is slated for April 4-5 in Gdansk, Poland, and they’re calling for talk and workshop proposals now.
Life inside Facebookistan: On Friday, the president said that the latest round of news critical of his administration was “bullshit” and he blamed leaks on “bad morale” being caused by biased media attacks. We’re talking about Mark Zuckerberg, the CEO of Facebook and controller of roughly 60 percent of its voting shares, by the way. Deepa Seetharaman of the Wall Street Journal reports that he has told his top lieutenants that the company is “at war” and needs to “make progress faster” on two goals: securing the platform and improving user growth. Her story helps contextualize the departures of the co-founders of Instagram and WhatsApp, who have all left Facebook in recent months over disagreements regarding the monetization of their users. Because Facebook has finally and belatedly committed to spending significant sums on more content reviewers (its content-review workforce has ballooned to 30,000 after years of underinvestment because “scaling”), Zuckerberg is apparently putting extra pressure on these divisions to make more money so he can keep throwing resources into the breach of his main platform by, well, lots of bad actors.
In a fascinating and important new policy statement posted last week, Zuckerberg says that, “starting today,” Facebook is going to start publishing the minutes of the regular meetings his policy team has with outside experts, including academics and journalists, to discuss how its evolving policies on free expression impact different communities globally. The first meeting minutes make for fascinating reading, especially if you have read Malka Older’s science fiction novel Infomocracy and are curious about how a global information police agency with seemingly benevolent intentions might work. No one is named in the policy team minutes, by the way—hello transparency theater!
In the same statement, Zuckerberg adds: “The team responsible for enforcing these policies is made up of around 30,000 people, including content reviewers who speak almost every language widely used in the world. We have offices in many time zones to ensure we can respond to reports quickly. We invest heavily in training and support for every person and team. In total, they review more than two million pieces of content every day.” He also notes that his content review teams and systems still err 10 percent of the time, meaning a lot of bad content persists on the platform. (What he doesn’t note is that the people doing this work are exposed to horrible, toxic content on an hourly basis.) Zuckerberg also notes that in the last two quarters, FB removed more than 1.5 billion fake accounts from the site. That works out to spammers, hoaxers and other bad actors creating roughly 8 million fake accounts a day on Facebook.
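For readers who like to check the back-of-the-envelope math, here is a quick sketch of the per-day rates implied by Zuckerberg’s own figures (assuming “two quarters” means roughly 182 days):

```python
# Per-day rates implied by the figures in Zuckerberg's statement.
# Assumption: "two quarters" is approximately 182 days.

fake_accounts_removed = 1_500_000_000  # removed over two quarters
days = 182
fake_per_day = fake_accounts_removed / days
print(f"~{fake_per_day / 1e6:.1f} million fake accounts removed per day")

# A 10% error rate on 2 million daily content reviews:
daily_reviews = 2_000_000
error_rate = 0.10
errors_per_day = daily_reviews * error_rate
print(f"~{errors_per_day:,.0f} mistaken review decisions per day")
```

By this arithmetic, Facebook is removing on the order of 8 million fake accounts and making roughly 200,000 mistaken review calls every single day.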
Zuckerberg projects that by the end of 2019, “we expect to have trained our systems to proactively detect the vast majority of problematic content.” Translation: “we hope our artificial intelligence systems can tame the beast we created.” It may be helpful to think of Facebook as a nuclear reactor in the middle of a meltdown, except the operator won’t remove the fuel rods because the thing makes so much energy. Instead he keeps throwing more and more workers at trying to contain the damage, but radiation keeps leaking out.
The president of Facebookistan also announced that he is in the process of setting up an “independent body, whose decisions would be transparent and binding” to give his people a way to appeal content decisions. “Starting today, we’re beginning a consultation period to address the hardest questions, such as: how are members of the body selected? How do we ensure their independence from Facebook, but also their commitment to the principles they must uphold? How do people petition this body? How does the body pick which cases to hear from potentially millions of requests? As part of this consultation period, we will begin piloting these ideas in different regions of the world in the first half of 2019, with the aim of establishing this independent body by the end of the year.” As New York Times tech writer Kevin Roose commented, “Facebook is starting a judicial branch to handle the overflow for its executive branch, which is also its legislative branch, also the whole thing is a monarchy.”
Writing in the Washington Post, Facebook’s former chief security officer, Alex Stamos, confirms The New York Times’ report that COO Sheryl Sandberg was mad at him for briefing the company’s board about the infestation of Russian disinformation on the platform, and says that FB’s “public-communications strategy of minimization and denial” led to a “massive” loss of trust, but argues that the company isn’t the only guilty party—pointing a finger at the U.S. intelligence community for failing to provide tech platforms with actionable intelligence on Russia’s information warfare goals.
Pressure is still building on Facebook to do more than apologize for its hiring of Definers, a GOP opposition research firm, which targeted billionaire George Soros last year as part of a coordinated campaign to discredit criticism of the company. Color of Change, one of the groups targeted, has started a campaign pressing the company to fire Joel Kaplan, the DC-based Facebook executive responsible for Definers.
The Daily Beast’s Michael Tomasky says it’s time for Democrats to “crack the whip on Facebook.”
Vanity Fair’s Nick Bilton has apparently decided that he doesn’t want to be invited to any future parties at Sheryl Sandberg‘s house.
Gina Bianchini, the founder of past social networks like Ning, writes that the Facebook era is ending and the future will be distributed among private messaging platforms, vertical social networks (like her Mighty Networks?), and highly curated membership communities.
Tech and politics: The New Yorker’s Jane Mayer reports on new evidence linking Steve Bannon and Cambridge Analytica to the #Brexit vote in the UK.
In the Indian state of Chhattisgarh, the ruling BJP party has given away 2.9 million smartphones to residents, and as Vindu Goel and Suhasini Raj report for The New York Times, it is now running phone campaigns asking the recipients to vote for the BJP. The state’s chief minister’s smiling face is even set as the background image on the phones’ home screens.
Here’s how China successfully built a walled-off version of the Internet, as reported by Raymond Zhong of the New York Times.
Happy Thanksgiving to all—see you next week!