Disinfo Wars

Hong Kong protest tech; Info disorder strikes home; Twitter goes Bluesky; and much more


This is civic tech: Here’s a comprehensive report by our own Matt Stempeck and Fiona Teng 鄧穎恆 on all the different technologies and digital tactics being used by the Hong Kong democracy protest movement, which is now more than six months old and still going strong.
 
Global Voices is 15 years old, which is a millennium in internet time. A big tip of the hat to Georgia Popplewell, Ivan Sigal, and the site’s huge community of contributors, who have collectively published nearly 100,000 posts since its beginning. Make a donation to help it keep going!
 
The How We Fix It podcast has a new episode up focused on Civic Hall member Ivelyse Andino and Radical Health, her Bronx-based health equity social enterprise that uses an app powered by artificial intelligence and community conversations to help black and brown pregnant women and new mothers understand their health care rights, build trust, and develop self-advocacy. Give it a listen and subscribe—How We Fix It is planning to do more ongoing coverage of civic tech.
 
Twitter CEO Jack Dorsey announced yesterday that the company is funding a small independent team of developers called Bluesky to “develop an open and decentralized standard for social media” that Twitter itself would someday shift to. This could be promising.
 
Speaking of Twitter, here are two interesting Twitter accounts that I just learned about:
@CrowdfundCoops, an account that tweets whenever there is a crowdfunding campaign for a cooperative anywhere in the world; and @anti_ring, a project aiming to counter Amazon’s Ring surveillance network.
 
Speaking of Ring, there’s been a rash of stories about men hacking into the home security cameras and using them to spy on and taunt people in their own homes, Bridget Read reports for The Cut.
 
Media matters: On Tuesday, I attended the morning half of “Disinfo 2020: Prepping the Press,” a day-long conference held by the Columbia Journalism School exploring how disinformation will affect the upcoming election. A number of impressive and valuable speakers offered their insights, including the New Yorker magazine’s Masha Gessen (who argued that not only was it reasonable for democratic governments to expect private platforms to subject paid speech to standards, it was time we stopped relying on private platforms to perform vital public media functions), and Syracuse University professor Whitney Phillips (who reminded the journalists in the audience how tricky it is to report on misinformation without also amplifying it, especially as people already inclined to believe an untruth will become more convinced of their belief upon encountering a debunking of it). Here’s Phillips’ excellent essay on the problem in the Columbia Journalism Review’s new issue on disinformation.
 
During a panel on the “new mechanics of voter suppression,” an audience member who identified with American Descendants of Slavery (ADOS) interrupted expert speaker Shireen Mitchell, who had been explaining how issues like reparations had been used by the Russian Internet Research Agency in 2016 to try to depress potential support for Hillary Clinton in the black community. (The interruption takes place here during video of the event.) Shortly after, other people claiming to speak for ADOS denied any connection to the interruption, leading one person in the audience, Harvard Berkman Center fellow Mutale Nkonde, to worry publicly that an event devoted to fighting disinformation was itself being hijacked by disinformation.
 
Unfortunately, the fight against disinformation has never been a clean one—especially when traditional authorities like governments and corporations have so often broken the public trust. This has led to a situation where public skepticism sometimes goes overboard and people over-correct with excessive assumptions about the behavior of powerful actors. That’s what I felt as I left the conference, just after keynote speaker Carole Cadwalladr, the investigative journalist who broke open the Cambridge Analytica scandal, declared that “a white billionaire is suppressing black votes,” referring to Mark Zuckerberg and Facebook.
 
For a much more nuanced examination of social media’s role in voter suppression, read GWU professor Dave Karpf’s new essay in MediaWell, the Social Science Research Council’s new forum for “research on the digital edges of democracy.” After detailing the efforts of the IRA’s “Blacktivist” account on Facebook, which amassed 360,000 likes, more than the verified Black Lives Matter account, Karpf writes:

The Blacktivist Facebook page is clear evidence that the Russian government sought to amplify and exploit racial strife in US politics. But strategic intent is not strategic impact. And the ease with which researchers can now assemble and visualize data on these influence operations can mask the difficulty in assessing what the numbers actually indicate. Some (likely significant) portion of Blacktivist’s shares, likes, and comments came from the IRA’s own click farmers in St. Petersburg. Those click farmers are densely clustered. They share, like, and comment on one another’s posts—pretend Ohioans promoting the content of pretend Michiganders, increasing exposure on the Facebook newsfeeds of pretend South Dakotans. But Russian click farmers do not cast ballots. They do not turn out to public hearings. When a separate IRA-backed account used Facebook to promote offline anti-immigration protests, it was hailed as proof of the dangers posed by these foreign disinformation operations. Yet it is also worth noting that barely anyone showed up to those offline protests.

But after offering that necessary grain of salt, noting that online disinformation campaigns may not directly alter voter behavior—contrary to Cadwalladr’s inflammatory rhetoric—Karpf also writes, “Much of the attention paid by researchers, journalists, and elected officials to online disinformation and propaganda has assumed that these disinformation campaigns are both large in scale and directly effective. This is a bad assumption, and it is an unnecessary assumption. We need not believe digital propaganda can ‘hack’ the minds of a fickle electorate to conclude that digital propaganda is a substantial threat to the stability of American democracy.” Read the whole thing; there’s more good stuff.
 
Related: Nearly 90% of the ads posted by the UK’s Conservative Party in the lead-up to today’s national election have been flagged as misleading by a leading fact-checking organization, First Draft’s Alastair Reid and Carlotta Dotto report. The ads feature claims about the National Health Service and income tax cuts that Full Fact, an organization that Facebook works with for fact-checking (but not for political ads, remember!), has tagged as false or misleading.
 
A new research study finds that Facebook will charge a campaign more when it tries to speak to people who don’t agree with it than when it addresses people who do, Isaac Stanley-Becker writes for the Washington Post. Facebook responded that it is only showing ads to the people they will be most relevant to, but the researchers argue it is “wielding significant power over political discourse through its ad delivery algorithms without public accountability or scrutiny.”
 
YouTube has announced changes to its anti-harassment policies barring video makers from insulting others on the basis of their race, gender expression, or sexual orientation, Casey Newton reports for The Verge.
 
Not only is Alexa listening to you; humans working for Amazon spend a lot of time listening as well, with really creepy effects all around, Austin Carr, Matt Day, Sarah Frier, and Mark Gurman report for Bloomberg.

You are reading First Post, a twice-a-week digest of news and analysis of the world of civic tech, brought to you by Civic Hall, NYC’s community center for civic tech. If you are reading this because someone forwarded it to you, please become a subscriber ($10/m) and support our work.