Best of Intentions
The mastermind behind Macedonian teen troll farms; the quest to make political advertising on Facebook more transparent; and more.
This is civic tech: All the videos, interviews with speakers and delegates, and talk slides from The Impact of Civic Technology (TICTeC) 2018 conference are now posted online.
Net neutrality activists are rallying today in 70 congressional districts in the hopes of building momentum for a House resolution to restore the FCC rule, boosted by the news that Republican Mike Coffman of Colorado has signed on.
Life in Facebookistan: Yesterday, one of the world’s most powerful leaders gave an interview to a reporter where he spoke at length about his goals and vision, but within hours of the interview being published he was forced to issue a clarification on one of his most crucial statements. I’m not talking about Donald Trump’s flip-flop on Russia’s intervention in the 2016 election, but Facebook CEO Mark Zuckerberg needing to explain to Recode’s Kara Swisher that he personally finds Holocaust denial “deeply offensive.” He actually said those words in his initial interview, but he also explained that he didn’t believe content denying such things as the Sandy Hook massacre or, noting that he is Jewish, the Holocaust, should be automatically removed from Facebook since Holocaust deniers might not “intentionally” be getting their facts wrong. “The reality is also that I get things wrong when I speak publicly,” he added, with ironic truth and misplaced empathy for Holocaust deniers. In his clarification, he insisted that he didn’t intend to defend the intent of people who deny the Holocaust, but added, “Our goal with fake news is not to prevent anyone from saying something untrue — but to stop fake news and misinformation spreading across our services. If something is spreading and is rated false by fact checkers, it would lose the vast majority of its distribution in News Feed.”
Deborah Lipstadt, one of the world’s preeminent historians of the Holocaust, writes for CNN:
What Zuckerberg fails to understand — even though he claims this was not his aim — is that by saying deniers aren’t “intentionally” getting things wrong, he leaves open the possibility that they could be right. For someone with Zuckerberg’s massive profile and platform, this is breathtakingly irresponsible. Holocaust denial relies on such a robust set of illogical untruths that it is only possible to be a denier on purpose, contrary to what Zuckerberg says, intentionally…. This is extremism posing as rational discourse. And his statements suggest that Zuckerberg has been duped by them into thinking that they’re any different than someone who proudly wears a swastika.
Zuckerberg’s initial emphasis on the intentions of a speaker in his so-called community is reinforced in the company’s community standards policy, which supposedly bans hate speech but will allow the use of derogatory terms if the speaker’s intentions are clearly positive. Richard Allan, the company’s VP for public policy, writes, “For example, the use of the word ‘dyke’ may be considered hate speech when directed as an attack on someone on the basis of the fact that they are gay. However, if someone posted a photo of themselves with #dyke, it would be allowed. Another example is the word ‘faggot.’ This word could be considered hate speech when directed at a person, but, in Italy, among other places, ‘frocio’ (“faggot”) is used by LGBT activists to denounce homophobia and reclaim the word.”
Internal documents leaked to Motherboard’s Joseph Cox show that Facebook has a set of thresholds for determining when a page should be banned. For example, a page administrator has to receive 5 strikes within 90 days for the page to be deleted, but if 30 percent of the content posted within 90 days by other people on that page violates its standards, that too is cause for deletion.
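The reported rules amount to a simple two-threshold check. As a rough illustration only (the function and variable names here are hypothetical, not Facebook’s actual code, which is not public):

```python
# Sketch of the page-deletion thresholds reported by Motherboard.
# All names and structure are hypothetical; only the two numeric
# thresholds (5 admin strikes, 30% violating content, 90-day window)
# come from the leaked documents as described above.

ADMIN_STRIKE_LIMIT = 5        # admin strikes within the window
VIOLATING_SHARE_LIMIT = 0.30  # share of others' posts that violate standards
WINDOW_DAYS = 90              # both counts are measured over 90 days

def page_should_be_deleted(admin_strikes: int,
                           user_posts: int,
                           violating_user_posts: int) -> bool:
    """Return True if either reported 90-day threshold is met."""
    if admin_strikes >= ADMIN_STRIKE_LIMIT:
        return True
    if user_posts > 0 and violating_user_posts / user_posts >= VIOLATING_SHARE_LIMIT:
        return True
    return False
```

So under these reported numbers, a page whose admins drew four strikes while 29 of 100 user posts violated standards would survive, but a fifth strike, or one more violating post, would cross a line.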
So, to be clear where Facebook currently draws its editorial lines: No female nipples at all except when breastfeeding, but if you want to talk about Holocaust denial or claim that Sandy Hook was a government plot, that kind of speech isn’t hate speech that it will ban, just nasty speech that it won’t help promote algorithmically on News Feed. And it will allow a group of individual haters to use its platform to form a concentrated group, like the InfoWars page with its 900,000 followers, as long as less than 30 percent of the page’s content violates its standards. And it will happily profit from the ads they pay Facebook to run.
Facebook now says that in certain countries like Sri Lanka, Myanmar and India, where false information on the platform has inflamed underlying tensions and led to violence, it will begin clamping down on that content in partnership with local civil society groups, Sheera Frenkel reports for The New York Times.
New York Magazine’s Max Read writes that since Facebook has something close to sovereign power, it needs a Constitution—or rather “some kind of new power structure” to mediate its role in the many countries whose social fabric it now so casually warps.
Facebook’s new approach to political advertising continues to ensnare publishers with purely civic goals, as shown by this tweet from the Texas Tribune’s Amanda Zamora, complaining about the rejection of an ad promoting an effort to crowdsource questions for local candidates.
BuzzFeed News and ProPublica are partnering to track online political advertising on Facebook, and you can help, writes Craig Silverman. Just install their Political Ad Collector on your web browser and it will automatically and privately keep track of which ads you are being targeted with.
Election security: Writing for Politico, Kim Zetter says the indictment last week of 12 Russian intelligence officers raises new questions about whether Georgia’s election systems were hacked in 2016.
Related: House Republicans are refusing to include the renewal of election security funding through 2019 in a spending bill coming to the floor today, Erica Werner reports for The Washington Post.
Remember the story of the Macedonian teenagers who just created fake political news sites plumping for Trump because it was an easy way to make money? Now a group of BuzzFeed News reporters has found that the content farms were launched by a well-known local media attorney who was working closely with two high-profile American partners. “Macedonian security agencies are cooperating with law enforcement in the United States and at least two Western European countries to probe possible links between Russians, US citizens, and the pro-Trump ‘fake news’ websites, two senior Macedonian officials said.”
Speaking of misinformation, here’s a wonderful profile of digital sleuth Jonathan Albright by Issie Lapowsky in Wired.
Brave old world: A new study by Christoph Bartneck of the University of Canterbury in New Zealand finds that “people perceive robots with anthropomorphic features to have race, and as a result, the same race-related prejudices that humans experience extend to robots,” Evan Ackerman writes for the IEEE’s Spectrum magazine. Bartneck tells Ackerman, “Often robots are designed from the inside out, meaning that first all the functional parts of the robots are built and tested. Only at the end some sort of cover is added. How this cover affects the human users, or more broadly, how the robot as a whole is perceived by its users is more often than not only an afterthought. Therefore, racism has not been on the radar for almost all robot creators.”
As the debate over guns has intensified, the emojis for guns used by the major tech platforms have changed, from pistols to squirt guns, writes Jay Mollica for SFMoma.org, in an absolutely fascinating disquisition on the political side of online emoticons.
Housekeeping note: For the rest of the summer, First Post is going on an abbreviated schedule of two posts weekly, Tuesdays and Thursdays.