Errors of Omission

Google Nest hidden mic, Pinterest's solution to disinformation, and more


  • This is civic tech: Here’s a primer from our Danielle Tomson, director of the new Forums @ Civic Hall, on what to expect from Anil Dash, Matt Mitchell and Maurice Cherry as we gather February 28th to discuss the state of the internet in 2019.

  • Privacy, shmivacy: Business Insider’s Nick Bastone reports that Google’s Nest Guard, the alarm-keypad-and-motion-sensor hub of its Nest Secure home-security system, comes with an on-device microphone that consumers were never told about. The company, which is now using that microphone to bring its Google Assistant service to the device, claims that the microphone “was never intended to be a secret and should have been included in the tech specs. That was an error on our part.”

  • Gosh, a big tech company like Google not telling the public it was violating people’s privacy? That would never happen.

  • Back in 2016, an academic study of the apps and connected devices that tie into Nest found that a user who really wanted to know how much of their data was being shared with third parties would need to review nearly a thousand contracts.

  • New evidence from Google and the IAB shows that the real-time bidding system at the heart of ad tech broadcasts highly sensitive information about web users “billions of times a day” as companies bid for the opportunity to advertise to them, Fix Adtech reports. (For a sense of what actually gets broadcast, see the sketch below.)
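
    What does “broadcasting” mean here? A minimal sketch, modeled loosely on the IAB’s OpenRTB bid-request format: the field names echo that spec, but every value, and the Python framing itself, is illustrative, not drawn from the report.

      # A minimal, illustrative sketch of an OpenRTB-style bid request.
      # Field names echo the IAB OpenRTB spec; all values are invented.
      import json

      bid_request = {
          "id": "auction-f3a1c9",  # unique ID for this one ad auction
          "site": {"page": "https://example.com/health/depression-support"},
          "device": {
              "ip": "203.0.113.7",                    # user's IP address
              "geo": {"lat": 40.73, "lon": -73.99},   # approximate location
              "ua": "Mozilla/5.0 (Windows NT 10.0)",  # fingerprinting material
          },
          "user": {
              "id": "c0ffee-cross-site-cookie",  # cross-site tracking identifier
              "data": [{"name": "interests", "segment": [{"name": "mental health"}]}],
          },
      }

      # An exchange sends a request like this to every participating bidder,
      # win or lose -- which is how the data ends up "broadcast."
      print(json.dumps(bid_request, indent=2))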

  • It’s called sharenting: Kids entering their teens are beginning to discover, by Googling themselves, that their parents have been violating their privacy for years by posting highly personal stories and information about them, and they are not happy, Taylor Lorenz reports for The Atlantic.

  • Information disorder, continued: Researchers with Guardians.ai, a tech company focused on spotting and disrupting disinformation campaigns, have uncovered an ongoing coordinated effort to pump out negative and polarizing messages aimed at damaging several leading Democratic presidential candidates, Politico’s Natasha Korecki reports.

  • Even without external instigation, the polarization effects of social media appear to be getting worse. That’s the conclusion of an in-depth report by Sean Rossman, Jessica Guynn, Brad Heath and Matt Wynn for USA Today that examined the trajectory of the Covington Catholic school controversy across Twitter. Klon Kitchen, senior research fellow for technology, national security and science policy at the Heritage Foundation, told USA Today, “We are building thick bubbles of information around ourselves, where it’s always self-reinforcing, where we are losing any kind of perspective on alternative views and where we are very excited about participating in the public takedown of people’s reputations. It’s not the only time we’ve seen this and it won’t be the last.” (Agreed, except “we” didn’t build these filter bubbles; they were built by tech platforms whose business models require high engagement and deep data collection.)

  • Casey Newton raves about Pinterest’s solution to the anti-vaccination misinformation being posted on its site: since it has been unable to remove all the junk content, it has decided to stop returning results for searches related to vaccinations. Users can still pin fringe images questioning vaccination to their own boards, but Pinterest will no longer give them free viral distribution.

  • Life in Facebookistan: Harvard dropout Mark Zuckerberg sat down with Harvard Law professor Jonathan Zittrain for the first of his promised 2019 conversations on the role of the internet in society, and somehow the thing reads like two old Harvard pals at the club. The transcript is here. The two spend a lot of time talking about how Zuckerberg thinks that Facebook is actually working in the best interests of its users (Zittrain’s “fiduciary” role) because, as Zuck says, “at the end of the day…people choose to use it.” The word “choose” is telling. (A few minutes later in their conversation, Zuck referred to people who “chose” to give their data to a developer affiliated with Cambridge Analytica.) You have to read all the way to the end, where Zuck talks about his interest in “brain-computer interfaces” that would let people type just by thinking, and there’s that word again from Zuck: “I mean, presumably this would be something that someone would choose to use a product. I’m not– yeah, yeah. I mean, yes, there’s of course all the other implications, but yeah, I think that this is going to be– that’s going to be an interesting thing down the line.”

  • Chum, chum, chummerie: Also worth noting is a question from Zittrain hypothesizing about a future Facebook that might someday help advertisers target people when they are emotionally vulnerable, seemingly forgetting that Facebook has already been exposed for doing exactly that.

  • Zuck also says this with a straight face: “When we talk about privacy, I think a lot of the questions are often about privacy policies and legal or policy-type things, and privacy as a thing not to be breached, and making sure that you’re within the balance of what is good. But I actually think that there’s a much more– there’s another element of this that’s really fundamental, which is that people want tools that give them new contexts to communicate, and that’s also fundamentally about giving people power through privacy, not just not violating privacy, right? So not violating privacy is a backstop, but actually– you can kind of think about all the success that Facebook has had– this is kind of a counterintuitive thing– has been because we’ve given people new private or semi-private ways to communicate things that they wouldn’t have had before.”

  • End times: This sums things up pretty well—just watch all the way through.