Around the world, countries and corporations are rethinking their relationship with encryption. In the wake of terrorist attacks, legislation in India and Australia has sought to give law enforcement access to encrypted communications, in moves that could threaten the security of encryption around the world. In the United States, Apple has staked its reputation on protecting encrypted communications even when they belong to terrorists — while Facebook pledged this year to shift the company to private messaging.
The moves have exposed obvious tensions between free speech and safety. In an effort to move the discussion forward, the Stanford Internet Observatory today held a conference in which tech platforms, government agencies, nongovernmental organizations, civil rights activists, and academics met to hash it out. I was among a handful of journalists who attended the event, and I came away mostly encouraged that all sides are determined to find a workable balance — even though it seemed clear that each group would strike that balance somewhat differently.
The government agencies want to maintain what they call “lawful access” to communications when needed for investigations, even if it means hacking into devices. Civil rights groups (represented today by the Electronic Frontier Foundation) warned that law enforcement is building a powerful surveillance operation and is increasingly arguing in court that they shouldn’t need a warrant to snoop on our communications. Tech platforms want to promote democratic free speech of the variety that produced Black Lives Matter and the #MeToo movement while also helping to catch terrorists and child predators. And nongovernmental organizations, such as those who work on protecting exploited children, worry that efforts to protect speech with encryption will make catching those predators much harder.
An example from the National Center for Missing and Exploited Children drove the point home. The organization has long operated a tipline in which people can report coming across child pornography and other incidents of abuse. In the late 1990s, the tipline received 200 to 300 reports per week, said Michelle DeLaune, NCMEC’s chief operating officer. But as the internet gained adoption, and platforms began collaborating with the organization, reports to the tipline exploded. In 2018, NCMEC received more than 18 million reports of exploitative imagery.
Strikingly, 99 percent of those reports come directly from the tech platforms. Through the use of artificial intelligence, hashed images, and partnerships between companies, we’re now arguably much better informed about the scope and spread of these images — and are better equipped to catch abusers. A Facebook executive says that the company bans a whopping 250,000 WhatsApp accounts a month for sharing child exploitation imagery. And a representative of GCHQ, the United Kingdom intelligence agency, said that in the UK last year 2,500 people were arrested due to NCMEC reports.
“That’s what we lose if we get this wrong,” said Crispin Robinson, GCHQ’s technical director for cryptanalysis.
Meanwhile, the flip side of this discussion — the potential for government abuse of these tools — is on full display in Hong Kong. Maciej Ceglowski, the brilliantly acerbic writer-thinker-entrepreneur, recently returned from a month in the city reporting on the protests. He described how young pro-democracy protesters organize on Telegram, with the largely leaderless movement coordinating via in-app polls. Curiously, he said, the app has become popular even though its messages are not end-to-end encrypted by default. But it allows users to find nearby protesters, to speak to thousands of them at once, and to send disappearing messages that make prosecuting them harder if they are arrested — and that has been enough to make it an anchor of the pro-democracy movement.
Ceglowski’s talk underscored a point made throughout the day’s talks: that something can be secure even if it’s not encrypted, and something can be unsafe even if it is. As with so much in our conversations about technology, security, and democracy, encryption debates can be emotional in a way that undercuts nuance.
Alex Stamos, who came to the Internet Observatory from Facebook and who organized the event, reminded the audience that encryption solves another, growing problem for platforms. As countries demand that they remove more speech from their servers, it becomes desirable to remove products from speech debates entirely. A company can’t moderate what it can’t see — and so it may increasingly have an incentive not to see it. There are lots of public-minded reasons a company like Facebook wants to promote encryption — but there are nakedly self-interested ones, too.
No one in the room on Thursday morning offered a simple solution for squaring all these circles. But it was heartening, at least, that they came into the room and were willing to have at least some of these discussions in public.
Trending up: Facebook updated the values that inform its community standards. It’s a small thing, but it offers welcome clarity on the circumstances under which the company is willing to restrict freedom of expression.
Trending down: PewDiePie withdrew a planned $50,000 donation to the Anti-Defamation League after receiving backlash on Twitter. He told fans that he is not “personally passionate” about the charity, which … fights Nazis.
⭐ Federal regulators ordered Google to instruct employees that they can speak openly about political and workplace issues — including to the media — without facing retaliation. The mandate is part of a settlement of formal complaints regarding how Google has responded to these situations in the past, Rob Copeland reports at The Wall Street Journal:
The National Labor Relations Board’s move offers Google an escape hatch from a thorny issue that has roiled the business in recent years. Though Google executives have long bragged about having a workplace culture designed to encourage open debate, current and former employees across the political spectrum have complained that they were retaliated against for raising concerns about equality and freedom of speech.
The NLRB’s settlement comes in response to a pair of complaints about Google’s reaction to workplace dissent. The settlement orders Google to inform current employees that they are free to speak to the media—without having to ask Google higher-ups for permission—on topics such as workplace diversity and compensation, regardless of whether Google views such topics as inappropriate for the workplace.
Google agreed to pay $550 million to French authorities to settle a fraud probe into whether it dodged taxes. French authorities were investigating whether Google failed to report all its taxable activity in France. (Colin Lecher / The Verge)
A state attorney general who is investigating Google for antitrust violations makes his case in an op-ed. (Ken Paxton / Wall Street Journal)
Facebook’s Libra cryptocurrency will be blocked in Europe, the French finance minister said, citing threats to “monetary sovereignty.” The project also faces pushback in the U.S. and U.K. (Anthony Cuthbertson / Independent)
Facebook suspended a chatbot operated by Israeli Prime Minister Benjamin Netanyahu’s campaign team for violating hate speech rules after it sent a message saying Arab politicians “want to destroy us all.” (Isabel Kershner / New York Times)
Twitter went rogue on a new ad campaign, stenciling funny and lighthearted tweets on city sidewalks in New York and San Francisco. City officials threatened to fine the tech company if it didn’t remove the stencils immediately, noting it has the resources to pay for legitimate ads. (Ryan Kost / San Francisco Chronicle)
145 executives, including the CEOs of Airbnb, Uber, Reddit, and Twitter, sent a strongly worded letter to the Senate demanding action on gun violence. Among those who didn’t sign was Facebook’s Mark Zuckerberg. (Andrew Ross Sorkin / The New York Times)
LinkedIn CEO Jeff Weiner said regulating speech on social networks could have unintended consequences, such as stifling innovation. Weiner spoke out against proposed changes to Section 230 of the Communications Decency Act. (Kurt Wagner / Bloomberg)
Demand is growing for a digital watchdog agency to regulate the tech industry. Proponents say it would be more effective than breaking up big tech companies. Europe seems to be moving in this direction already. (Ben Brody / Bloomberg)
A U.S. lawyer prosecuting Huawei said a federal grand jury is investigating potential crimes related to a Xiamen University professor who is charged with stealing trade secrets. (Patricia Hurtado / Bloomberg)
⭐ Google adjusted its algorithm to boost original reporting in search results. The move could incentivize publications to focus on publishing fresh content rather than aggregating old reporting. Sara Fischer quotes Google’s chief news executive on the news at Axios:
“In one section, the guidelines encourage raters to use the highest rating for reporting that provides information that would otherwise not be known if the article didn’t report it out. We also ask them to look into whether a news organization has a history of high-quality original reporting. We ask raters to go behind the article, where it’s coming from, who wrote it, etc.” — Richard Gingras, Google’s Vice President of News
Amazon is opening its crowdsourced Alexa Answers program — which lets users add answers to questions that Alexa doesn’t know — to anyone. (It’s been in a closed beta since December.) The system seems to lack a formal fact-checking mechanism, relying instead on users rating answers and flagging those that seem incorrect. (Chaim Gartenberg / The Verge)
Oculus founder Palmer Luckey’s virtual border wall start-up Anduril is valued at more than $1 billion after a new round of fundraising. (Salvador Rodriguez / CNBC)
Google Photos launched a new feature called “Memories” that brings back old photos and videos on their anniversaries in a format that closely resembles Instagram Stories. (Casey Newton / The Verge)
It only took a day for a new collaborative art project between Adobe and Reddit to be overrun by racist content. The project, hosted on a subreddit called Layer, allows users to post drawings and images on a shared canvas. (Ignacio Martinez / The Daily Dot)
AND FINALLY …
I believe this is what they call a sign of the times.