How autocratic governments use Facebook against their own citizens

Last month, Facebook discovered evidence of a coordinated influence campaign on its platform led by groups in Iran. On Tuesday, a pair of investigations cast new light on other ways that autocratic governments are using Facebook to terrible ends: creating brigades of influencers and paid troll armies to suppress dissent and deny the reality of human-rights atrocities within their own countries.

In The New York Times, Declan Walsh and Suliman Ali Zway examine how the “keyboard warriors” of Libya use Facebook to hunt and kill their enemies. “Armed groups use Facebook to find opponents and critics, some of whom have later been detained, killed or forced into exile, according to human rights groups and Libyan activists,” they write. “Swaggering commanders boast of their battlefield exploits and fancy vacations, or rally supporters by sowing division and ethnic hatred. Forged documents circulate widely, often with the goal of undermining Libya’s few surviving national institutions, notably its Central Bank.”

Of course, it’s easier to hunt and kill your enemies when you can buy your weapons using the same platform you’re hunting them on:

The New York Times found evidence of military-grade weapons being openly traded, despite the company’s policies forbidding such commerce. Human traffickers advertise their success in helping illegal migrants reach Europe by sea, and use their pages to drum up more business. Practically every armed group in Libya, and even some of their detention centers, have their own Facebook page. […]

“The most dangerous, dirty war is now being waged on social media and some other media platforms,” Mahmud Shammam, a former information minister, said last week as fighting ripped through the Tripoli suburbs. “Lying, falsifying, misleading and mixing facts. Electronic armies are owned by everyone, and used by everyone without exception. It is the most deadly war.”

Meanwhile in the Philippines, BuzzFeed’s Davey Alba finds that the autocrat Rodrigo Duterte has made Facebook a highly effective tool for harassing critics and fostering a general sense of unreality. That’s been helpful for covering up the country’s estimated 12,000 extrajudicial state-sponsored killings since Duterte took office.

The broad outlines of the story of Duterte and Facebook were laid out nine months ago in a beautifully reported piece by Lauren Etter in Bloomberg. Alba’s story advances it by focusing on how three influential Duterte fans, one of whom became a paid government spokeswoman, coordinate to spread misinformation and targeted harassment against the strongman’s political opponents:

Nieto does publish news as well, both to his blog and directly on Facebook, where he posts “10 to 20 times a day,” he told BuzzFeed News. That news is typically unverified; sometimes it’s demonstrably inaccurate. Beyond the conspiracies noted above, Nieto has misquoted Canadian Prime Minister Justin Trudeau in a way that made it appear Trudeau supported a massive garbage dump in the Philippines. He’s promoted a falsified 1979 psychiatric report on the former Philippine president Noynoy Aquino, which claimed that the reason Aquino wanted to become president was “to heap a measure of revenge” on those who imprisoned his father, Benigno Aquino Jr., the rival of the late dictator Ferdinand Marcos, and a national hero who was assassinated in 1983. Nieto has also tried to artificially deflate the number of Filipinos murdered in Duterte’s bloody war on drugs. He has used Facebook Live footage of child autopsies in a crusade to blame a health crisis on the former administration.

Nieto speaks to an audience of more than 2 million Facebook followers. Each of his posts gets thousands of likes and shares, consistently more than the political commentators he’d be most comparable to in the US. He touts all this as evidence that everything is just fine in the Philippines. “They’re saying that freedom of speech is under threat. No,” he said. “It’s never been more democratic.”

The focus at tomorrow’s hearings in Congress — more on those below — will be on how foreign countries can use tech platforms to create discord here in America. But reading these investigations, I’m left wondering what authority will ask companies about the ways in which countries use their platforms against their own citizens.

Hearings

The tech platforms return to Congress on Wednesday for two hearings. In the morning, the Senate Select Committee on Intelligence will talk to Facebook’s Sheryl Sandberg, Twitter’s Jack Dorsey, and an empty chair meant to shame Alphabet for not sending CEO Larry Page. And in the afternoon, the House Energy and Commerce Committee will yell at Jack Dorsey for an extended period of time.

A couple good previews are below, along with links to speakers’ testimony, all of which will sound familiar to anyone who reads this newsletter. The only interesting bit was this, from Dorsey’s testimony:

In preparation for this hearing and to better inform the members of the Committee, our data scientists analyzed Tweets sent by all members of the House and Senate that have Twitter accounts for a 30 day period spanning July 23, 2018 until August 13, 2018. We learned that, during that period, Democratic members sent 10,272 Tweets and Republican members sent 7,981. Democrats on average have more followers per account and have more active followers. As a result, Democratic members in the aggregate receive more impressions or views than Republicans.

Despite this greater number of impressions, after controlling for various factors such as the number of Tweets and the number of followers, and normalizing the followers’ activity, we observed that there is no statistically significant difference between the number of times a Tweet by a Democrat is viewed versus a Tweet by a Republican. In the aggregate, controlling for the same number of followers, a single Tweet by a Republican will be viewed as many times as a single Tweet by a Democrat, even after all filtering and algorithms have been applied by Twitter. Our quality filtering and ranking algorithm does not result in Tweets by Democrats or Tweets by Republicans being viewed any differently. Their performance is the same because the Twitter platform itself does not take sides.
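
Twitter hasn’t published the underlying analysis, but the normalization Dorsey describes (dividing each tweet’s views by the author’s audience, then testing whether the parties differ) can be sketched roughly like this. The column names, the input file, and the choice of a two-sample t-test are my assumptions, not Twitter’s actual method:

```python
# Hypothetical sketch of the kind of normalization Dorsey describes.
# Column names, the input file, and the t-test are assumptions, not Twitter's method.
import pandas as pd
from scipy import stats

# Placeholder dataset: one row per tweet, with the author's party,
# the tweet's impressions, and the author's follower count.
tweets = pd.read_csv("congress_tweets.csv")  # columns: party, impressions, followers

# Normalize reach by audience size so large accounts don't dominate.
tweets["views_per_follower"] = tweets["impressions"] / tweets["followers"]

dem = tweets.loc[tweets["party"] == "D", "views_per_follower"]
rep = tweets.loc[tweets["party"] == "R", "views_per_follower"]

# Is there a statistically significant difference in normalized reach?
t_stat, p_value = stats.ttest_ind(dem, rep, equal_var=False)
print(f"mean D = {dem.mean():.4f}, mean R = {rep.mean():.4f}, p = {p_value:.3f}")
```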

Sheryl Sandberg’s New Job Is to Fix Facebook’s Reputation — and Her Own

Betsy Morris, Deepa Seetharaman and Robert McMillan look at how Facebook’s rough couple of years has chipped away at Sheryl Sandberg’s image as the consummate problem solver. It’s a good look at how her role has changed as she prepares to go before Congress:

Urged by his board to be more proactive, Mr. Zuckerberg quietly asked her to lead the company’s efforts to identify and prevent future blowups on the platform. The new job, insiders say, is at least as challenging as the company’s transition to mobile several years ago, which was late and rocky. Ms. Sandberg’s role is likely to be complex, expensive and thankless, people close to the company say, with any failures very public.

Twitter’s Jack Dorsey, Facebook’s Sheryl Sandberg testify in Washington: Preview

In his walk-up to tomorrow’s hearings, Peter Kafka worries for Jack Dorsey:

The best-case scenario for Dorsey is a very long day on Capitol Hill. But there are lots of ways for this to go badly for him. Part of this is a matter of seasoning and temperament: Dorsey does some public appearances, but he isn’t a professional talker. And when he does talk, he tends to approach questions with what can scan as a … detached affect. The bigger problem: While Dorsey and Twitter are well-versed in handling questions about election interference, the bias story is a new one, and Dorsey is going to spend an entire afternoon, by himself, handling it, at a session dedicated to “Twitter: Transparency and Accountability.”

Senator Mark Warner Is Not Happy With Google

Issie Lapowsky talks to the vice chairman of the Senate Intelligence Committee. He’s mad Google isn’t sending Larry Page to tomorrow’s hearings:

I was going to ask them why Google is building a search engine for China to allow Chinese censorship. Maybe they don’t want to answer some of those questions. But if Google thinks we’re just going to go away, they’re sadly mistaken. I’ve had a great working relationship with Google over the years, but I’ve been generally surprised that they might not want to be part of the conversation about how we fix this and get solutions.

Our testimony to the U.S. Senate Select Committee on Intelligence

Here’s Google’s testimony.

Read Facebook COO Sheryl Sandberg’s opening statement to Congress

Here’s Facebook’s testimony.

Testimony of Jack Dorsey

Here’s Dorsey’s testimony.

Democracy

Inside Twitter’s Long, Slow Struggle to Police Bad Actors

Georgia Wells and Kirsten Grind made waves over the weekend with a story that said Jack Dorsey personally weighs in on decisions like whether to ban Alex Jones. Twitter denied Dorsey does this, which was somehow even stranger. Like, the CEO just threw his hands up and said “y’all figure it out”? C’mon. Dorsey was more equivocal when Politico asked him about this on Tuesday: “I ask questions. I don’t think I’ve ever overruled anything,” he said.

Jon Kyl, Former Senator, Will Replace John McCain in Arizona

Jon Kyl, who is currently leading the “investigation” into complaints of conservative bias at Facebook, will have John McCain’s old Senate seat until 2020.

FCC chairman says Twitter, Facebook, Google may need transparency

Ajit Pai wrote a bad-faith Medium post calling on tech platforms to be more “transparent” about their decisions, which is really just a way of pressuring them to promote conservative voices, writes my colleague Jake Kastrenakes:

But Pai comes at it from the same approach as President Trump, cherry-picking examples to make it seem like these are liberal companies out to silence conservative voices, rather than platforms keeping their sites safe. One example he pulls out is YouTube demonetizing videos from PragerU, a nonprofit (which is not a university) that the Southern Poverty Law Center described as offering “dog whistles to the extreme right.” Among the videos pulled were several with Islamophobic titles like “Is Islam a Religion of Peace?”

Fringe Figures Find Refuge in Facebook’s Private Groups

Kevin Roose looks at how Facebook Groups, lately positioned as a potential solution to some of the company’s problems, can enable more bad behavior than public posts:

When it comes to more private forms of communication through the company’s services — like Facebook groups, or the messaging apps WhatsApp and Facebook Messenger — the social network’s progress is less clear. Some experts worry that Facebook’s public cleanup may be pushing more toxic content into these private channels, where it is harder to monitor and moderate.

Misinformation is not against Facebook’s policies unless it leads to violence. But many of the private groups reviewed by The New York Times contained content and behavior that appeared to violate other Facebook rules, such as rules against targeted harassment and hate speech. In one large QAnon group, members planned a coordinated harassment campaign, known as Operation Mayflower, against public figures such as the actor Michael Ian Black, the late-night host Stephen Colbert and the CNN journalist Jim Acosta. In the Infowars group, posts about Muslims and immigrants have drawn threatening comments, including calls to deport, castrate and kill people.

This Group Posed As Russian Trolls And Bought Political Ads On Google. It Was Easy.

“Google says it’s securing its ad platform against foreign meddlers, but for just $35 researchers posing as Russian trolls were able to run political ads without any hurdles,” Charlie Warzel reports.

The Real Story Behind The Anti-Immigrant Riots Rocking Germany

J. Lester Feder and Pascal Anselmi examine the role that Facebook has played in recent racial violence in Germany:

One of the engines for pumping out false information about the Chemnitz killing was the Facebook page of a group called Pro-Chemnitz, which has three seats on the local city council and organized the protest on Monday that ended in mob violence. In calling for the protest, it claimed the victim in Sunday’s stabbing was “a brave helper who lost his life trying to protect a woman.” The post is still online.

The group knows just how important Facebook is to its political fortunes. “We are completely social-media based,” said Benjamin Jahn Zschocke, the group’s spokesperson. “If our Facebook page were to be deleted, we would disappear completely.”

Inside Facebook’s ‘arms race’ to protect users ahead of midterm elections

Jo Ling Kent and Michael Cappetta talk to Samidh Chakrabarti, who helps lead Facebook’s fight against influence campaigns, and learn that Facebook is building a physical “war room” to monitor threats in real time. (There’s a picture of the current “war room,” and it looks like a standard Facebook conference room.) Elsewhere, CNN talks to Facebook’s “top troll hunter,” Nathaniel Gleicher, who makes similar noises.

With less than two months to go, Chakrabarti said Facebook is “much more effective than we used to be” and the entire company is “laser focused on getting it right.” He also revealed new details on Facebook’s plans to build a physical “war room” to coordinate a real-time response to nefarious activity during the midterms.

India Pushes Back Against Tech ‘Colonization’ by Internet Giants

India may follow the European Union in passing strict new laws against tech platforms, Vindu Goel reports:

The proposals include European-style limits on what big internet companies can do with users’ personal data, a requirement that tech firms store certain sensitive data about Indians only within the country, and restrictions on the ability of foreign-owned e-commerce companies to undercut local businesses on price.

The policy changes unfolding in India would be the latest to crimp the power — and profits — of American tech companies, and they may well contribute to the fracturing of the global internet.

Tech Giants Now Share Details on Political Ads. What Does That Mean For You?

Natasha Singer has a helpful explainer on how to use the tech giants’ new political ads databases:

None of the archives is currently designed to search for phrases. That means, for instance, if you search the Facebook archive for “don’t go to vote” — a phrase that a Kremlin-linked group employed in a Facebook ad discouraging users from going to the polls — you’ll end up with thousands of resulting ads that used the word “vote.”

On Facebook, you’ll need to search by the name of the candidate or political issue you’re looking for. On Google, search under the candidate’s or advertiser’s name. On Twitter, look for the name of the account the ad ran under. Once you get results, you can click an individual ad to learn more.
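
To make the distinction concrete: these archives match individual keywords rather than exact phrases. Here is a toy sketch (not how any of the real archives is implemented) of why a phrase like “don’t go to vote” surfaces every ad that merely contains “vote”:

```python
# Toy illustration of keyword matching vs. exact-phrase matching.
# Not how any of the real archives works; it only shows why a phrase query
# can return every ad containing the word "vote".
ads = [
    "Don't go to vote on Tuesday",       # the discouraging wording
    "Remember to vote early this year",
    "Our candidate needs your vote",
]

query = "don't go to vote"

# Keyword search: match if ANY query term appears in the ad text.
keyword_hits = [ad for ad in ads
                if any(term in ad.lower() for term in query.lower().split())]

# Phrase search: match only if the exact phrase appears.
phrase_hits = [ad for ad in ads if query.lower() in ad.lower()]

print(len(keyword_hits))  # 3 -- every ad that mentions "vote" (or even just "to")
print(len(phrase_hits))   # 1 -- only the exact wording
```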

U.S. accuses China of ‘super aggressive’ spy campaign on LinkedIn

Warren Strobel and Jonathan Landay report that there are people out there who actually enjoy using LinkedIn. They work for Chinese espionage agencies:

LinkedIn “is a very good site,” Evanina said. “But it makes for a great venue for foreign adversaries to target not only individuals in the government, formers, former CIA folks, but academics, scientists, engineers, anything they want. It’s the ultimate playground for collection.”

Elsewhere

Instagram is building a standalone app for shopping

It’s only the 94th most important Facebook story of the day, but this is still an interesting scoop from (checks notes) Casey Newton:

Instagram is working on a new standalone app dedicated to shopping, The Verge has learned. The app — which may be called IG Shopping — will let users browse collections of goods from merchants that they follow and purchase them directly within the app, according to two people familiar with the matter. Instagram declined to comment.

It could not be learned when the app might launch. Its development is still ongoing, and it could be canceled before it is released. But sources familiar with its development say Instagram believes it is well positioned to make a major expansion into e-commerce.

Alex Jones Said Bans Would Strengthen Him. He Was Wrong.

Deplatforming works, Jack Nicas reports:

In the three weeks before the Aug. 6 bans, Infowars had a daily average of nearly 1.4 million visits to its website and views of videos posted by its main YouTube and Facebook pages, according to a New York Times analysis of data from the web data firms Tubular Labs and SimilarWeb. In the three weeks afterward, its audience fell by roughly half, to about 715,000 site visits and video views, according to the analysis.

Can You Spot the Deceptive Facebook Post?

Keith Collins and Sheera Frenkel put together a fun quiz in which you try to guess which posts are authentic and which come from influence campaigns. I got every single one correct, and if you read this newsletter I bet you will as well!

In India, Google races to parry the rise of Facebook

Google had a big head start as an advertising business in India, but Facebook is eating its lunch, Paresh Dave and Sankalp Phartiyal report:

Facebook’s success has shaken Alphabet Inc’s Google, led by an Indian-born CEO, Sundar Pichai, who has made developing markets a priority.

Google officials in India earlier this year were alarmed to learn that Facebook Inc was likely to generate about $980 million in revenue in the country in 2018, according to one of the sources. Google’s India revenues reached $1 billion only last year.

An Army Director Hired To A Top Immigration Post Spewed Anti-Muslim Comments On Facebook — Then He Lost The Job

We hear so often about people who get fired over their tweets, and almost never about people who get fired for their Facebook posts. Well, here is someone who got fired for their Facebook posts:

Guy Sands-Pingot, who was at one point a brigadier general, was tapped to be deputy director of US Citizenship and Immigration Services and was slated to begin in mid-September. Sands-Pingot would have served under an administration that is seeking to significantly cut back on the number of legal and undocumented immigrants and has, with the travel ban, targeted Muslim-majority countries. He would have also helped oversee an agency that recently created a denaturalization unit.

In October 2015, Sands-Pingot posted a link to an article from GOPTheDailyDose.com on Facebook with the headline “If you wipe your butt with your bare hand but consider bacon to be unclean, you may be Muslim.” The link on the page was dead but the headline has been part of anti-Muslim jokes spread on the internet for years, such as “If you were amazed to discover that cell phones have uses other than setting off roadside bombs, you may be a Muslim.”

Unpaid and abused: Moderators speak out against Reddit

Reddit isn’t doing enough to insulate its moderators from the abuse they suffer, reports Benjamin Plackett:

In a joint investigation, Engadget and Point spoke to 10 Reddit moderators, and all of them complained that Reddit is systematically failing to tackle the abuse they suffer. Keeping the front page of the internet clean has become a thankless and abusive task, and yet Reddit’s administration has repeatedly neglected to respond to moderators who report offenses.

Facebook Is Bingeing on Bay Area Real Estate

Noah Buhayar and Sarah Frier write up Facebook’s rapid Bay Area real estate expansion. It now spans six cities and will soon employ more people in the region — 35,000 — than its home base of Menlo Park even houses.

Jake Paul’s predatory marketing tactics point to bigger regulation concerns

Is controversial YouTuber Jake Paul doing a bunch of illegal things to push his merchandise? I mean, I would believe it!

In one especially painful example, Nerd City highlights Paul’s video “THE BEST SONG WE’VE MADE YET,” in which the YouTuber relentlessly plugs his merch, tour, music, and more in nearly half of a 14-minute video. “Jake understands and leans into heavy repetition as a principal of advertising … the words are artificially jammed into the sentences he says,” Nerd City says. For those who are too young to buy his products on their own, Paul encourages kids to ask their parents directly — a practice sometimes described as “pester power,” which is prohibited in the European Union via the Unfair Commercial Practices (UCP) Directive.

Launches

Twitter is testing threaded replies and status indicators

Twitter is testing threaded replies and presence indicators.

TikTok adds video reactions to its newly-merged app

The app formerly known as Musical.ly adds a video commenting feature:

Instead of text comments, these reactions will take the form of videos that are essentially superimposed on top of existing clips. The idea of a reaction video should be familiar to anyone who’s spent some time on YouTube, but TikTok is incorporating the concept in a way that looks pretty seamless.

Takes

It’s time to break up Facebook

Tim Wu’s new book is called The Curse of Bigness: Antitrust in the New Gilded Age. In it, he argues for a return to aggressive antitrust enforcement in the style of Teddy Roosevelt, saying that Google, Facebook, Amazon, and other huge tech companies represent a threat to democracy. He argues that Facebook should be required to sell off WhatsApp and Instagram:

“We live in America, which has a strong and proud tradition of breaking up companies that are too big for inefficient reasons,” Wu told me on this week’s Vergecast. “We need to reverse this idea that it’s not an American tradition. We’ve broken up dozens of companies.”

And finally …

@sweden signs off after seven years as Twitter voice of nation

In 2011 the people of Sweden had a crazy idea: what if they handed the keys to the national Twitter account to a different citizen each week? After 200,000 tweets from 365 different citizens, the account is now shutting down. But let us never forget the fun times we had:

The first curator was nicknamed “the masturbating Swede” after he detailed his preferred leisure activities. Others have fought with Denmark and Donald Trump, sparked outrage by asking why some people don’t like Jews, and admitted they’d rather be having sex.

Your move, @ireland!

Talk to me

Send me tips, comments, questions, and Instagram Shopping prototypes: casey@theverge.com.

Who’s really being silenced on Twitter?

“Social Media Giants are silencing millions of people,” the president mentioned today on his Social Media Giant of choice. “Can’t do this even if it means we must continue to hear Fake News like CNN, whose ratings have suffered gravely. People have to figure out what is real, and what is not, without censorship!”

It’s part of a recent campaign on President Trump’s part to depict Facebook, Twitter, and other platforms as hostile to conservative voices. Last month, he inveighed against “shadow bans.” An angrier take popped up August 18th, when he opined:

Social Media is totally discriminating against Republican/Conservative voices. Speaking loudly and clearly for the Trump Administration, we won’t let that happen. They are closing down the opinions of many people on the RIGHT, while at the same time doing nothing to others…….

You’re a smart person, and I won’t waste any of your time explaining why this is nonsense. (I already did, and not that long ago.) But Trump’s tweet hit me at a funny time — Twitter has felt unusually silent to me lately. The tweets that usually stream rapidly down my timeline each day have slowed to a crawl, and I’ve been trying to think about why.

I tweeted about this yesterday, noting that despite following 50 percent more people than I did three years ago, my feed has slowed dramatically over the past month. I heard back from many people who feel the same way. So why does the service feel so quiet lately?

On one hand, it’s August. News cycles are seasonal, and plenty of people — to their great credit! — take time off from the internet during summer vacations. This almost certainly explains some of the decline.

You could also explain it by looking at Twitter’s very observable decline: The company shed 3 million monthly users in the past quarter, according to its most recent earnings report. And while the company has never disclosed a daily usage number, it seems likely that plenty of people who were once daily or weekly users are now checking in fewer times than they used to.

But it also seems like there’s some sort of broader malaise, even among the famously Twitter-prone journalist class. (“I’ll admit that I just stopped tweeting except to promote my own work, and even then I don’t always do that,” my friend David Turner, who writes an excellent weekly newsletter about the streaming music industry, told me via email.)

But my favorite theory involves the president himself. Since Trump’s election, no story has dominated Twitter in the United States the way the president has. Each day brings a fresh outrage or crisis or legal development to consider. Each story immediately generates an entire universe of tweetstorms and takes. A platform open to every kind of story can often feel as if it is singularly devoted to one.

And so if millions of people are being silenced anywhere, it’s certainly not the MAGA trolls and the Resistance. Increasingly, it feels as if it’s everyone else. Not because they can’t respond — but because for almost two years, nearly anything they can think to tweet about feels entirely beside the point.

Democracy

Kremlin Sources Go Quiet, Leaving C.I.A. in the Dark About Putin’s Plans for Midterms

For many reasons, we have way less information about what Russia may be doing to interfere with the midterm elections than we had in 2016, Julian E. Barnes and Matthew Rosenberg report:

Technology companies and political campaigns in recent weeks have detected a plethora of political interference efforts originating overseas, including hacks of Republican think tanks and fake liberal grass-roots organizations created on Facebook. Senior intelligence officials, including Dan Coats, the director of national intelligence, have warned that Russians are intent on subverting American democratic institutions.

But American intelligence agencies have not been able to say precisely what are Mr. Putin’s intentions: He could be trying to tilt the midterm elections, simply sow chaos or generally undermine trust in the democratic process.

Tech Companies Are Gathering For A Secret Meeting To Prepare A 2018 Election Strategy

The tech platforms are huddling over how to prevent coordinated interference, Kevin Collier reports. This is excellent news:

Representatives from a host of the biggest US tech companies, including Facebook and Twitter, have scheduled a private meeting for Friday to share their tactics in preparation for the 2018 midterm elections.

Last week, Facebook’s head of cybersecurity policy, Nathaniel Gleicher, invited employees from a dozen companies, including Google, Microsoft, and Snapchat, to gather at Twitter’s headquarters in downtown San Francisco, according to an email obtained by BuzzFeed News.

Facebook removes Syrian war page it believes is linked to Russian intel, Twitter keeps it online

Donie O’Sullivan has another case of Twitter’s platform moderation team acting like an extremely slow-motion version of every other platform’s:

The YouTube page associated with the group has also been removed, but on Thursday, the group’s Twitter account remained active, raising new questions about the level of coordination among social media platforms as they combat state-sponsored information warfare.

Volunteers found Iran’s propaganda effort on Reddit — but their warnings were ignored

The Iranian propaganda campaign unveiled this week was spotted by a group of Redditors a year ago, but they were ignored by Reddit, Ben Collins reports. On one hand, this points to the need for better customer service from platforms. On the other hand, Redditors will often point to a pile of corncobs and announce that it is the leading suspect in a bombing. So, rock and a hard place here.

The cure for Facebook’s fake news infection? It might be these women

Joan E. Solsman profiles the news product and news partnerships teams at Facebook. It’s a good piece focused on two of the company’s best advocates for journalists, Alex Hardiman and Campbell Brown. And it has these quotes, which deserve more attention than they’ve gotten:

Brown and Hardiman would argue that Facebook isn’t refusing its editorial role anymore.

The news teams decided their mission “meant actually having an opinion, taking responsibility for our content, and deciding that we were going to do a lot to actively prioritize quality journalism,” Hardiman says.

Brown calls it “a big step for Facebook,” this acknowledgment that Facebook must define quality news and promote it.

Infowars Said YouTube Ban Would Make It Stronger. Actually, It’s Been Crushed.

What if de-platforming is good? Will Sommer has some data points for us:

Two weeks later, though, the Infowars app is set to slip out of the top 30 news apps, and Infowars is nowhere near replacing its lost YouTube viewership.

Infowars currently hosts its videos on Real.Video, a niche video hosting site that promises that content on the platform is “protected under free speech” and prominently features other channels promoting militias or dubious nutrition ideas. Infowars videos on Real.Video regularly receive only a few hundred or thousand views.

“We received instructions from Facebook not to touch the posts of the cabinet minister”

A couple folks sent me this story based on an interview with a Facebook content moderator in Hungary. It’s about an anti-immigration politician whose video post on the subject gets removed for violating Facebook content policies, but later gets reinstated on the grounds that it was newsworthy. (To the extent that it was the subject of an ongoing political debate in the country, this seems fair.) In any case, the newsy nugget is that the moderator (codenamed “Zoltan” here) says Facebook stepped in to proactively prevent moderators from taking action on the politician’s future posts:

After we took Lazar’s video down, and they reinstated it above our paygrade, we received instructions not to touch any of his posts, along with posts of the pro-government news portal Origo.hu, but to forward all reports concerning these straight to the Irish Facebook headquarters.

Elsewhere

Warehouse workers in Amazon program tweet positive comments about working conditions

Amazon regularly faces negative press cycles over the treatment of workers in its fulfillment centers. This month, an apparently legitimate group of employees created Twitter accounts — with company permission and, uh, encouragement — to talk about how much they love their jobs. Here’s Matt Day:

Identified by first names and “Amazon FC Ambassador,” they each opened a Twitter account this month, are unfailingly polite, and pepper emojis into conversations about the generosity of their benefits packages and job satisfaction at Amazon’s fulfillment centers, the company’s term for its sprawling warehouses.

In a typical interaction, one non-Amazon Twitter user opined that “the way Amazon treats its workers is shameful,” and linked to a news article about retailers that compete with Amazon.

Cindi, an Amazon “ambassador” from Etna, Ohio, replied with information about her work breaks.

“The way Amazon treats its employees is GREAT, we work hard, have fun and are always ready to make history,” she posted. “We have several break rooms throughout the facilities, I get two 30 mins breaks through my shift which is great.”

Google AMP beat Facebook Instant Articles, but publishers start to question AMP’s benefits

Most publishers aren’t seeing a monetary benefit from shifting to Google AMP, Lucia Moses reports:

“AMP had a lot of hype and promise,” said Chris Breaux, director of data science at Chartbeat. “It’s really good for users in providing a consistent experience in terms of page-load time. The real question is, do you see more traffic than you would have if you didn’t do the implementation? The answer for two-thirds of publishers is, no.”

Launches

Facebook’s Instagram testing college community feature

Instagram is making a new play to be the official college group chat of the class of 2021, Sara Salinas and Michelle Castillo report:

Instagram users are prompted to join a college community and “connect with other students.” Opting in adds a user’s university and graduating year — selected by the user from predetermined options — to their profile and grants access to class-based lists of other students who’ve opted into the community.

You can direct message or watch a user’s public Story directly from the community lists.

Facebook tests ‘things in common’ label to try to connect non-friends

Smart Hunter Walk speculates that this is a civility measure in disguise: maybe you’ll be less likely to troll someone in the comments once you see that you share a hometown and a favorite sports team. Anyway here’s Rich Nieva:

Here’s how it works: When you read through a public conversation – like on a brand or publisher page – Facebook will highlight things you have in common with non-friends who have left comments. So, under someone’s name, you might see a label that says “You both went to the University of Virginia,” or that you’re both from Phoenix.

Other things the label might highlight: if you’re both a part of the same public Facebook group, or if you work for the same company, but are not Facebook friends. The company said the idea is to spark connections people might otherwise pass over.

Inky’s book recommendation app helps you find new reads

Amazon acquired Goodreads five years ago and has had the social book-reading market basically to itself ever since. As often happens in these cases, the company has done almost nothing with its acquisition, leaving room for scrappy upstarts. I wouldn’t put venture capital into a new social book-reading app, but I will absolutely download it over the weekend.

Takes

Here’s a data fellow at The Wall Street Journal arguing that the process for applying for WhatsApp research grants has been marred by technical problems.

And finally …

This Security Guard Filmed All His Farts for Six Months and Went Viral

Paul Blart: Mall Cop is a famously bad 2009 movie starring Kevin James as a cop who works at the mall. But nothing in the film could prepare us for the Instagram account @PaulFlart, in which the creator and star films himself farting while wearing his security guard uniform:

The internet’s next viral star is a security guard at a Florida hospital who spent the last six months publicly logging his sonically-perfect farts on Instagram. Now the 31-year-old is poised to turn his flash-in-the-bedpan success into a lucrative brand that can be summed up by his Instagram bio: The Fart Authority.

His first name is Doug (he declined to give VICE his last name, or the name of the hospital), but the Kevin James-looking everyman is known on his 20,000 follower-strong Instagram account as Paul Flart, a stinky offshoot of mall cop Paul Blart. On Wednesday, a video compilation of his most memorable ass clappers earned over 374,000 views after shooting to the top of Reddit’s r/videos forum. Now he has followers from all over the world. “It transcends all languages. There’s no translation necessary, it’s just funny,” he told VICE over the phone.

Somewhat inevitably, Doug got fired this week. But have hope: he has a Patreon. The internet is magic, have a nice weekend.

Talk to me

Send me tips, comments, questions, farts, weekend plans: casey@theverge.com.

Facebook should help us understand the link between political speech and violence

On Tuesday, The New York Times’ report on a study of how Facebook promoted anti-refugee violence in Germany galvanized discussion about how even normal political speech on the platform can drive users to extremes. On Wednesday, the report was criticized on the grounds that it may have mistaken correlation for causation, drawing more dramatic conclusions than the evidence can support.

The case against the piece goes like this:

  • The study, which you can read here, has not been peer-reviewed.
  • The study authors could not measure actual Facebook usage, which is private, so they relied on problematic proxies. Their proxy for average, non-ideological usage of Facebook was the Nutella Germany page, with 32 million followers — but they managed to collect data on only 21,915 users who interacted with the page, and whose German location could be verified.
  • Data from the study is charted week by week, rather than in the moment. As Ben Thompson and others have pointed out, it seems just as possible that anti-refugee violence inspired Facebook posts as that Facebook posts inspired violence.
  • The article reported that Facebook was linked to a 50 percent increase in attacks on refugees; an update to the study revised that figure downward, to 35 percent.
  • The article represents a case of confirmation bias. People (like me!) who shared it tend to be sympathetic to the idea that heavy usage of Facebook can be corrosive inside democracies, and so we accepted it without appropriate skepticism.

Some of these criticisms seem fairer to me than others. (The week-by-week chronology issue bothers me the most; I’ve reached out to the study’s authors, and will share anything I hear back from them in this space.) But none turns the original article on its head — or acknowledges that the Times journalists, Amanda Taub and Max Fisher, bolstered the study’s findings with their own on-the-ground reporting. (Thompson did note the additional reporting.)

And while the study hasn’t been peer-reviewed, the Times authors did seek input from other experts, who called the findings “credible, rigorous — and disturbing.” It seems worth noting that The Economist also covered the study when it was first published earlier this year, and drew similar (if somewhat less agitated) conclusions.

In any case, I can’t imagine anyone reading the study and, even accounting for its flaws, not believing that further inquiry is warranted. “More study is needed” is perhaps the most common conclusion to be drawn from any study, and this one is no exception.

But as lots of folks noted online today, further study is difficult, because Facebook data is private by default. As New York’s Max Read put it: “The frustrating thing about the justified quibbles around this Facebook hate-crimes study is that Facebook itself could, in a couple hours, pull together a comprehensive data report that would answer all of the questions.”

The data in question is generally private for good reason — make it public, and you’ve got a Cambridge Analytica situation on your hands. But given the urgency of the question — does Facebook push normal political speech to extremes, inciting violence even in developed nations? — I wish Facebook would find a way.

Thompson doubts the company will:

Of course at best this sort of study will be done for internal consumption; I suspect it is more likely it won’t be done at all. Facebook has publicly buried its head in the sand about filter bubbles at least twice that I can remember, first in 2015 with a questionable study whose results were misinterpreted and last year on an earnings call.

The reason why seems clear: unlike fake news or Russian agents, which involve a bad actor the company can investigate and ban, the propagators of filter bubbles are users ourselves. To fix the problem is to eliminate the temporary emotional comfort that keeps users coming to Facebook multiple times a day, and that is if the problem can be fixed at all. Indeed, perhaps the most terrifying implication of this study is that, if true, the problem is endemic to social networks, which means to eliminate the former necessitates the elimination of the latter.

On the second point, I fear Thompson is right. And on the first — that Facebook will ignore studies like this — I can only hope he’s wrong.

Democracy

Democratic Party Says It Has Thwarted Attempted Hack of Voter Database

Someone is still trying to hack the Democratic National Committee, Sheera Frenkel and Jonathan Martin report:

A cybersecurity researcher from a firm called Lookout contacted the D.N.C. on Tuesday about the attempted intrusion, said two officials briefed on the matter who were not authorized to speak publicly.

The F.B.I. is investigating, according to one of the officials. But the attempted hack, which was described as sophisticated, was not successful, the committee said.

Taking Down More Coordinated Inauthentic Behavior

Facebook updated yesterday’s post, in which it revealed that it had removed more than 600 accounts and pages identified as part of Iranian and Russian influence campaigns, to include examples of their posts.

An Update on Our App Investigation

Facebook has now suspended 400 apps as part of its post-Cambridge Analytica audit, including myPersonality, an app built by University of Cambridge researchers that had 4 million users, Facebook’s Ime Archibong says:

It’s clear that they shared information with researchers as well as companies with only limited protections in place. As a result we will notify the roughly 4 million people who chose to share their Facebook information with myPersonality that it may have been misused. Given we currently have no evidence that myPersonality accessed any friends’ information, we will not be notifying these people’s Facebook friends. Should that change, we will notify them.

Zuckerberg and his co-founder pour millions into midterm initiatives

Facebook co-founders Mark Zuckerberg and Dustin Moskovitz are backing campaigns to support housing and criminal justice reforms in November, David McCabe reports:

Both organizations have given $1 million each to a group supporting an Ohio ballot initiative that would institute criminal justice reforms, including reclassifying drug possession crimes from felonies to misdemeanors.

The Chan Zuckerberg Initiative gave $250,000 to support a ballot measure in California that would fund affordable housing projects.

Facebook reinstated Crimson Hexagon, but questions linger

Facebook shut down API access for a data-mining firm named Crimson Hexagon last month after the Wall Street Journal reported that it was working with the US government and a nonprofit tied to the Kremlin, in violation of its policies. But it’s back on the platform now, reports Alex Pasternack:

The reinstatement, which began earlier this month, followed “several weeks of constructive discussion and information exchange,” said Dan Shore, Crimson’s chief financial officer. But the companies didn’t specify the results of the inquiry or explain why access was restored, raising more questions about how Facebook and other platforms police third parties like Cambridge Analytica and Crimson Hexagon.

China shuts down blockchain news accounts, bans hotels in Beijing from hosting cryptocurrency events

China is shutting down WeChat news channels if they publish blockchain news.

Elsewhere

Nobody Trusts Facebook. Twitter Is a Hot Mess. What Is Snapchat Doing?

Sarah Frier gets some rare interview time with Evan Spiegel, who uses it to promise everyone that he is studying very hard to be a CEO. The story is full of good details, but the heart-shaped “talking piece” geode is the part you want to read:

On the second floor of the new headquarters of Snap Inc. in Santa Monica, Calif., is a room dedicated to helping employees open up. It’s round and lined with potted plants. “Speak from the heart,” reads a framed sign on the wall. “Listen from the heart.” Employees show up in groups of about a dozen, sit cross-legged on black cushions, and take turns with the “talking piece,” a heart-shaped purple geode that gives the bearer the right to confidentially share deep thoughts.

This is the inner sanctum for what Snap calls “Council,” a sort of New Age corporate retreat that uses a technique Chief Executive Officer Evan Spiegel learned in childhood. It was also where I found myself on a Friday morning in July. Council meetings, I’d been told by the company’s communications chief, are “sacred.” They’re also a real-life example of what Spiegel wants people to do with his smartphone app, Snapchat: share intimately, without fear of judgment from the outside world.

Facebook to Remove Data-Security App From Apple Store

Facebook plans to pull its data-security app Onavo from the App Store after Apple complained that it violated its data collection policies, Deepa Seetharaman reports. Data from Onavo helps Facebook monitor the growth of competing social networks and has been seen as a major competitive asset:

Apple’s decision widens the schism between the two tech giants over privacy and is a blow to Facebook, which has used data gathered through the app to track rivals and scope out new product categories. The app, called Onavo Protect, has been available for free download through Apple’s app store for years, with updates regularly approved by Apple’s app-review board.

The Tinder lawsuit is going to get nasty

My colleague Ashley Carman has new details on the latest Tinder co-founder lawsuit:

Meanwhile, a source close to Tinder says that Rad actually sold a great deal of stock following the merger between Tinder and Match Group, and suggested that the co-founder didn’t have much faith in the future of the dating app and that Match’s valuation was accurate. According to SEC filings, Rad exercised about half of his stock options in Match on August 4th and 6th, which Match repurchased for a net pay out of $94,413,552.06 based on a closing price of $18.89 per share. His other half was exercised on August 9th and he received net 816,805 shares of IAC stock.

Oculus Targeting Q1 2019 For Santa Cruz Release, Rift Ports Planned

Facebook’s next virtual-reality headset will be a high-end version of the recently released Oculus Go, David Jagneaux reports:

The headset is designed to function on its own without the need for a PC, similar to Oculus Go, but with cameras added for inside-out tracking of 6DOF head movement and two Oculus Touch-style controllers. The last time we went hands-on with Santa Cruz was at the Oculus Connect 4 conference last year. The release window lines up with the two year anniversary of the original Rift’s launch at the end of Q1 2016, March 28th.

Facebook will forego 30% share of Instant Games in-app revenue on Android

Facebook is trying to give a boost to Instant Games, which feel like they’re going nowhere, at least in the United States:

At the outset, Facebook said developers would receive 70 percent of the Instant Games revenue, with 30 percent going to Facebook. But on Android, developers also had to share 30 percent of their revenue with Google. In fact, Google took 30 percent of the total, and then Facebook took 30 percent of what was left. Developers were left getting only 49 percent of the total revenue on games they had created.

After evaluating this, Facebook has decided to roll back its revenue share, so the developers only have to pay Google on Android.
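
The stacked splits are easy to check with back-of-the-envelope arithmetic; the 70/30 figures come from the report above, and the rest follows from them:

```python
# Back-of-the-envelope check of the stacked revenue splits described above.
gross = 100.00                       # $100 of in-app purchases on Android

after_google = gross * 0.70          # Google takes 30% of the total first
old_dev_share = after_google * 0.70  # then Facebook took 30% of what was left
new_dev_share = after_google         # Facebook now waives its cut on Android

print(f"old developer share: {old_dev_share:.0f}%")  # 49% -- the figure in the report
print(f"new developer share: {new_dev_share:.0f}%")  # 70%
```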

Posting Instagram Sponsored Content Is the New Summer Job

Move over lemonade stands, the hot new trend for teens is sponcon, Taylor Lorenz reports:

With “jobs you need to do a lot of training,” says Lucy, a 13-year-old in Pennsylvania who asked to be referred to by a pseudonym. “Then you have to, like, physically go out and do the job for hours a day. Doing this, you can make one simple post, which doesn’t take a while. That single post can earn you, like, $50.” Last month, she started posting brand-sponsored Instagrams for her more than 8,000 followers. So far, she says, she’s earned a couple hundred dollars.

How to Share an Instagram Account With Your Significant Other

Sharing an Instagram account with a lover sounds like a nightmare, but some lost souls are doing it, reports Emily Dreyfuss, who is now one of those souls herself. (Twist: so is Taylor Lorenz, who is featured in this story!)

Yes, I still feel a twinge of embarrassment about sharing an account with Seth sometimes. But so far, my tiny hang-up is the only real downside to our new joint-account life. If you’re considering it and you’re sensitive to the judgment of others, you should know that when I asked on Twitter whether anyone knew people who did this, the common response was “ew” and “I assume anybody who replies to this in the affirmative gets arrested.” But you know what? Lock me up, folks, because I love love and I love our joint Instagram account.

Launches

Facebook is working on mesh Wi-Fi to possibly bring to developing countries

Shannon Liao updates us on one of Facebook’s internet connectivity efforts, this one in Tanzania:

Facebook gave an update yesterday on its efforts to expand Express Wi-Fi, an app that lets unserved communities pay for internet service. The company is still working on efforts to reach the 3.8 billion people in the world who don’t have internet access, in order to grow its potential market.

Takes

It’s Too Late to Protect the 2018 Elections. But Here’s How the U.S. Can Prepare for 2020.

Alex Stamos, who just left his job as Facebook’s chief security officer, vents at the United States government for failing to do more to improve the nation’s cybersecurity defense mechanisms:

If the weak response of the Obama White House indicated to America’s adversaries that the U.S. government would not respond forcefully, then the subsequent actions of House Republicans and President Trump have signaled that our adversaries can expect powerful elected officials to help a hostile foreign power cover up attacks against their domestic opposition. The bizarre behavior of the chairman of the House Permanent Select Committee on Intelligence, Rep. Devin Nunes, has destroyed that body’s ability to come to any credible consensus, and the relative comity of the Senate Select Committee on Intelligence has not yet produced the detailed analysis and recommendations our country needs. Although by now Americans are likely inured to chronic gridlock in Congress, they should be alarmed and unmoored that their elected representatives have passed no legislation to address the fundamental issues exposed in 2016.

And finally …

Patrick Gerard has your algorithmic failure of the day.

Sorry Patrick!!

Talk to me

Send me tips, questions, comments, and alternate theories about violence against refugees: casey@theverge.com.

How average Facebook users in Germany inspired a wave of violence against refugees

Recently I ran into a well-known tech CEO and asked him how he was feeling about social networks. (I am extremely fun at parties.) The CEO’s unequivocal response surprised me: “shut them down,” he said. His reasoning was simple: the networks undermine democracies in ways that cannot be fixed with software updates. The only logical response, in his mind, was to end them.

Whether social networks can be fixed is the question looming over Amanda Taub and Max Fisher’s deeply unsettling new report in The New York Times. The report, based on academic research and bolstered by extensive on-the-ground reporting, finds a powerful link between Facebook usage and attacks on refugees in Germany:

Karsten Müller and Carlo Schwarz, researchers at the University of Warwick, scrutinized every anti-refugee attack in Germany, 3,335 in all, over a two-year span. In each, they analyzed the local community by any variable that seemed relevant. Wealth. Demographics. Support for far-right politics. Newspaper sales. Number of refugees. History of hate crime. Number of protests.

One thing stuck out. Towns where Facebook use was higher than average, like Altena, reliably experienced more attacks on refugees. That held true in virtually any sort of community — big city or small town; affluent or struggling; liberal haven or far-right stronghold — suggesting that the link applies universally.

The most striking data point in the piece: “wherever per-person Facebook use rose to one standard deviation above the national average,” the authors write, “attacks on refugees increased by about 50 percent.”
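
For readers unfamiliar with the statistical phrasing, “one standard deviation above the national average” is simply a z-score threshold. Here is a minimal, hypothetical sketch with invented numbers (the study itself relies on a regression with many controls, not this calculation):

```python
# Hypothetical illustration of "one standard deviation above the national average."
# The numbers are invented; the actual study uses a regression with many controls.
import numpy as np

rng = np.random.default_rng(0)
per_person_use = rng.normal(loc=1.0, scale=0.3, size=500)  # fake town-level Facebook use

mean, std = per_person_use.mean(), per_person_use.std()
z_scores = (per_person_use - mean) / std

heavy_use_towns = z_scores >= 1.0  # at least one SD above the national average
print(f"{heavy_use_towns.sum()} of 500 towns sit above the threshold "
      f"of {mean + std:.2f} units of per-person use")
```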

From there, the authors explore why this happens. They examine how Facebook promotes more emotional posts over mundane ones, distorting users’ sense of reality. Residents of towns that had been relatively welcoming to immigrants eventually encountered an overwhelming tide of anti-refugee sentiment whenever they opened the Facebook app.

Much of this activity is driven by so-called “superposters,” who flood the service with negative sentiment. This asymmetry of passion makes it appear as if refugees have less support than they actually do, which in turn inspires more people to gang up against them.

One of the most notable features of the study, which you can read in its entirety here, is how it determines that Facebook is uniquely responsible for the surge of anti-immigrant violence in Germany. Here are Taub and Fisher again:

German internet infrastructure tends to be localized, making outages isolated but common. Sure enough, whenever internet access went down in an area with high Facebook use, attacks on refugees dropped significantly.

And they dropped by the same rate at which heavy Facebook use is thought to boost violence. The drop did not occur in areas with high internet usage but average Facebook usage, suggesting it is specific to social media.

Also notable: these attacks happened despite strict laws against hate speech in Germany, which require Facebook to take any offending posts down within 24 hours of being reported. As the authors note, the posts driving the violence largely do not qualify as hate speech. The overall effect of standard political speech has been to convince large swathes of the population that Germany is beset by a foreign menace — which triggered a political crisis in the country earlier this year.

In New York, Brian Feldman says Facebook has two choices:

It can do more to limit user speech on posts that are not explicitly hateful but couched in the rhetoric of civil discussion — the types of posts that seem to fuel anti-refugee violence. Or it can tweak its distribution mechanisms to minimize overall user engagement with Facebook, which would also reduce the amount of ad money it collects.

Surprisingly, Facebook declined to comment on the study or its implications. But even as it was still reverberating around the internet, the company was getting ready to answer for another set of concerns: four new influence campaigns linked to Russia and Iran. From my story:

Facebook removed more pages today as a result of four ongoing influence campaigns on the platform, taking down 652 fake accounts and pages that published political content. The campaigns, whose existence was first uncovered by the cybersecurity firm FireEye, have links to Russia and Iran, Facebook said in a blog post. The existence of the fake accounts was first reported by The New York Times.

“These were networks of accounts that were misleading people about who they were and what they were doing,” CEO Mark Zuckerberg said in a call with reporters. “We ban this kind of behavior because authenticity matters. People need to be able to trust the connections they make on Facebook.”

People indeed ought to be able to trust the connections they make on Facebook. But between the study of Facebook’s effects on Germany and news of multiple ongoing state-sponsored attacks on the service, it was hard to say where that trust could come from.

“When you operate a service at the scale of the ones that we do, you’re going to see a lot of the good things, and you’re going to see people abuse the service in every way possible as well,” Zuckerberg told reporters. And yet the thing that troubles me most today wasn’t the people abusing the service. It was the Germans using Facebook just as it was intended to be used.

Democracy

Facebook is rating the trustworthiness of its users on a scale from zero to one

Facebook relies on user reports to determine whether a post is false or misleading. But users themselves can seek to mislead Facebook by falsely reporting credible information. And so Facebook has begun assigning users a score to help it weight their reports. It’s a bit less dramatic than it sounded when the story first hit — this is not an equivalent to, say, an Uber rating or Reddit karma — but it does seem like a good and useful thing. Elizabeth Dwoskin reports:

A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.
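
To make the mechanics a bit more concrete, here is a minimal sketch of how weighting reports by reporter reliability could work. The function names, the neutral default, and the threshold are all hypothetical illustrations, not Facebook’s actual system:

```python
# Hypothetical sketch of weighting "false news" reports by reporter reliability.
# Facebook has not published its method; this only illustrates the idea that a
# report counts for more when the reporter's past flags proved accurate.

def reporter_weight(accurate_flags: int, total_flags: int) -> float:
    """Return a 0-1 score: the share of a user's past flags that checked out."""
    if total_flags == 0:
        return 0.5  # no history: treat the reporter as neutral
    return accurate_flags / total_flags

def should_escalate(reports, threshold=2.0):
    """Sum the weights of everyone who flagged a post; escalate past a threshold."""
    weighted_total = sum(reporter_weight(a, t) for a, t in reports)
    return weighted_total >= threshold

# Ten reporters who are usually wrong count for less than three who are usually right:
print(should_escalate([(0, 10)] * 10))  # False: 10 * 0.0 = 0.0
print(should_escalate([(9, 10)] * 3))   # True:  3 * 0.9 = 2.7
```

The point of a score like this, as described in the reporting, is simply to decide which flagged posts are worth sending along for review first, rather than to rate people in any broader sense.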

Facebook Pushes Back on Reporting About its User Trust Ranking

Facebook didn’t seem to like the Post story:

“The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading. What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system,” a Facebook spokesperson wrote via email. “The reason we do this is to make sure that our fight against misinformation is as effective as possible.”

Facebook Is Removing More Than 5,000 Ad Targeting Options To Prevent Discrimination

After a series of reports by ProPublica and others about how Facebook’s ad platform can enable discrimination, the company said it would remove thousands of targeting capabilities, Alex Kantrowitz reports:

Facebook’s removal of the targeting options comes amid an investigation from the US Department of Housing and Urban Development, which filed a complaint last week alleging Facebook had enabled discriminatory housing practices with its ad targeting options. The complaint began a process that could eventually lead to a federal lawsuit.

On the frontline of India’s WhatsApp fake news war

Soutik Biswas examines how India is working to educate young people about viral misinformation on WhatsApp, in the hopes that it will reduce the number of murders inspired by hoaxes on the platform:

To combat this, district officials have now begun 40-minute-long fake news classes in 150 of its 600 government schools.

Using an imaginative combination of words, images, videos, simple classroom lectures and skits on the dangers of remaining silent and forwarding things mindlessly, this initiative is the first of its kind in India. This is a war on disinformation from the trenches, and children are the foot soldiers.

New Russian Hacking Targeted Republican Groups, Microsoft Says

Russia is now targeting conservative think tanks that favor stronger sanctions against the country, according to new research from Microsoft, David E. Sanger and Sheera Frenkel report:

The goal of the Russian hacking attempt was unclear, and Microsoft was able to catch the spoofed websites as they were set up.

But Mr. Smith said that “these attempts are the newest security threats to groups connected with both American political parties” ahead of the 2018 midterm elections.

Jack Dorsey On Deleting Tweets, Banning Trump, And Whether An Unbiased Twitter Can Exist

Your Jack Dorsey interview of the day is with BuzzFeed’s Charlie Warzel. He offers lots more big-picture talk about “incentives” and “conversation,” and little in the way of concrete plans. But I’m glad Warzel suggested to Dorsey that he is getting played by conservatives crying wolf about shadow bans:

Dorsey: I want to acknowledge my bias and I also want to acknowledge there’s a separation between me and our company and how we act. We need to show that in our, we need to be a lot more transparent, we need to show that in our product, we need to show that in our policy and we need to show that in our enforcement and I think in all three we have, but it bears repeating again and again and again. The reason we’re talking with more conservatives is just in the past we haven’t really done much. At least I haven’t.

Twitter Gets Powerful Win in “Must-Carry” Lawsuit–Taylor v. Twitter

Eric Goldman updates us on a case in which white supremacists sued Twitter in an effort to prevent the company from banning them. An appeals court ruled that Twitter is protected from the suit by section 230 of the Communications Decency Act.

Number of Third-Party Cookies on EU News Sites Dropped by 22% Post-GDPR

What if GDPR … is good? Catalin Cimpanu offers a data point:

The number of tracking cookies on EU news sites has gone down by 22% according to a report by the Reuters Institute at the University of Oxford, who looked at cookie usage across EU news sites in two phases, in April 2018 and July 2018, pre and post the introduction of the new EU General Data Protection Regulation (GDPR). […]

“We may be observing a kind of ‘housecleaning’ effect. Modern websites are highly complex and evolve over time in a path-dependent way, sometimes accumulating out-of-date features and code,” researchers said. “The introduction of GDPR may have provided news organizations with a chance to evaluate the utility of various features, including third-party services, and to remove code which is no longer of significant use or which compromises user privacy.”

Line is another chat app rife with spam, scams, and bad information. The volunteer-supported Cofacts is fact-checking them in the open

Kirsten Han profiles Cofacts, a collaborative fact-checking service that uses bots to check information that’s spreading virally on Line, a popular Asian messaging app. The bot has received more than 46,000 messages, 35,180 of which it answered automatically:

Any interested volunteers can log into the database of submitted messages and start evaluating the messages, using the Cofacts form. Cofacts offers step-by-step instructions for those who can’t figure out how to use the platform, as well as a set of clear editorial guidelines that helps volunteers weed out uncheckable messages or ones that are “personal opinion,” and what types of reliable sources they can use to back up their fact-checking work.

Based on data collected by the Cofacts team on the messages they’ve received so far, the misinformation debunked on the platform can range from fake promotions and medical misinformation to false claims about government policies.
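
Those numbers suggest a simple loop: compare each incoming message against the database of already-checked submissions, reply automatically when there’s a match, and queue everything else for volunteers. Here is a rough sketch of that flow; the names, replies, and the exact-match rule are my own illustration, not Cofacts’ actual code, which is considerably more sophisticated:

```python
# Illustrative sketch of a Cofacts-style reply loop. Names, the exact-match
# lookup, and the reply text are hypothetical; the real service does much more
# (fuzzy matching, multiple volunteer replies per message, and so on).

checked_messages = {}   # previously submitted message text -> a volunteer-written reply
review_queue = []       # unmatched messages, waiting for volunteer fact-checkers

def handle_incoming(message: str) -> str:
    reply = checked_messages.get(message.strip())
    if reply is not None:
        return reply                      # answered automatically
    review_queue.append(message)          # otherwise a volunteer picks it up later
    return "We haven't checked this one yet; volunteers will take a look."

# A message that was already fact-checked gets an instant answer;
# a new one lands in the queue instead.
checked_messages["Drinking hot lemon water cures cancer"] = "False: no clinical evidence supports this."
print(handle_incoming("Drinking hot lemon water cures cancer"))
print(handle_incoming("The government is giving away free phones tomorrow"))
print(len(review_queue))  # 1
```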

How misinformation spreads on Line — one of the most popular messaging apps in Southeast Asia

Speaking of Line, Daniel Funke looks at how public accounts on the service grow big by promising users free stickers and then pivoting to disinformation once they get a large audience. Many of the influence campaigns appear to advertise health care products of dubious value:

Many of the top misinforming accounts on the app publish accurate tips about things like lowering blood pressure alongside spammy ads for things like detoxifying foot pads — and Anutarasoat said channels regularly profit from it.

“The products that some of these networks want to sell, (they’re) not harmful products, but not useful like they advertise — like a fake website that’s selling medicine that can reduce blood pressure, and they’re targeting it for older people who have high blood pressure problem,” he said. “They create a convincing website that has a picture of a doctor and a picture of a witness. In some websites, they actually fake that it is a website from public health ministries.”

Elsewhere

Say ‘Aloha’: A closer look at Facebook’s voice ambitions

Drawing on some new information from researcher Jane Manchun Wong, Josh Constine reminds us that Facebook’s home speaker is still in development.

Schools Are Mining Students’ Social Media Posts for Signs of Trouble

Tom Simonite examines the state of social media monitoring in schools and finds several companies vying for district dollars with a promise of protecting schools from attack. But their value is unclear, and they could have significant downsides:

There’s little doubt that students share information on social media that school administrators might find useful. There is some debate over whether — or how — it can be accurately or ethically extracted by software.

Amanda Lenhart, a New America Foundation researcher who has studied how teens use the internet, says it’s understandable schools like the idea of monitoring social media. “Administrators are concerned with order and safety in the school building and things can move freely from social media—which they don’t manage—into that space,” she says. But Lenhart cautions that research on kids, teens, and social media has shown that it’s difficult for adults peering into those online communities from the outside to easily interpret the meaning of content there.

Slack raises $427 million, now valued above $7 billion

Sometimes I wonder whether the Time Well Spent movement will ever affect the famously noisy, all-consuming office chat app Slack. The answer so far — no, not at all!

Launches

Tinder is rolling out a college-only service, Tinder U

My colleague Ashley Carman reports on the launch of Tinder U, a version of the dating app just for college students. I imagine this will be quite popular, although it may turn out that Tinder itself is good enough.

Tinder’s marketing frames the service as ideal for finding a study buddy or someone to hang out with on the quad. Also, if Tinder can build in a new dedicated user base of 18-year-olds, it can also start converting them to paid users sooner. Facebook employed a similar strategy when it first launched. The platform required a .edu email address to build out a loyal college following before opening widely a few years later. The opposite is happening with Tinder: everyone can use it, but college kids now might want a safe haven from creepy older people.

Google is developing an experimental podcast app called Shortwave

My colleague Russell Brandom finds evidence of a new podcast app from Google:

Nothing in the trademark filing specifies the kind of audio being accessed, but a Google representative said the focus of the app was on spoken word content. There is little public information about the app, although Google has played with smart captioning, translation, and other AI-assisted features in previous podcast products.

Takes

Advertising is obsolete – here’s why it’s time to end it

Ramsi Woodcock makes a sweeping case against advertising, arguing that the internet has made its core function of informing consumers obsolete, and that it could even violate antitrust laws. This is a big take, but a well-considered one:

The courts have long held that Section 2 of the Sherman Act prohibits conduct that harms both competition and consumers, which is just what persuasive advertising does when it cajoles a consumer into buying the advertised product, rather than the substitute the consumer would have purchased without advertising.

That substitute is presumably preferred by the consumer, precisely because the consumer would have purchased it without corporate persuasion. It follows that competition is harmed, because the company that made the product that the consumer actually prefers cannot make the sale. And the consumer is harmed by buying a product that the consumer does not really prefer.

Facebook and Twitter aren’t liberal or conservative. They’re capitalist.

Will Oremus listens to the Radiolab episode I wrote about yesterday and examines it in the context of charges of bias against platforms:

Donald Trump, Ted Cruz, and other Republicans probably won’t buy Dorsey’s claim that he tries to keep his biases out of the company’s decision-making, particularly the next time an Alex Jones gets the boot. Nor will most liberals believe that he isn’t bending over backward to appease the hard right, especially the next time an Alex Jones isn’t ejected from the platform. When a company that shapes the flow of online political speech is making high-stakes decisions about who can talk and who can’t, it’s hard to accept that those decisions are the product of a jury-rigged rulebook or algorithm rather than political calculations or a secret agenda.

But it’s worth remembering, with these controversies, that social media companies do have an agenda, and it isn’t secret. Their agenda is to keep making money, and when it comes to high-stakes decisions about who can say what online, the most lucrative option is often to play dumb.

And finally …

Donald Trump Jr.’s Instagram Is a Shakespearean Tragedy

The president’s eldest son is just like us — which is to say, he reads the comments. Especially on Instagram, reports Eve Peyser:

He’ll respond to anyone—he frequently ignores comments from verified accounts, instead replying to messages from random accounts, which suggests that he reads all the comments. Which has got to hurt. But when replying to these so-called “whiny libs,” Don Jr. doesn’t hold back, chiding them for their low follower counts, and/or accusing them of being robots.

Something tells me Don may find himself receiving more comments than usual today.

Talk to me

Send me tips, questions, comments, academic studies: casey@theverge.com.

The 8-year-olds hacking state election websites

It’s been a rather grim week in the social-media-and-democracy cinematic universe, so let’s end on a positive note … that starts on a grim note!

“Voting systems in the United States are so woefully hackable, even an 8-year-old could do it.” So begins Issie Lapowsky’s look at a competition to be hosted next week at Def Con, the venerable hacking conference in Las Vegas. The competition in question is being hosted by the Democratic National Committee, who you might remember from such previous hacks as the 2016 presidential election.

Here’s Lapowsky on how it’s going to work:

The contest will include children, ages 8 to 16, who will be tasked with penetrating replicas of the websites that secretaries of state across the country use to publish election results. They’ll vie for $2,500 in prize money, $500 of which will come from the DNC and be awarded to the child who comes up with the best defensive strategy for states around the country.

The eye-popping reason that the Democrats have turned to children to hack them? “State election sites are so deeply flawed, Braun says, no adult hackers would be interested in cracking them. ‘The hackers would laugh us off the stage if we asked them to do this.’”

Ha …………………………………………………… ha?

In any case, this story is notable for at least three reasons. One, our focus — particularly around here — on the ongoing influence campaigns on social media can distract from the ongoing attacks on our actual election infrastructure. Both are worthy of your attention, even if you’re usually only going to get the former around here.

Two, the Democrats’ new security people come from the world of social media. Raffi Krikorian and Bob Lord both worked on security issues at Twitter, among stints at other big tech companies, before arriving at the DNC.

Three, this story serves as a nice reminder that solving our broken-reality crisis will need to involve average people. Tech companies and national governments have a giant role to play, but there’s plenty of work to go around for everyone.

Even the 8-year-olds.

Correction, August 6th: The headline of this article has been updated to clarify that the 8-year-olds have been asked to hack replicas of state election websites, not voting machines.

Democracy

Google Struggles to Contain Employee Uproar Over China Censorship Plans

Googlers are upset about the big (and secret) new push into China, Ryan Gallagher reports:

Company managers responded by swiftly trying to shut down employees’ access to any documents that contained information about the China censorship project, according to Google insiders who witnessed the backlash.

“Everyone’s access to documents got turned off, and is being turned on [on a] document-by-document basis,” said one source. “There’s been total radio silence from leadership, which is making a lot of people upset and scared. … Our internal meme site and Google Plus are full of talk, and people are a.n.g.r.y.”

The Surprising Truth About How Humans Determine Right and Wrong

Max Fisher and Amanda Taub write about research suggesting that we derive our morality from the people around us, and what that means for big social networks:

It especially raises the stakes for how we organize on social media. Sites like Facebook scramble the ways that we relate to one another. They replace our traditional person-to-person social networks with artificial, algorithm-driven networks meant to maximize the amount of time we spend on the site.

That doesn’t necessarily mean that they’re worse for our ability to collectively determine morality. But it definitely doesn’t mean that they’re better, either. We’re only barely beginning to understand the ways that social media can amplify things like misinformation, polarization, filter bubbles and extremism. Could Facebook also disrupt the processes by which we determine right from wrong — which is, after all, often a social act? How would that change our morality? Our tolerance of violence? Or our likelihood to commit it?

Here’s why the U.S. military’s tweets are so bad

Caroline Haskins explores the military’s guidelines for tweeting and finds their content to be rather too jovial given the circumstances:

“Balance ‘fun’ with ‘medicine,’” the handbook reads — in which case “medicine” refers to military promotional materials, or breaking news events like a successful military operation. “It is important to post command messages and organizational information, but try to keep the page entertaining enough for people to want to follow it. Don’t be afraid to have fun by posting interesting links, or asking trivia questions. Try posting a photo of the day, or asking a weekly question.”

Elsewhere

Snap Holds Very Brief Shareholder Meeting

Congratulations to Snap investors:

Snap held what may be the shortest annual shareholder meeting by a U.S. public company in history — not that anyone necessarily keeps records of that feat. Snap’s meeting lasted just two minutes and 46 seconds, an accomplishment that seems fitting for a company that pioneered disappearing messages.

Patreon creators scramble as payments are mistakenly flagged as fraud

Scary time to be reliant on Patreon income, my colleague Megan Farokhmanesh reports.

Creators logging on today found that some of their July payments have been affected due to what appears to be a combination of banking issues and changes to internal company operations. While it’s unclear why certain Patreon users (and not others) have been impacted by this problem, many creators say they’ve not gotten any support or answers from the company despite reaching out directly. And though it’s typical for some payments to be declined, the scope of the current issue is concerning to members of the community.

What Is ‘Gang weed,’ the Joker Meme about Society

Here is a bonkers explainer from Brian Feldman about gang weed, and rarely has a meme needed an explainer more. Feldman describes it as “a parody of aggrieved ‘stoner nihilist gentlemen gamers’ and people who see an ironic meme as a way to disguise their true feeling on the matter.” I barely understood any of it but enjoyed the explainer very much.

YouTube headquarters expansion plans show Google sees a booming future for video

YouTube has big expansion plans in San Bruno:

According to city managers, YouTube favors a proposal that would add 2.3 million square feet of office space and eventually bring more than 10,000 new jobs to the area. Not all those jobs or space would belong to Google but David Woltering, the community development director for the city of San Bruno, said “The vast majority of it would be YouTube’s.”

Wax Mark Zuckerberg at San Francisco Madame Tussauds is in the lobby for free selfies

Business Insider discovers a wax Mark Zuckerberg statue in San Francisco.

Launches

Facebook has started internal testing of its dating app

After F8, some smart people I know speculated to me that Facebook Dating would turn out to be vaporware. Maybe so, but in the meantime it’s testing internally.

Facebook launches Digital Literacy Library to help young people use the internet responsibly

My colleague Dami Lee notes that Facebook’s new “Digital Literacy Library” contains no information for spotting hoaxes or misinformation on social media.

Maisie Williams shows off Daisie, an app for artistic collaboration

Arya made a social app.

Takes

The Lasting Trauma of Alex Jones’s Lies

Megan Garber criticizes the current state of “information pollution”:

Competing truths — “alternative facts” — are no longer the primary threat to American culture; competing lies are. Everything was possible and nothing was true: Conspiracies now smirk and smog in the air, issued from the giant smokestacks at InfoWars and The Gateway Pundit and the White House itself. Hannah Arendt warned of the mass cynicism that can befall cultures when propaganda is allowed to proliferate among them; that cynicism is here, now. And it is accompanied by something just as destructive: a sense of pervasive despair. Americans live in a world of information pollution—and the subsequent tragedy of this new environmental reality is that no one has been able to figure out a reliable method of clearing the air.

And finally …

Swarms of Instagrammers force a Canadian sunflower farm to ban all visitors

Earlier this week, we ended with @insta_repeat, the Instagram account that features instances where everyone takes the same exact photo. Now here’s a story about what happens when everyone takes the same photo, which is that it ruins Canadian sunflower farms:

As The Globe and Mail reports, Bogle Seeds in Hamilton, Ontario had to close down its fields to all visitors following a viral image that led to a massive increase in foot traffic of people shooting pictures of its sunflower fields. In late July, the farm was open to everyone, with the owners charging an entry fee of $7.50 to people who wanted to visit the brightly colored flowers. At first, the crowds were manageable, but by July 28th, everything had changed. After pictures of the farm went viral, an estimated 7,000 cars lined up on the roads leading to the farm.

The next time someone blasts you for taking pictures of yourself, tell them you’re supporting local businesses. Save a farm — take a selfie.

Talk to me

Send me tips, comments, questions, and weekend plans: casey@theverge.com.

Fake news evolved into fake events, and the consequences are scary

Some days, it seems like any number of topics might lead The Interface. Other days, nearly every major outlet in our orbit writes a version of the same story. Thursday was one of the latter: Facebook’s removal of probably-Russian disinformation has tripped up scores of real-life American activists, causing legitimate protests to be removed from the service, and we’re only beginning to consider the implications.

The activists were caught up in Facebook’s announcement earlier this week that it had removed 32 pages, with more than 290,000 followers, after discovering that they were part of a secret campaign to influence American politics. Facebook reported at the time that these accounts were harder to find than the Russian agents of the 2016 election campaign. The people who created them took creative steps to make their accounts look authentic. It’s one reason why Facebook can’t say definitively that the current influence campaign is Russian in origin, though there are strong signals that it is.

One way fake accounts can look authentic is by associating with real ones. According to activists in the above-linked stories, that’s just what happened here. When Facebook discovered the subterfuge, it removed public events created by the fakers, even though thousands of Americans had registered to attend.

You can understand why Facebook would remove events and posts that had been created as part of a mind-warping influence campaign — and you can also probably understand why protesters are so upset. Here’s Tony Romm, Elizabeth Dwoskin and Eli Rosenberg in the Post:

Facebook has “delegitimized our whole event — and all the work that folks across the D.C. area have put a lot of time and effort into,” said Caleb-Michael Files, an organizer of the March to Confront White Supremacy, a group that was organized after the Charlottesville protests, and a co-host of the counterprotest event page. He said he was much angrier at the social network than at Russia. “Russians might have been there, but Russians are not creating and invoking these feelings. These are real feelings, not Internet-created feelings.”


(Image: Two of the events removed by Facebook during its investigation)

At TechCrunch, Taylor Hatmaker has the story of Andrew Batcher, a Washington-based activist who’s part of an anti-hate group called Shut It Down DC. Batcher’s group became a co-host of a planned protest of the sequel to last year’s deadly Unite the Right rally, which is scheduled for later this month. The event was created by someone who was hiding their identity, but Batcher’s group filled it with legitimate posts:

“When we started organizing we talked about making a Facebook page and saw that this already existed,” Batcher said. “It happens pretty regularly in DC, knowing how many major events take place here.

“We asked to be made co-hosts of the event and we put our stuff up on it basically,” Batcher said. That included video calls to action, photos and other content, including the event description. “Everything that was taken down was ours.”

As a strategy for sowing chaos, fake events appear to be every bit the equal of fake news. As Sam Woolley, director of digital intelligence at the think tank Institute for the Future, asked the Journal: “What’s real grass-roots activity versus fake grass-roots activity?”

Five months ago, Charlie Warzel wrote of the threat of an Infocalypse: a moment when fact can no longer reliably be separated from fiction. A world in which every protest comes under suspicion of having been organized by shadowy, unseen forces would seem to herald the arrival of such a moment. As Kevin Roose put it in the Times:

A side effect of the disinformation campaigns is that they make social media as a whole seem inherently untrustworthy, and give fodder to those who want to cast doubt on the legitimacy of authentic movements. Already, some partisans have adopted the tactic of sowing doubt about internet-based movements by painting their opponents as Russian trolls or agents of a foreign-influence campaign.

This type of suspicion appears likely to grow, as influence campaigns get harder and harder to distinguish from authentic activity.

For activists, there are clear lessons to be learned: Be careful whose online protests you promote. Insist on a video chat with that suspiciously eager new protester who wants you to be an administrator on their page. When they tell you they’re an American, ask to see the receipts.

For Facebook, the evolution of events into a major new attack surface has generated another thicket of difficult choices. Remove events and their related posts too aggressively and you’re stifling the speech you have promised to protect; be too lax in your enforcement and invite regulation and the continued decline of democracy. And whichever way you lean on a given day, loud voices will be there to tell you that you’re doing it all completely wrong.

Fake news can sow division and make you doubt the legitimacy of the articles you’re reading. Fake events go a step further, making you doubt the motives of everyone around you. They distract you from your objective and sap your energy. As a weapon of disinformation, they can be devilishly effective.

I started writing a daily newsletter last year in part because I wanted to see how disinformation would evolve for this year’s midterm elections. The emergence of fake events as the new fake news represents a significant new mutation. And it’s not clear that any of us are prepared for what comes next.

Democracy

Russia’s Other Troll Team

The Digital Forensics Research Lab does a close reading of special counsel Robert Mueller’s indictments of Russian agents and concludes that there’s a second group waging information warfare on social networks beyond the Kremlin’s famous Internet Research Agency:

The GRU campaign appears to have had two main goals: to mobilize American opinions, especially African-American opinions, against Clinton, and to spread propaganda which served Russian military interests.

It was much smaller and more focused than the Internet Research Agency operation. It seems to have worked much more closely with the hacking units, although its ability to amplify their leaks was limited. Above all, unlike the troll farm, it was conducted by serving officers in Russian military intelligence.

CEO of Twitter Jack Dorsey On Shadow Banning Allegations: “It’s Not Acceptable For Us To Create A Culture Like That”

Jack Dorsey sat down with Fox News Radio’s Guy Benson to address the ongoing, bad-faith allegations that Twitter is “shadow banning” conservatives.

DORSEY: The net of this is we need to do a much better job at explaining how our algorithms work. Ideally opening them up so that people can actually see how they work. This is not easy for anyone to do. In fact there’s a whole field of research in AI called ‘explainability’ that is trying to understand how to make algorithms explain how they make decisions in this criteria. We are subscribed to that research. We’re making sure that we can help lead it and fund it, but we’re a far way off. So in the meantime we just need to make sure that we’re pushing ourselves to explain exactly how these things work. How we’re making decisions. Where we need to make decisions as humans vs where the algorithms make decisions based on behaviors and signals.

Scoop: GOP House leader says Twitter CEO Jack Dorsey should testify

The GOP House leader wants to hold yet another hearing over “allegations that the platform limits the reach of some conservative accounts.” At this point, it’s hard to call these hearings anything other than an intimidation campaign against the social networks.

Google Developing News App for China

Wayne Ma and Juro Osawa have the scoop on a new, heavily censored “news” app that Google is developing. I am extremely interested in how Google is going to message its headlong rush into the Chinese market, where it will routinely be asked to aid and abet an authoritarian regime, often in the service of quashing dissent:

Google has been working on the app since last year and had been meeting with Chinese regulators to discuss the project, the people said. It is also preparing a mobile app for internet search in China that will comply with local censorship laws, an effort first reported Wednesday by The Intercept. The projects are part of an initiative code-named Dragonfly that marks a reversal for Google, which shut down its search engine in China eight years ago in dramatic fashion due to a growing crackdown on internet content by government authorities.

Elsewhere

Instagram CEO Kevin Systrom on winning the Stories war

Instagram’s clone of Snapchat stories turned 2 on Wednesday, and CEO Kevin Systrom took a short victory lap in the press. I really enjoy talking to Systrom — he’s wickedly smart, a very good listener, and makes no effort to hide his competitive streak. Sadly, our time together ended before he could make much news, but he did respond to last week’s investor freakout over the underperformance of stories advertising:

“Every brand-new format never monetizes as well as established formats,” he says. “That was true for online advertising for a long time. That was true of mobile advertising. There’s a certain maturity that happens over time, both with advertisers learning how to utilize the format, and we as a company are learning how to optimize the format so it delivers the most value to advertisers.”

The popular Musical.ly app has been rebranded as TikTok

Musical.ly was a big brand, with more than 200 million registered users around the world. The Chinese company ByteDance bought it last year and, in a head-scratcher, is eliminating the Musical.ly brand and collapsing it into a larger app it owns called TikTok. It seems risky and wasteful, but I know basically nothing about the Chinese social-media market. (Educate me!)

PSA: Automatic cross-posting of tweets to Facebook no longer works as of today

I can’t imagine anyone ever particularly enjoyed seeing tweets automatically cross-posted to Facebook. “Users will instead have to copy a tweet’s URL if they want to share a tweet to Facebook going forward,” Sarah Perez reports. Don’t do this either!

Facebook kills automatic WordPress publishing to Profiles

Speaking of automatic posting, you can no longer automatically publish your WordPress posts to your Facebook profile. If you were doing this and it was working for you, please DM me.

The Next Step in our Journey to Help Local News Publishers

Facebook announced $4.5 million in new money for programs that support news publishers. Most of it will go toward teaching local publishers how to build digital subscription businesses. They’ll take it!

Launches

Facebook is making its first serious move to monetize WhatsApp

Due to an oversight, I forgot to include this important item from my colleague Shannon Liao in yesterday’s newsletter. Businesses can now pay to message users, opening up WhatsApp’s first real potential source of revenue — assuming anyone wants to interact with businesses in this way. Given the way that’s played out on Messenger, I’m skeptical. But I was also skeptical that WhatsApp’s brutalist take on Snapchat stories would succeed, and it’s now the most-used such format in the world. So I would believe anything about WhatsApp’s revenue opportunities, basically.

Facebook finally launches playable ads, improves game monetization

Mobile game advertisers can now create ads that you can play without first installing the game. Useful!

Takes

The Expensive Education of Mark Zuckerberg and Silicon Valley

Kara Swisher (disclosure: she is a friend and a personal hero, and last night we went to see Mission Impossible: Fallout together) makes her debut as a New York Times opinion columnist with a timely reflection on how social networks are belatedly waking up to the fact that they are the new “arms dealers”:

They have weaponized social media. They have weaponized the First Amendment. They have weaponized civic discourse. And they have weaponized, most of all, politics.

Which is why malevolent actors continue to game the platforms and why there’s still no real solution in sight anytime soon, because they were built to work exactly this way. And ever since, they have grown like some very pernicious kudzu and overtaken their inventors’ best efforts at control. Simply put, the inventors became overwhelmed by their own creations, which led to what I can only describe as a casual negligence, which led to where we are now.

And finally …

Comcast wants you to use your phone less, then switch to its network

Part of me feels like I should have been prepared for the Comcast corporation (disclosure: an investor in Vox Media) to see the Time Well Spent movement as a content marketing opportunity, and create a web page promoting a “phone cleanse” that also serves as an advertisement for buying additional Comcast services.

Reader: I was not prepared:

Also, it should be noted that there’s a hard copy book of the phone cleanse. I don’t know why. But it exists, and you can tweet Xfinity Mobile to get one shipped to you for free.

The only cleanse I’ll be doing after reading this web page is a shower.

Talk to me

Send me questions, tips, encouragement, and fake invitations: casey@theverge.com

Congress just showed us what comprehensive regulation of Facebook would look like

As Congress has paid increasing attention to social networks over the past year, a recurring theme in the coverage has been how little lawmakers appear to understand them. The first Facebook hearing, which was tied to Congress’ investigation of Russian interference in the 2016 election, played as pure theater. At a subsequent hearing, senators at least asked better questions.

But despite several more go-rounds, both here and abroad, it has been unclear what lawmakers intend to do about any of it. Mark Zuckerberg is on the record saying he supports certain kinds of regulation. But so far, it hasn’t been clear what aggressive regulation of Facebook would even look like.

It’s now much clearer — or rather, it would be clear, in a world in which Democrats had the power to regulate. On Monday, Axios’ David McCabe published a fascinating policy paper from the office of Sen. Mark Warner. The paper outlines a comprehensive regulatory regime that would touch virtually every aspect of social networks.

The paper is notably well-versed both on the dangers posed by misinformation and the trade-offs that come with increased regulation, especially to privacy and free speech. It’s less a polemic than a comprehensive starting point for discussion — and as talk of regulation spreads around the world, I imagine it will prove influential.

So what exactly do Warner and his staff propose? The ideas are designed to address three broad categories: misinformation, disinformation, and the exploitation of these technologies; privacy and data protection; and competition. (On the last point, the good news for tech platforms is that even Warner isn’t calling for them to be broken up. The paper does not, in other words, challenge the idea that social networks of this size should exist.)

Here are some highlights of the ideas presented.

Misinformation, disinformation, and the exploitation of technology. Ideas here include requiring networks to label automated bots as such; requiring platforms to verify identities, despite the significant consequences to free speech; legally requiring platforms to make regular disclosures about how many fake accounts they’ve deleted; ending Section 230 protections for defamation; legally requiring large platforms to create APIs for academic research; spending more money to fight cyber threats from Russia and other state-level actors.

Privacy and data protection. Create a US version of the GDPR; designate platforms as “information fiduciaries” with the legal responsibility of protecting our data; empower the Federal Trade Commission to make rules around data privacy; create a legislative ban on dark patterns that trick users into accepting terms and conditions without reading them; allow the government to audit corporate algorithms.

Competition. Require tech companies to continuously disclose to consumers how their data is being used; require social network data to be made portable; require social networks to be interoperable; designate certain products as “essential facilities” and demand that third parties get fair access to them.

It’s a lot to take in — and a lot of fun to consider! I recommend reading the entire report, and discussing it with your children over dinner.

In America, the report remains mostly a pipe dream. But around the world, similar ideas are gaining momentum. Over the weekend, a British parliamentary committee recommended imposing much stricter guidelines on social networks. Here’s David D. Kirkpatrick in The New York Times:

Among other proposals, the committee called for the regulators who oversee television and radio to set standards for accuracy and impartiality on social media sites, for the establishment of a “working group of experts” to rate the credibility of websites or accounts “so that people can see at first glance the level of verification,” and for a new tax on internet companies that would pay for expanded oversight.

To address influence campaigns, the committee called for the mandatory public disclosure of the sponsors behind any online political advertisement or paid communication, as required in traditional news media outlets — an idea that was proposed in Congress as well.

These proposals remain far from becoming law — but perhaps not as far as tech platforms would wish.

Democracy

Here’s why Facebook suspended Alex Jones but not Infowars

The rules governing discipline on Facebook are quite Byzantine, and Kurt Wagner does a good job here laying out what exactly is going on with Alex Jones and the various pages to which he posts. I’ll repeat my point from Friday: it’s time to rethink some of this stuff.

Alex Jones’ personal user profile is an admin for a number of Infowars-related Pages, which means he has permission to post or share videos to those Pages. Each time Jones shares a post that violates Facebook’s policies to one of those Pages, both Jones’ user profile and the Page receive some kind of “strike” against their record — essentially, a warning from Facebook to take the post down and cut it out.

But the reason Jones was suspended, but his Pages are still up, is that Jones posted the same bad content to multiple pages, drawing multiple strikes against his record. So if Jones shared three bad videos to three different pages, for example, he would receive nine total strikes, whereas each Page would receive just three.
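
If it helps to see the arithmetic, here is a minimal sketch of the strike accounting Wagner describes. The function and names are hypothetical illustrations, not Facebook’s actual system:

```python
# Hypothetical sketch of the strike accounting described above (not Facebook's
# actual implementation). A violating post shared to several Pages earns the
# sharing profile one strike per Page, while each Page only accrues strikes
# for the violating posts that land on it.

from collections import Counter

def record_violations(shared_posts):
    """shared_posts: one (user, page) pair per violating post shared to a Page."""
    user_strikes = Counter()
    page_strikes = Counter()
    for user, page in shared_posts:
        user_strikes[user] += 1   # the profile accrues a strike for every share
        page_strikes[page] += 1   # the Page accrues a strike only for its own post
    return user_strikes, page_strikes

# Three bad videos shared to each of three different Pages:
shares = [("jones", page) for page in ("page_a", "page_b", "page_c") for _ in range(3)]
users, pages = record_violations(shares)
print(users["jones"])   # 9 strikes against the profile
print(pages["page_a"])  # 3 strikes against each individual Page
```

Under that kind of accounting, the profile crosses a suspension threshold long before any single Page does, which is the asymmetry Wagner is describing.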

YouTube search results for A-list celebrities hijacked by conspiracy theorists

Here’s a good example of how social networks are vulnerable to asymmetries of passion. A surge of people suddenly becomes interested in Tom Hanks, leading videos about him to rise to the top of search results. What YouTube doesn’t know, because computers are dumb, is that the videos are all baseless accusations against Hanks, rooted in conspiracy theories. YouTube has lately gotten better at policing search results in the wake of mass-casualty events. It needs to become similarly adept at monitoring the rise of conspiracies on other sites so it can address them as they start popping up as videos. (One advantage YouTube has is that it can often take conspiracies longer to migrate there, because video usually takes longer to produce than text.)

The FBI Set Up A Task Force To Counter Russian Trolls. So Far, It’s Been Silent.

The FBI set up the Foreign Influence Task Force to monitor Russian trolls. We have basically no idea what, if anything, it is doing. Sources tell Kevin Collier it isn’t doing much.

Elsewhere

Facebook’s Next Privacy Challenge: Less Data to Target Ads

For a long time, advertisers on Facebook could buy or rent data from various brokers to improve their targeting capabilities. Then Cambridge Analytica came along, and Facebook announced it would kill off that tool, known as Partner Categories. It is currently dying slowly around the world, and will be completely dead by October 1st, and now some advertisers are worried their ads will be less effective. Please keep these advertisers in your thoughts and prayers during this difficult time!

It’s Rubens vs. Facebook in fight over artistic nudity

Facebook is blocking Belgian museums from promoting the paintings of the old master Peter Paul Rubens, apparently because they contain bare breasts and buttocks. These museums may need to take a different approach here. Have they considered denying the Holocaust?

Tech Bloodletting Nears $300 Billion Since Facebook Reported

The big five tech companies saw their stocks fall another 2.5 percent, and are down 9 percent in the past three days, amid the fears kicked off by Facebook’s most recent earnings report. The S&P 500 lost 1.4 percent of its value over the same period, Elena Popina reports.

Twitter is funding college professors to audit its platform for toxicity

Twitter has announced the researchers it chose to study the “health” of conversations on the platform, and how they might be improved. My colleague Shannon Liao:

The team of researchers will be led by Dr. Rebekah Tromble, an assistant professor at Leiden University in the Netherlands who focuses on politics in social media. They will investigate how toxic speech is created on Twitter. The idea that the researchers are working off of is from previous Leiden research, which found that when a group of like-minded people gathers to discuss similar perspectives, they’re encouraged to hate those not engaged in the same discussion, thus creating an echo chamber. The researchers will see how many users exist in these echo chambers and how many users are actually talking to others with diverse perspectives.

The team will also create algorithms to track whether conversations on Twitter are “uncivil” or whether they veer into “intolerant” territory that could amount to hate speech. Uncivil conversations can sometimes be problematic, but they’re also good for political dialogue, while hate speech is “inherently threatening to democracy,” according to Twitter. The implication is that once the researchers successfully identify the differences between these two kinds of conversations, Twitter will become better equipped to target hate speech, while keeping uncivil discourse in check.

Twitter says that it will begin suspending repeatedly abusive Periscope commenters

Twitter is going to start suspending abusive Periscope commenters on August 10th. Until then — go crazy, folks!

Launches

WhatsApp’s new group video calling feature is now live

Today in features you assumed WhatsApp already had: multi-person video chat.

Domino’s Is Bringing Its Pizzas Into Augmented Reality With a National Snapchat Campaign

Several months ago my editor told me about a new augmented reality startup whose gimmick was that they would make lifelike AR representations of food, so you could open their app at a restaurant and picture what the food would look like if it was right in front of you. My editor and I thought this was an incredibly funny case of a technology in search of a problem to solve, and we laughed about it until we cried.

Anyway this startup partnered with Snap over the weekend to show you what Domino’s pizza looks like inside Snapchat.

Takes

Facebook Lenses

Ben Thompson says Facebook’s fundamental business is strong, and also that it has no real competition, so Wall Street should relax. Thompson is read obsessively inside Facebook, and I imagine this (long!) essay will calm a lot of nerves.

The Cost of Policing Facebook and Twitter Is Spooking Wall St. It Shouldn’t.

Spending many billions of dollars to improve the health of the platforms might make Facebook and Twitter even more valuable than they are today, says Peter Eavis in one of those why-does-anyone-even-have-to-say-this-out-loud kinds of takes:

But Facebook’s and Twitter’s growth, financial success and stock performance almost depend on maintaining networks that feel safe to users. Explaining the investments in security, Jack Dorsey, Twitter’s chief executive, said on Friday, “We do believe, ultimately, over time, that this will help our growth story and encourage more people to stay with Twitter and also tell their friends, family and colleagues about all the value they’re getting out of it.”

We’re Lucky Mark Zuckerberg Is in Charge

Mark Zuckerberg’s critics should be quiet because they have not had to deal with what Zuckerberg has had to deal with, says former Facebook employee and current Wired audience development manager Alex Whitcomb:

Mark Zuckerberg is imperfect, like all of us, and what’s easy is berating him from the outside of the ring. But no one knows that ring—no one in history has ever even seen it. And while he deserves criticism, accountability, and honesty, I for one am glad he’s the one that’s in there.

And finally …

John Oliver fixes Facebook’s apology ad to remind you that the company still doesn’t care about you

I know that “John Oliver destroys” is a fairly tired genre by now. But he’s sharp — if a bit over the top — on Facebook’s rather smarmy television ads of late. The segment zeroes in on a theme that has dominated here in the past few weeks: that there’s a hypocrisy in telling people how sorry you are about fake news at the same time you’re defending the rights of Holocaust deniers.

Is it just me or did these final items used to be a lot funnier????

Talk to me

Send tips, corrections, and regulations: casey@theverge.com