A conservative Facebook would probably fail — but Facebook should worry about it anyway

In the days after the 2016 election, Gizmodo published a much-discussed story that predicted our current moment. The story, written by Michael F. Nuñez, alleged that Facebook had built a tool to reduce the spread of fake news — and then abandoned the tool over fears it would disproportionately affect right-wing news sites. Nuñez reported that Facebook had undertaken a high-level overview of its products in an effort to identify ways in which they might be biased against conservatives.

Facebook denied most of the substance of the story. But subsequent reporting did find concerns within the company that right-wing users might someday turn on the service en masse. (See the now-famous May 2016 meeting between Zuckerberg and 16 prominent conservatives.) As I wrote in 2016:

When an earlier generation of media companies acted as gatekeepers against false and misleading stories, they created a market for alternative media. That led to the rise of conservative talk radio, Fox News, and (most recently) the alt-right. Facebook’s worst nightmare is that conservatives stop seeing it as a neutral platform, and create a fair-and-balanced social network of their own.

On Thursday, that nightmare gained a bit of momentum. Axios’ Mike Allen spoke to Donald Trump Jr., who said he was in the market for a “conservative, Facebook-like social network,” which he would heavily promote to his millions of followers:

When I asked him if his father’s 2020 campaign might build such a platform, Don Jr. said: “I’d love to do it. But what I would prefer is, take one of the two Silicon Valley conservatives and let them start it. And then I’d help promote the platform and be all over that.”

Scary thought: Imagine tribal news delivered via tribal pipes. And, as one mischievous Trump adviser told us, imagine the president moving his Twitter show to that network.

Of course, merely wishing for a conservative Facebook won’t make it so. There’s already an alternative right-wing Twitter named Gab, and after two years it seems barely afloat. It’s also not clear an ideologically pure social network would even be that much fun to use; as Joe Weisenthal put it: “a right-wing only social network will give users no way to trigger the libs, and so what’s the point? People will just get bored.”

Moreover, the existing Facebook has been an incredible boon to the conservative movement, as NewsWhip’s rankings of the most popular publishers consistently attest. (Maya Kosoff: “the fact of the matter is that a legitimate ‘Facebook for conservatives’ would look . . . a lot like Facebook.”)

At the same time, there appears to be at least some level of risk that this drumbeat of bias complaints is doing lasting damage to Facebook’s image. On Wednesday the Media Research Center, a partisan organization devoted to promoting the idea that the media is biased against conservatives, published the results of a poll it sponsored. The sample size was small, just 351 people, but the views are consistent with the ideas that Republican lawmakers have been floating in hearings lately. And it doesn’t help that trust in Facebook has been declining generally:

Of conservatives who have used Facebook, 32.3% say they have either left (7.5%), or are considering leaving (24.8%), Facebook due to its censorship of conservative views.

What’s more, nearly two-thirds (66.9%) of conservative likely voters have less trust in Facebook than they did a year ago. Likewise, nearly two-thirds (66.1%) do not trust Facebook to treat all political views equally and 64.6% believe sites like Facebook are intentionally censoring conservatives and conservative ideas.

A frustration I have with this poll — and with the whole debate, really — is that the terms are so imprecise. What would it mean for Facebook to “treat all political views equally”? Facebook was built to be personalized to the individual; if you want to see nothing but Fox News in your feed, you very easily can. Conversely, were Facebook to inject an equal number of articles from CNN or the New York Times into your heavily curated, Fox-only feed, you would likely see Facebook as more biased than it is today.

But it’s hard to fight a fundamentally emotional argument with reason. (That sentence also doubles as my preview of next week’s Congressional hearings.) Mainstream press outlets have largely been unable to convince conservatives that they are practicing journalism in good faith. And I suspect Facebook will have just as hard a time.

It seems almost quaint now to think about the months before the election, when Facebook scoured its products for evidence of actual bias against conservatives. It seems clear now that no matter what it found, or did, it would be facing the same fury it faces today. And given the bad-faith arguments it rests on top of, it’s not at all clear what Facebook can do about it.


Trump Says Google, Facebook, Amazon May Be ‘Antitrust Situation’

President Trump told Bloomberg that Google, Facebook, and Amazon might be in a “very antitrust situation.” One of the most dreaded of all antitrust situations.

Jack Dorsey Gets His Day in Washington

Selina Wang says Twitter’s CEO will face the harshest scrutiny of anyone when he testifies before Congress (for the first time!) next week, because Twitter has fewer resources at its disposal and its decision-making is often opaque and incoherent:

Twitter Inc. is in a more precarious position than its larger competitors, though. Dorsey’s company can’t match their user bases or cash reserves, and the modest user growth he’s fostered over the past two years could vanish if Twitter starts losing conservatives over concerns, warranted or not, about bans and “shadow bans” (in which a user’s content is invisible to everyone but ­themselves—a ­practice Twitter says it doesn’t engage in). On the other side, the service could lose liberals who won’t participate on a site they perceive to be fostering abusive speech or bending rules to accom­modate conservatives.

The latest flashpoint is a decision Apple, Facebook, Spotify, and Google-owned YouTube made in early August to purge content from Alex Jones, the shock-radio host and creator of the website InfoWars, over posts and videos that violated their hate-speech and harassment policies. Twitter has also been under mounting pressure to ban Jones, notably for spreading false assertions that the 2012 mass shooting at Sandy Hook Elementary School in Connecticut was a hoax. After his competitors’ decision, Dorsey tweeted that Jones “hasn’t violated our rules” and implied that other platforms had caved to political pressure.

Senators Criticize Google CEO for Declining to Testify

Lawmakers are upset that Sundar Pichai isn’t going to testify next week alongside Sheryl Sandberg and Jack Dorsey, Steven T. Dennis reports:

Senator Angus King, a Maine independent who caucuses with Democrats, said Google and the other companies should all send their CEOs.

“This is the United States Senate, this is an important issue, and we deserve to hear from the decision-makers, not the people who carry out the decisions,” King said.

Hatch asks FTC to investigate Google’s market dominance

Sen. Orrin Hatch (R-Utah) has asked the Federal Trade Commission to re-open the investigation into whether Google’s search and digital advertising practices are anti-competitive, Harper Neidig reports. Note that this request is based in reality — unlike, say, claims that Google search results are “rigged” against the president.

Hatch sent a letter to FTC Chairman Joseph Simons expressing concern about reports in recent years ranging from Google restricting competing advertising services to collecting data from users’ Gmail inbox contents.

“Needless to say, I found these reports disquieting,” Hatch wrote. “Although these reports concern different aspects of Google’s business, many relate to the company’s dominant position in search and accumulating vast amounts of personal data.”

Trolls for Sale in the World’s Social Media Capital

Jonathan Corpus Ong examines how Rodrigo Duterte’s cultivation of a troll army prefigured Russia’s 2016 influence campaign:

Duterte’s campaign machinery strategically focused on assembling bloggers, digital influencers, and fake account operators to tap into the public’s deep-seated anger—and convert these emotions into votes on election day. This was initially a cost-saving maneuver for an “outsider” candidate lacking extensive political resources, but it worked to great effect. This tactic owed much of its success to the fact that the Philippines is the world’s “social media capital,” with the average Filipino spending more time on social media than any other nationality.

WhatsApp kicks off radio campaigns in India to tackle fake news

Facebook already did a print campaign to warn against fake news; now there’s a radio campaign to go along with it, reports wire agency PTI. The initial ads are in Hindi but will expand to include other languages:

“The radio campaign will air starting today across 46 Hindi-speaking stations of All India Radio (AIR) across Bihar, Jharkhand, Madhya Pradesh, Chhattisgarh, Rajasthan, Uttar Pradesh and Uttarakhand,” a WhatsApp spokesperson told PTI. […]

These campaigns advise users to verify the authenticity of messages before forwarding them and to report content they find inflammatory. They also caution users to be careful about forwarding messages that contain misinformation, warning that doing so could have serious repercussions.


The People With Power at Facebook

Sarah Kuranda does us all a great service by offering an up-to-date org chart. Sarah, may you continue to update this in perpetuity. I bookmarked it immediately, and I’m sure I’m not alone.

How Misinfodemics Spread Disease

Misinformation influences more than politics. It’s actually making us sicker, Nat Gyenes and An Xiao Mina report: “Researchers are finding more and more that online misinformation fuels the spread of diseases such as tooth decay, Ebola, and measles.”

Recent research found that Twitter bots were sharing content that contributed to positive sentiments about e-cigarettes. In West Africa, online health misinformation added to the Ebola death toll. In New South Wales, Australia, where the spread of conspiracy theories about water fluoridation run rampant, children suffering from tooth decay are hospitalized for mass extractions at higher rates than in regions where water fluoridation exists. Over the past several weeks, new cases of measles—which the Centers for Disease Control and Prevention declared eliminated from the United States in 2000—have emerged in places such as Portland, Boston, Chicago, and Michigan; researchers worry that the reemergence of preventable diseases such as this one is related to a drop in immunization rates due to declining trust in vaccines, which is in turn tied to misleading content encountered on the internet. With new tools and technologies now available to help identify where and how health misinformation spreads, evidence is building that the health misinformation we encounter online can motivate decisions and behaviors that actually make us more susceptible to disease.

Grindr IPO for gay dating app

Grindr essentially invented modern gay dating and has had a dramatic (and too-little-explored) effect on the culture. It’s now preparing to go public, and good lord I can’t wait to read the risk factors on that S-1. Here you have a network where an extraordinary number of high-profile individuals are regularly sexting and exchanging nudes with often-fake profiles around the world, providing a ready source of kompromat to, say, the Chinese government, which almost certainly will take an interest in everything passing through the Kunlun Group’s servers, assuming it hasn’t already????

Johnny Johnny Yes Papa explained by the internet’s best meme creators

Sometimes a meme comes along at the perfect moment, and as soon as you see it, you know why it has captured the internet’s imagination. Johnny Johnny Yes Papa is not one of those memes. In fact Julia Alexander had to interview an army of experts to even begin to understand why a simple nursery rhyme (that does not rhyme!!!) has racked up billions of views across all manner of disturbing YouTube channels. This was the explanation that resonated most with me:

Creator behind Welcome to My Meme Page (300,000 Facebook followers): I think Demons have descended upon our world. We are thrashing under the fever of a Great Sickness, yet we do not know it.


Twitter will begin labeling political ads about issues such as immigration

Twitter is adopting Facebook’s rules requiring political advertisers to verify their identities, while carving out an exemption for news reports. (Facebook has not created such an exemption, and many outlets are still struggling to understand their responsibilities, as this piece in India’s Caravan indicates.) The impact of this will be fascinating to watch — Russia’s RT network tweets news, but it’s also funded by the Kremlin — making it arguably just political advertising by another name. Twitter has made journalists happy here, but it also may have just created a loophole for influence campaigns to exploit. Here’s Tony Romm:

Twitter said Thursday that it would begin requiring some organizations that purchase political ads on topics such as abortion, health-care reform and immigration to disclose more information about themselves to users, part of the tech giant’s attempt to thwart bad actors, including Russia, from spreading propaganda ahead of the 2018 election.

The new policy targets promoted tweets that mention candidates or advocate on “legislative issues of national importance,” Twitter executives said in a blog post. To purchase these ads, individuals and groups must verify their identities. If approved, their ads then would be specially labeled in users’ timelines and preserved online for the public to view. And promoted tweets, and the accounts behind them, would be required to disclose the name of the actual organization that purchased the ad in the first place.


Why Google Doesn’t Rank Right-Wing Outlets Highly

Google News ranks right-wing outlets lower than mainstream outlets because they don’t do much reporting or adhere to basic journalistic standards, says Alexis Madrigal:

But even if the methodology is flawed, Google applies it equally to all the media organizations in its news universe. It might not be a “free” marketplace of ideas, but it is a marketplace with fairly well-known and nonpartisan rules. If right-wing sites aren’t winning there, maybe Google isn’t the problem.

And finally …

Snapchat, Weather Channel, and others hit with anti-Semitic vandalism

Operate a walled garden and journalists will rail against you for your greed. Operate a more open system and journalists will constantly ask, how did you let this happen???

Today we had one of the latter cases:

New Yorkers who opened up Snapchat, The Weather Channel, CitiBike, or a number of other apps and services this morning found that the name of their city had been swapped with anti-Semitic vandalism, replacing it with “Jewtropolis.”

The offensive change appears to have been a result of edits to Mapbox, a widely used service that powers the maps inside of all these apps and more. The change was also spotted inside the app for StreetEasy and on The New York Times’ map of 2016 election results. Mapbox also lists Vice, Vox (our sibling site), and the FCC as groups that have made use of its maps, however, the vandalism didn’t show up on those sites.

I can’t help feeling that, were there not so many Nazis walking around these days, this would seem more like a junior-high prank than a genocidal influence campaign. But, you know!

Talk to me

Send tips, comments, questions, and suggested features for partisan social networks: casey@theverge.com.

Why Facebook banned Alex Jones — and Twitter didn’t

On one hand, we spent maybe too much time this week on the question of whether one person should lose access to his social media accounts. On the other hand, it’s a question that illuminates some of the central tensions that led me to start this newsletter. How can social media be used to do harm? Can tech companies effectively rein in their worst users? Also, what the hell is Twitter’s deal?

Will Oremus tries to answer the latter question with some reporting on what people inside Twitter are saying about Alex Jones. He offers a handful of theories on the company’s paralyzed, contradictory stances on Infowars. First, there’s Twitter’s bias toward inaction on almost all things; second, there’s its terror of being called partisan by conservatives or by Congress. There’s also the possibility that Twitter will ban Jones, and is still finalizing its public case for doing so.

Finally, Oremus concludes, is the possibility that there’s currently a big internal fight about Jones that hasn’t been resolved yet. This is my own theory, and here’s a smidge of evidence that it’s true. Yesterday I asked the company for comment on Oliver Darcy’s damning report showing that, contrary to Twitter’s public statements, Jones had repeatedly violated the Twitter rules. The company told me a statement was coming, then never delivered. That’s the sort of thing that happens when a company is still trying to figure out its own position.

Meanwhile in The New York Times, Kevin Roose has more detail on how Mark Zuckerberg made the decision to ban Jones from Facebook.

Mr. Zuckerberg, an engineer by training and temperament, has always preferred narrow process decisions to broad, subjective judgments. His evaluation of Infowars took the form of a series of technical policy questions. They included whether the mass-reporting of Infowars posts constituted coordinated “brigading,” a tactic common in online harassment campaigns. Executives also debated whether Mr. Jones should receive a “strike” for each post containing hate speech (which would lead to removing his pages as well as the individual posts) or a single, collective strike (which would remove the posts, but leave his pages up).

Late Sunday, Apple — which has often tried to stake out moral high ground on contentious debates — removed Infowars podcasts from iTunes. After seeing the news, Mr. Zuckerberg sent a note to his team confirming his own decision: the strikes against Infowars and Mr. Jones would count individually, and the pages would come down. The announcement arrived at 3 a.m. Pacific time.

Much attention has focused on how Facebook moved forward with a ban only after Apple did the same thing. To me, the preceding paragraph is just as noteworthy: it shows the company was already building its case for doing so when it kicked him off the platform. That speaks to something I said Tuesday: that the platforms all seemed to be moving independently to the same conclusion, reinforcing one another’s decisions along the way.

It made me think of a point Charlie Warzel made earlier this month:

A few months ago, during the rapid fallout of Facebook’s Cambridge Analytica scandal, a smart person mentioned to me the first rule of crisis PR. The idea is to quickly figure out what the ultimate end game of a disaster will be, and then cut all the bullshit and just jump straight to doing whatever uncomfortable thing you’ll inevitably have to do under duress days, weeks, or months later. I’ve been thinking a lot about that maxim the past two weeks as the platforms make declarations about Infowars as a legitimate publisher, followed by some hedging, then a bit of backtracking, some light finger-wagging, a short timeout, and finally an ominous suggestion that the publisher is on thin ice. All the statements, interviews, and bad press seems to be careening toward a particular outcome for Facebook, YouTube, and Infowars, and it seems as if everyone but the platforms knows it.

Facebook and YouTube have now careened all the way to that “particular outcome” — banning Jones — while Twitter is still contemplating the end game. I suspect the company will arrive at the same place its peers did eventually. The only question is how much self-inflicted damage it will sustain in the meantime.


Facebook pages with large U.S. following to require more authorization

It’s about to get a lot harder to run your “We Love Texas” Facebook page from Moscow:

The new measures will require administrators of Facebook pages to secure their account with two-factor authentication and confirm their primary home location.

Facebook will also add a section that shows the primary country from where a page is being managed.

Here’s How Russia’s Twitter Trolls Reacted To Charlottesville

This weekend is the anniversary of the deadly Unite the Right rally in Charlottesville. Peter Aldhous examines how Russian trolls tried to amp up the conflict on Twitter as it happened. Among other things, they heavily promoted the usage of the word “Antifa,” which then caught on.

By the summer of 2017, the Left Trolls were mostly a spent force. But that’s when nearly 130 Right Trolls, which posed as Trump supporters, had their big surge, their output rising to more than 10,000 tweets a day until suddenly dropping away after Aug. 18 — presumably when Twitter banned many of the accounts.

How Big Is the Alt Right? Inside My Futile Quest to Count

Does the white nationalist movement sometimes called “the alt-right,” which can feel ubiquitous on social media, get too much attention? Emma Grey Ellis tries to count up its members:

In Charlottesville, the best estimates put rally participant numbers between 500 and 600 people. For context, that’s five times as big as any far-right rally in the last decade, but is still only a tiny fraction of what you’d expect from their (inflated) digital footprint.

It’s also two hundred times smaller than 2017’s March for Science, and a thousand times smaller than 2017’s Women’s March. All signs point to an even lower turnout for Unite the Right in DC.

Students In Bangladesh Are Deleting Their Posts About The Protests Because They’re Scared Of Reprisals

A social media crackdown in Bangladesh is what actual censorship of free speech looks like, in case anyone is wondering:

Students who had been part of the demonstrations told BuzzFeed News they were terrified of arrest following the protests and were deleting any messages of support they’d posted online, while a photojournalist who was badly injured covering the demonstrations described the situation as chaos and said anyone with a camera had become a target.

Much of this fear rests on a loosely worded law passed in Bangladesh 12 years ago — widely referred to as Section 57 — that allows for the prosecution of anyone posting material online that the authorities determine could “deprave and corrupt” its audience, cause a “deterioration in law and order,” or prejudice “the image of the state or a person.”


The Story Behind The Story That Created A Political Nightmare For Facebook

John Cook, former editor of Gawker Media, posts an odd “defense” of the 2016 Gizmodo story alleging that Facebook “routinely suppressed conservative news.” In it, he acknowledges that the story was framed in an aggressively conspiratorial way so as to draw the attention (and attendant clicks) of the Drudge Report audience. That confirms, rather than debunks, the idea that the story was framed misleadingly to generate maximum outrage among conservatives, despite the fact that conservative news sources continue to draw more engagement than any other ideology on the platform.

Joshua Benton’s Twitter thread on how the Gizmodo story backfired says it all better than I can:

The most engaged publishers on Facebook in July 2018

And while we’re on the subject of “suppressing conservative speech,” here’s the monthly report from NewsWhip:

Fox News retained its top position, with 38.6 million engagements, and increased its lead over second-placed CNN.

Facebook, still on a mission to bring people online, announces Connectivity

Facebook’s broadband and infrastructure projects have been reorganized into something called Facebook Connectivity, Rich Nieva reports. Here’s what project leader Yael Maguire said about the decision to stop building the internet plane Aquila, which crashed on its maiden voyage.

“I don’t think of it as a retreat,” Maguire said when asked about the decision. “If I wear the ‘I’m an engineer’ hat and I love to focus on the things that I build, yeah, maybe it’s a little disappointing what’s happening in the market. But if I take a step back as the person who’s focused on these efforts … it’s fantastic what’s happening globally with companies like Airbus and others who are focused on this as a potential market.”

Facebook is shutting down Friend List Feeds today

Friend lists, which were automated feeds of posts from your coworkers, classmates, and so on, are going away for lack of use.

The local-news crisis is destroying what a divided America desperately needs: Common ground

Relevant data point from Margaret Sullivan for the discussion around whether journalists can lead the charge against misinformation: employment in newspaper newsrooms has declined by almost 50 percent in the past decade.

Facebook’s David Marcus Steps Down From Coinbase’s Board

David Marcus was on the board of directors at the cryptocurrency exchange Coinbase; recently he took over blockchain efforts at Facebook and the potential conflicts led him to quit Coinbase.

HQ Trivia runs first traditional commercial before the game

HQ, which has been slow to monetize, finally started showing ads this week.


L’Oreal adds to Facebook sales push with virtual make-up tests

One thing that AR is already very good at is showing people what makeup will look like on them. L’Oreal bought an AR company this year and plans to roll out shoppable makeup filters on Instagram.


Twitter is wrong about Alex Jones: facts are not enough to combat conspiracy theories

My colleague Laura Hudson examines the idea that journalists can effectively moderate Twitter by countering conspiracy theories with facts. This piece is very sharp and rather depressing!

A growing body of research has demonstrated that the distorted light of modern media does not always lead to illumination. In a 2015 paper, MIT professor of political science Adam Berinsky found that rather than debunking rumors or conspiracy theories, presenting people with facts or corrections sometimes entrenched those ideas further.

Another study by Dartmouth researchers found that “if people counter-argue unwelcome information vigorously enough, they may end up with ‘more attitudinally congruent information in mind than before the debate,’ which in turn leads them to report opinions that are more extreme than they otherwise would have had.”

A 2014 study published by the American Academy of Pediatrics similarly found that public information campaigns about the absence of scientific evidence for a link between autism and vaccinations actually “decreased intent to vaccinate among parents who had the least favorable vaccine attitudes.” When people feel condescended to by the media or told that they are simply rubes being manipulated — even by expert political manipulators — they are more likely to embrace those beliefs even more strongly.

Platforms, Speech And Truth: Policy, Policing And Impossible Choices

Here’s a Jack Dorsey-approved take from Mike Masnick on how Twitter should approach the Alex Jones question, with banning as an absolute last resort. Give people more tools to control what they see, he argues. (Counterpoint: platforms already do! Hate speech is spreading virally anyway, with deadly consequences.)

As for me, I still go back to the solution I’ve been discussing for years: we need to move to a world of protocols instead of platforms, in which transparency rules and (importantly) control is passed down away from the centralized service to the end users. Facebook should open itself up so that end users can decide what content they can see for themselves, rather than making all the decisions in Menlo Park. Ideally, Facebook (and others) should open up so that third party tools can provide their own experiences – and then each person could choose the service or filtering setup that they want. People who want to suck in the firehose, including all the garbage, could do so. Others could choose other filters or other experiences. Move the power down to the ends of the network, which is what the internet was supposed to be good at in the first place. If the giant platforms won’t do that, then people should build more open competitors that will (hell, those should be built anyway).

But, if they were to do that, it lets them get rid of this impossible to solve question of who gets to use their platforms, and moves the control and responsibility out to the end points. I expect that many users would quickly discover that the full firehose is unusable, and would seek alternatives that fit with how they wanted to use the platform. And, yes, that might mean some awful people create filter bubbles of nonsense and hatred, but average people could avoid those cesspools while at the same time those tasked with monitoring those kinds of idiots and their behavior could still do so.

Bots vs. Trolls: How AI Could Clean Up Social Media

Christopher Mims says it’s time to fight bots with bots:

While some attempts to detect social-media accounts of malicious actors rely on content or language filters that terrorists and disinformers have proved capable of confusing, Mr. Alvari’s algorithm looks for accounts that spread content further and faster than expected. Since this is the goal of terrorist recruiters and propagandists alike, the method could be on the front lines of algorithmic filtering across social networks. Humans still need to make the final determination, to avoid false positives.

Algorithms could also be used to identify and disrupt social-media echo chambers, where people increasingly communicate with and witness the behavior of people who align with their own social and political views. The key would be showing users a deliberately more diverse assortment of content.

How journalists should not cover an online conspiracy theory

Whitney Phillips has some words of warning for journalists writing about QAnon and other insane conspiracy hoaxes:

The final question reporters must ask themselves stems from the fact that journalists aren’t just part of the game of media manipulation. They’re the trophy. Consequently, before they publish a word, journalists must seriously consider what role they’ll end up playing in the narrative, and whose work they’ll end up doing as a result.

In the context of the QAnon story, participants’ efforts to pressure, even outright harass, reporters into engaging with the story has been widely interpreted as proof of how seriously participants take the story, and therefore as proof of how worried we all should be.

To Auto-Archive Or To Not Auto-Archive, Twitter Edition

MG Siegler explores the question of whether tweets should archive automatically the way Instagram stories do. (I think it should be an option.)

Still, it feels like having some optionality here with regard to the longevity of public tweets is the right call. I’m fine with leaving the default as “public forever” but maybe some tweets just make more sense for a moment in time… Or maybe some accounts would be happier letting tweets live for a certain amount of time by default. This isn’t an easy thing to think through, so I don’t envy Twitter on this topic.

And finally …

Meet the Poet Laureate of Tinder

Not having any luck on Tinder? Have you considered writing sonnets? Drew, a twentysomething educator living in Florida, did just that, charming his matches with poems that were also acrostics spelling out such Tinder-favorite pickup lines as SEND NUDES and WANNA SMASH. A Reddit post about his work is now one of the most upvoted posts of all time.

One poem actually led to a long-term thing. “I had a six-month relationship start from anonymous poetry shenanigans on Yik Yak as well as more than my fair share of Tinder dates from spontaneous sonnets written to order.”

Well, I know what I’m doing this weekend.

Talk to me

Send me tips, questions, comments, weekend plans: casey@theverge.com.

A shadowy influence campaign on Facebook is targeting liberal activists

We are 97 days away from the midterm elections, there’s an active campaign to undermine our democracy on Facebook, and no one can say for certain who’s behind it. That was the big news to emerge on Tuesday in a call that Facebook held with reporters, shortly after The New York Times broke the news that the company had detected an “ongoing political influence campaign” that led it to remove 32 pages and fake accounts from the service.

Facebook laid out its major findings in a blog post. On one hand, the number of fake accounts caught by Facebook here is relatively small. On the other, they were followed by 290,000 people. The accounts were created between March 2017 and May of this year and included such pages as “Aztlan Warriors,” “Black Elevation,” “Mindful Being,” and “Resisters.”

The pages were quite active, posting 9,500 times before they were shut down. And they ran ads: 150 of them, at a cost of $11,000. (Notably, these accounts stopped posting after Facebook implemented new disclosure requirements for advertisers.)

A post from a removed page (Facebook)

Most provocatively, the pages seemed to be focused in part on fomenting real-world dissent. Facebook found that the pages had created 30 events since last year, the largest of which had 1,400 people scheduled to attend. According to the Digital Forensic Research Lab, a think tank focused on preventing election interference that has a partnership with Facebook, the fake accounts exclusively targeted the American left.

Facebook said it had been compelled to disclose its findings ahead of protests connected to a “Unite the Right” rally planned for August. Some of the removed pages were planning or supporting protests for the rally, which is a sequel to an event last year that turned deadly when a man linked to neo-Nazis drove a car into a group of anti-racist protesters.

The Facebook pages were divisive — and effective, according to DFRL. “Of note, the events coordinated by — or with help from — inauthentic accounts did have a very real, organic, and engaged online community; however, the intent of the inauthentic activity appeared to be designed to catalyze the most incendiary impulses of political sentiment,” it said in a blog post.

Citing various linguistic quirks, the lab concludes that at least some of the fake accounts were likely Russian in origin. It also finds that these operations are becoming more difficult for Facebook to detect:

Their behavior differed in significant ways from the original Russian operation. Most left fewer clues to their identities behind, and appear to have taken pains not to post too much authored content. Their impact was, in general, lower, compared with the 300,000 followers amassed by Russian troll account “Black Matters.”

Information operations, like other asymmetric threats, is adaptive. These inauthentic accounts, whoever ran them, appear to have learned the lessons of 2016 and 2017, and to have taken more steps to cover their traces. This was not enough to stop Facebook finding them, but it does reveal the challenge facing open source researchers and everyday users.

As the day went on, another challenge emerged: real Americans complaining their events had been shut down unfairly. It appears that Facebook deleted events even if they had just a tangential connection to one of the inauthentic accounts. “The Unite the Right counter protest is not being organized by Russians,” organizer Dylan Petrohilos tweeted. “We have permits in DC, We have numerous local orgs like BLM, Resist This, and Antifascist groups working on this protest. FB deleted the event because 1 page was sketch.”

In a separate thread, an organizer named Brendan Orsinger elaborated on how the event was removed. Orsinger, who is helping to organize a Unite the Right counter-protest, had added the “Resisters” page as a host of the event to help promote it. “The Resisters page was a 20K follower social media megaphone,” he tweeted, “and it helped us reach more folk.” Unfortunately for Orsinger, the page was run by someone who was faking their identity. So when Facebook killed the Resisters page, it also killed the events that page was “hosting.” (Facebook says Resisters created the event and only then invited legitimate pages to co-host.)

You can see how tricky these issues are. You can also see how unlikely it is that we’ll untangle all of them before November 6th. Facebook deserves credit for disclosing the threats publicly, in something close to real time, especially given that the disclosure only serves to make us more worried than we were before.

And in the meantime, the mysterious maybe-Russian agents behind the current assault have succeeded in amplifying division and doubt, online and off.


Facebook shuts off access to user data for hundreds of thousands of apps

Here’s some more Cambridge Analytica fallout:

Facebook this evening announced that it’s shutting off access to its application programming interface, the developer platform that lets app makers access user data, for hundreds of thousands of inactive apps. The company had set an August 1st deadline back in May, during its F8 developer conference, for developers and businesses to re-submit apps to an internal review, a process that involves signing new contracts around user data collection and verifying one’s authenticity.

Why We’re Sharing 3 Million Russian Troll Tweets

FiveThirtyEight is sharing 3 million tweets from 2,848 Twitter handles associated with the Internet Research Agency troll farm. They’re doing it in the hopes that lots more researchers will dig into the archives and publish their findings.

Bigfoot erotica and Denver Riggleman, explained

Okay, I swear I tried very hard to ignore the fact that yesterday “Bigfoot erotica” took over Twitter and was somehow connected to a race for a House seat in Virginia. But on Tuesday, I caved and read about it, and here’s what you need to know: a man with the improbable name of Denver Riggleman is the Republican nominee in Virginia’s Fifth Congressional District. Per this Matt Yglesias piece, Riggleman is the co-author of a self-published 2006 book called Bigfoot Exterminators, Inc.: The Partially Cautionary, Mostly True Tale of Monster Hunt 2006, a work about people who look for Bigfoot.

Riggleman posted some explicit photos of Bigfoot to his Instagram (???), which were subsequently shared by his Democratic opponent, drawing attention to the thriving genre of Bigfoot erotica (?!?!?!), and also the whole thing was maybe a scheme to draw attention to the fact that Riggleman has associated with white supremacists.

Anyway, sorry, that’s why Bigfoot erotica was all over Twitter.


Can Comedy Twitter survive James Gunn, Dan Harmon and a war on jokes?

Twitter has been a boon to comedians. But some jokes age much, much worse than others, and now digging up old problematic tweets has become a go-to tactic of the right wing. Julia Alexander explores that tension here. My proposed solution: allow Twitter users to set their tweets to expire. Better yet: allow us to highlight a few prized tweets that we want to keep around forever, and let us pin them to our profiles.

Facebook might start holding singing competitions among its users

It’s a proven fact that Russian actors cannot sow division in the United States while they are participating in singing competitions. To that end:

Facebook appears to be working on a talent show feature that would have users record themselves singing and then submitting their videos for critique. In the app’s code, researcher Jane Manchun Wong spotted an interface that would let users choose a popular song and then record themselves singing it.

What Happened When I Tried Talking to Twitter Abusers

Here is a person who was receiving a lot of ugly tweets and tried to engage with her attackers and found that they were not receptive to what she was saying.

4,925 Tweets: Elon Musk’s Twitter Habit, Dissected

Some fun nuggets in this graphical breakdown of Elon Musk’s tweets. He replies more than any other big tech company CEO, and he is a populist: about 41 percent of his replies are to people who have fewer than 500 followers.

Goodreads and the Crushing Weight of Literary FOMO

Angela Watercutter has a fun piece about how she forgot to turn off Goodreads notifications for many years, and now every day she gets shamed about how few books she reads, and she talks to the Goodreads CEO about it. I also had the not-reading-books problem until last year, when I finally integrated audiobooks into my reading and managed to get up to about two books a month. It’s my favorite lifehack of the past decade.

Journalist Mocked After $22 Avocado Toast Purchase Backfires

Many people have written to The Interface asking when we are going to comment on Friend of the Newsletter Taylor Lorenz’s viral tweet in which she was disappointed by the avocado toast that she ordered through Seamless, which cost $22. Credit to Lorenz here: she managed to turn a brutal self-own into a thoughtful examination of how viral tweets bring out the very worst in Twitter, as the author is quickly subjected to a comical amount of abuse from every imaginable angle.

Say what you will about $22 toast: Taylor’s initial tweet essentially served as bait that allowed her to illuminate the bizarre viral dynamics of our current content moment in a more personal way than her stories typically allow for. Also, anyone who wants to yell about how much someone spent on toast has to turn over all of their receipts to Twitter for a complete review, I’m sorry, but that’s how it works.


Facebook is personalizing the navigation bar on its mobile app

The row of things you do not use on the bottom navigation bar inside the Facebook app will now be personalized to you, raising hopes that, someday, I may stop accidentally tapping on the Marketplace tab only to see someone in Fremont selling a refurbished fax machine for $30.

How to get the new Facebook plane reaction

Facebook accidentally launched a new reaction emoji, and that emoji is a plane.


Is Facebook evil? Everything bad about Facebook is bad for the same reason

I can’t decide if this take is nuclear or not. On one hand, it compares Facebook, unfavorably, to the Third Reich. On the other hand, its central point, that Facebook’s willful passivity in the face of most things enables its worst actors, seems hard to argue with. On a third hand, the essay is poorly constructed and argued. But we were a little low on takes over here. Anyway:

That brings us back to Facebook. It has its own grand project—to turn the human world into one big information system. This is, it goes without saying, nowhere near as terrible as the project of the thousand-year Reich. But the fundamental problem is the same: an inability to look at things from the other fellow’s point of view, a disconnect between the human reality and the grand project.

And finally…

This Instagram Account Shows How Instagram Photos Look the Same

On this day where we ponder the meaning of “coordinated inauthentic comment,” I enjoyed this look at @insta_repeat, an anonymous Instagram account that finds people taking the same picture and assembles them into a grid of shame. We think of photography as a largely original medium, and yet so often certain kinds of photos on Instagram come to feel like collectible pokémon: here’s my shot in the pit of yellow balls at the Museum of Ice Cream, here’s my arm holding up a large ice cream cone against a colorfully painted wall, etc.

Anyway, go forth and be authentic, y’all!

Talk to me

Send me: tips, comments, coordinated inauthentic content. casey@theverge.com

Senate warns tech companies on foreign interference: ‘Time is running out’

“Time is running out.”

That was the message from Sen. Jack Reed (D-RI) on Wednesday at a meeting of the Senate Select Committee on Intelligence. The subject of the hearing, which came 96 days before the midterm elections: Foreign Influence Operations and their use of Social Media Platforms.

And the day’s witnesses, who included directors and researchers from organizations including the German Marshall Fund, the Oxford Internet Institute, and New Knowledge, were in agreement: America needs a legislative solution to counter the influence campaigns now underway on social platforms.

My colleague Makena Kelly, who watched Wednesday’s hearing, captured the scope of the problem (emphasis mine):

Kelly also produced a surprising statistic: far-right and far-left bot accounts produce 25 to 30 times more posts and messages per day than standard, authentic user accounts. Committee members and panelists said that the flood of content aided in increasing the divide among the American populace with memes and posts surrounding highly emotional issues like the Black Lives Matter movement. “These types of asymmetric attacks — which include foreign operatives appearing to be Americans engaging in online public discourse – almost by design slip between the seams of our free speech guarantees and our legal authorities and responsibilities,” Warner said.

This flood of fake content is what researcher Renee DiResta, who testified Wednesday, calls “computational propaganda.” She told the committee that “addressing this asymmetric threat requires a 21st century Information Operations Doctrine, the implementation of a global real-time detection and deterrence strategy, and the cooperation of private industry, press, law enforcement, and the intelligence community.” She painted a dark picture:

The evolution of social media propaganda and influence techniques will bring serious threats. We should anticipate an increase in the misuse of less popular and less resourced social platforms, and an increase in the use of peer-to-peer messaging services. We believe that future campaigns will be compounded by the employment of witting or unwitting U.S. Persons through whom these state actors will filter their propaganda, in order to circumvent detection by social platforms and law enforcement. We should anticipate the incorporation of new technologies, such as videos and audio produced by artificial intelligence, to supplement these operations, making it increasingly difficult for citizens to trust their own eyes.

The news came on the same day that the man who has led Facebook’s security efforts, and who played a key role in disclosing the current influence campaign to the press yesterday, announced he would leave the company. Alex Stamos’ departure as chief security officer had been expected since earlier this year, and he’ll stay on through August 17th. But while I suspect Facebook believes Stamos has gotten perhaps more than his share of credit for the company’s cybersecurity efforts, the loss still stings: Stamos had a lot of credibility with the press, who see him as a straight shooter, and he spoke about his work with an urgency and moral seriousness that are unusual for a corporate executive.

Facebook said that the dedicated security team Stamos led would be dissolved, on the theory that it is better to embed security engineers into every part of the organization than to maintain a standalone force. A spokesman told me:

We expect to be judged on what we do to protect people’s security, not whether we have someone with a certain title. We are not naming a new CSO, since earlier this year we embedded our security engineers, analysts, investigators, and other specialists in our product and engineering teams to better address the emerging security threats we face. Alex helped us manage this transition. We will continue to evaluate what kind of structure works best as we continue to invest heavily in security to protect people on our services.

It may well be that Facebook doesn’t need a high-profile leader of its security team to protect the platform. But on a day when top experts in the field warned us about the vast and increasing scope of the problem, it felt unsettling.

Congress will get a chance to ask about it directly. Members of the intelligence committee will meet with senior executives from Facebook, Twitter, and Google on September 5th.


A Senator Actually Referenced The “This Is Fine” Meme In His Closing Statements About Russian Interference

Someday, it will not be news when a sitting US senator references a popular meme in discussing the day’s events. Today is not that day:

“Some feel that we as a society are sitting in a burning room, calmly drinking a cup of coffee, telling ourselves, ‘This is fine,’” Burr said about Russia’s interfering efforts. “That’s not fine.”

Alex Jones, Pursued Over Infowars Falsehoods, Faces a Legal Crossroads

Elizabeth Williamson has an infuriating story about how Alex Jones targeted the parents of a six-year-old victim of the Sandy Hook shooting, broadcasting directions to their home, inciting a viewer to stalk them, and forcing them to move seven times, so that they can no longer visit their son’s grave. Jones is now suing them for $100,000 in court costs.

Spotify is removing some Alex Jones podcasts because they are ‘hate content’

And speaking of Jones, add Spotify to the list of media services that are tentatively disciplining him in ways that will do him zero lasting harm. Here’s Kurt Wagner:

While Spotify is taking action, Jones and his podcast aren’t gone for good. A Spotify spokesperson declined to share what episodes were removed or what specific content triggered the company’s action, but the podcast is still available through the service.

Google Plans to Launch Censored Search Engine in China, Leaked Documents Reveal

Ryan Gallagher reports that Google is very close to relaunching in China. Some employees are mad, Greg Sandoval reported separately. Expect the controversy over this to grow:

Teams of programmers and engineers at Google have created a custom Android app, different versions of which have been named “Maotai” and “Longfei.” The app has already been demonstrated to the Chinese government; the finalized version could be launched in the next six to nine months, pending approval from Chinese officials.

Campaigns Enter Texting Era With a Plea: Will U Vote 4 Me ?

The hot new social network in politics is the SMS message, Kevin Roose reports:

“There’s no question that texting is the breakout tech of 2018,” said Eric Wilson, a Republican digital strategist and founder of Learn Test Optimize, a newsletter about political marketing. “There’s so much competition in the inbox, we’re looking for other channels. For now, that’s text messaging.”

Mr. Wilson recently gathered a group of political strategists in a Washington office to talk, over dumplings and craft beers, about the sunny prospects for text messaging. The group agreed that social media platforms like Facebook and Twitter were becoming crowded, and that text messages, which are read at higher rates than emails and are less invasive than phone calls, were a promising alternative.

’QAnon’: A deranged conspiracy cult leaps from the Internet to Trump’s ‘MAGA’ tour

The Post has a good explainer on how a demented fascist fantasy about the ultimate triumph of Donald Trump over our democratic institutions leapt from the darkest corners of 4chan onto T-shirts at a Trump rally this week.


Snopes is feuding with one of the internet’s most notorious hoaxers

Snopes repeatedly debunked a fake news empire, causing it to lose the bulk of its Facebook distribution and basically driving it out of business. The empire is now very mad at Snopes.

What’s the Best App for Making Memes?

The best app for making memes is the human heart! At least, that’s what I say. Taylor Lorenz says existing meme-making tools leave much to be desired:

In the meantime, some memers have found the current suite of mobile applications so lacking that they choose to create their memes on desktop computers instead. “On your phone, you’re never going to be able to do as much as you could as on a computer,” says Noam, who memes under the account @listenintospitandgettingparamoredon.


Facebook and Instagram add dashboards to help you manage your time on social apps

I wrote about Facebook and Instagram’s new Time Well Spent-inspired usage dashboards, which also let you set in-app reminders to stop idly thumbing through your feed after an interval you specify. Note also that this is the first time Facebook and Instagram ever introduced a product together — a sign of Facebook’s increasing influence over Instagram.

Snapchat launches new Lenses that respond to your voice

It’s like Alexa for your face, Ashley Carman reports:

Snapchat’s newest AR Lenses allow users to issue voice commands that’ll make them come to life. Some lenses will ask users to say words like, “hi,” “love,” or “wow,” which will cause the lenses to animate.


Infowarzel 8/1

Charlie Warzel nails what’s so disturbing about the ongoing influence campaign designed to undermine the legitimacy of the left:

The disturbing reality, here, is how the trolls/meddlers win no matter the outcome. If they operate in secrecy, they meddle and succeed. If they are discovered, they cast doubt on the greater political and cultural conversation across these platforms. If they get shut down, they take down real advocacy orgs with them. Dark times.

Stumbles? What Stumbles? Big Tech Is as Strong as Ever

Farhad Manjoo says the so-called techlash has been toothless. Especially when it comes to Facebook:

In a strange way, the social network’s troubles only underscored its dominance. Even after its stock crash, Facebook remains the fifth most valuable corporation in the American markets, ahead of Berkshire Hathaway, and there are almost no serious calls for its chief executive to resign, as you might expect for any other company experiencing such a loss. That’s because the company reported little to cause experts to alter their long-term outlook. Pretty much everyone who studies Facebook believes that it will hold its grip on the culture and the advertising industry for the foreseeable future.

“This is one of the most profitable business models I’ve ever seen, and that really hasn’t changed,” said Mark Mahaney, an analyst at the firm RBC Capital. He added that Facebook’s stock now “may be the single most attractively priced asset across technology.”

The Sadness of Deleting Your Old Tweets

One of my favorite former colleagues, Emily Dreyfuss, regrets deleting her old tweets:

I realized I was upset because my old tweets, like those old and probably terrible poems, symbolized a phase in my life, which deleting brought to an abrupt and final end. Those tweets were my late 20s, my era of striving, before I became a mother and stopped staying up late for Weird Twitter, and before I learned that the internet is not a safe haven for silly jokes but can be a deadly serious extension of the real world. Deleting those tweets was the nail in the coffin of my innocent relationship with the internet.

And finally…

Twitter Introduces Red X Mark To Verify Users It’s Okay To Harass

This made me laugh:

“This new verification system offers users a simple, efficient way to determine which accounts belong to total pieces of shit whom you should have no qualms about tormenting to your heart’s desire,” said spokesperson Elizabeth James, adding that the small red symbol signifies that Twitter has officially confirmed the identity of a loathsome person who deserves the worst abuse imaginable and who will deliberately have their Mute, Block, and Report options disabled.

Thank you for not muting, blocking, or reporting The Interface.

Talk to me

Send me questions, comments, tips, and computational propaganda. casey@theverge.com

Facebook’s forecast for the future looks suddenly bleak

On Wednesday, for the first time in three years, Facebook failed to meet Wall Street’s expectations for revenue and user growth. The company’s user base in the United States and Canada remained flat at 185 million over the last quarter, and it added just 22 million users worldwide — the lowest number of additions since at least 2011. Facebook stock opened almost 20 percent down Thursday morning, before it started slowly climbing again.

On one hand, you couldn’t call the news a complete surprise. As Mike Isaac noted, Facebook has warned for months that the changes it was making to the News Feed would reduce its growth. And Mark Zuckerberg told analysts that the company’s efforts to fix its platform would have a meaningful, and negative, effect on revenue.

There’s also the limit of the human population to consider. There are about 3 billion people who have access to Facebook, and Facebook reaches 2.23 billion of them. Facebook has a variety of plans to expand internet access, but it has been slow going, and it recently killed a high-profile plan to build internet delivery drones.

Still, the news seemed to catch investors off guard. Facebook’s chief financial officer, David Wehner, warned that the bad news would continue indefinitely. “Our total revenue growth rates will continue to decelerate in the second half of 2018, and we expect our revenue growth rates to decline by high single-digit percentages from prior quarters sequentially in both Q3 and Q4,” he said on a conference call.

Facebook’s stock price declined sharply after hours Wednesday

Shira Ovide, writing in Bloomberg, put it best:

If what the company predicts comes to pass, the internet’s best combination of fast revenue growth and plump profit margins is dead. All at once, it seemed, reality finally caught up to Facebook.

Was Wednesday, in fact, the day when almost two years of nonstop negative headlines began to show up in Facebook’s core business? The results marked the first full quarter since the Cambridge Analytica data privacy scandal. The quarter also saw the rollout of Europe’s General Data Protection Regulation, which drove away 1 million users, Zuckerberg told analysts.

Whatever the cause, it appears that Facebook may have peaked in North America. And while efforts to improve the platform will surely continue, the company has a full slate of negative headlines to anticipate. The Justice Department and FBI are both investigating Facebook over Cambridge Analytica. The Securities and Exchange Commission and Federal Trade Commission are conducting investigations of their own.

Meanwhile, the debate over how Facebook should handle misinformation, which led Zuckerberg to defend the speech rights of Holocaust deniers last week, is still raging. None of these things is likely to dislodge Facebook from its place at the center of online communication. But it’s fair to say that the cumulative effect so far has been greater than expected.

“Is Facebook invincible?” asked Kurt Wagner, my colleague at Recode, in his earnings preview Tuesday. Wednesday’s results made it evident that the answer is no.


YouTube issues a new strike against Alex Jones’ channel over hate speech and child endangerment

I broke the news that Alex Jones’ YouTube channel has a new strike against it after the company removed four videos from the site that violated its community standards:

Two videos contained hate speech against Muslims, and a third contained hate speech against transgender people, sources said. A fourth showed Jones mocking a child who was pushed to the ground by an adult man, under the headline “How to prevent liberalism.” All four of the videos are currently posted on Infowars.

Senate Intel Panel Plans Hearing With Facebook, Google, And Twitter Execs

The Senate Intelligence Committee is planning on holding two hearings on social media as part of its Russia investigation. Sheryl Sandberg and Jack Dorsey are expected to attend, and maybe Sundar Pichai as well, report Emma Loop and Ryan Mac.

Facebook removes pages of Brazil activist network before elections

Facebook took down a network of pages and accounts used by a right-wing Brazilian activist group, the company said, after identifying many of its accounts as fake.

China Said to Quickly Pull Approval for New Facebook Venture

Facebook’s $30 million “innovation hub” in China was a viable enterprise for all of a day, Paul Mozur reports.

An open letter to Mark Zuckerberg from the parents of a Sandy Hook victim

Leonard Pozner and Veronique De La Rosa, parents of Noah Pozner, ask Facebook to do more to remove conspiracy content from the platform. Their specific requests:

– Treat victims of mass shootings and other tragedies as a protected group, such that attacks on them are specifically against Facebook policy.

– Provide affected people with access to Facebook staff who will remove hateful and harassing posts against victims immediately.

Q&A on Election Integrity

Here’s the transcript of yesterday’s conference call with reporters about election integrity. Elsewhere, Josh Constine follows up with some unanswered questions.

The internet isn’t why Trump won, Stanford and Brown study finds.

Trump actually did worse than previous Republican candidates among internet users and people who got campaign news online, Will Oremus reports.

This doesn’t mean the internet was irrelevant to the 2016 campaign, Shapiro told me. “The question is not whether the internet is having any impact on politics—it surely is—but whether it deserves the top billing it often gets in discussions about the election,” he said.

’I felt disgusted’: inside Indonesia’s fake Twitter account factories

Kate Lamb reports on the “buzzer teams” hired by the Indonesian government to churn up racial divisions:

For several months in 2017 Alex, whose name has been changed, alleges he was one of more than 20 people inside a secretive cyber army that pumped out messages from fake social media accounts to support then Jakarta governor Basuki Tjahaja Purnama, known as “Ahok”, as he fought for re-election.

“They told us you should have five Facebook accounts, five Twitter accounts and one Instagram,” he told the Guardian. “And they told us to keep it secret. They said it was ‘war time’ and we had to guard the battleground and not tell anyone about where we worked.”

Twitter is “shadow banning” prominent Republicans like the RNC chair and Trump Jr.’s spokesman

This is a false headline, and a misleading story, about what was essentially a bug that prevented the names of some Republican leaders from showing up in the drop-down search field. Head of product Kayvon Beykpour sought to clarify the situation later in the day in a Twitter thread. But basically no, Twitter isn’t “shadow banning” anyone, although now there will probably be a sham Congressional hearing about it.


Facebook’s top lawyer is leaving as the company still grapples with election aftermath and a federal investigation

Facebook General Counsel Colin Stretch is leaving at the end of the year. He joined Facebook in 2010.

Departing Facebook Security Officer’s Memo: “We Need To Be Willing To Pick Sides”

Forgot to include this in yesterday’s edition: BuzzFeed got a hold of Facebook Chief Security Officer Alex Stamos’s note from the spring in which he said he would leave. It basically tracks with what he was tweeting around the time, which is that he would leave in August. It’s almost August, though — is there a plan?

Twitter will lock your account if you change your display name to Elon Musk

Twitter belatedly responds to the very real issue of people changing their display names to trick people into sending them cryptocurrency, my colleague Nick Statt reports:

Twitter has implemented a new method for combating cryptocurrency scammers: it now automatically locks unverified accounts that change their display name to Elon Musk. If you have a non-verified account that is not associated with a phone number, changing your display name to that of the SpaceX and Tesla CEO will result in an immediate lock out. Twitter will then ask you to pass a CAPTCHA test, as well as provide a phone number, to regain access.

When Trespassing for YouTube Fame Keeps You on the Run from the Cops

Eddie Kim takes us inside the exciting world of committing minor crimes for YouTube views.

Snapchat starts to syndicate video shows on Discover

Not enough publishers want to make shows for Snapchat Discover, so Snap is filling in the corners with syndicated crap.


YouTube expands its VR app to Samsung’s Gear VR device and lets you watch videos with strangers

I’m sure this will be useful to someone, but I’m not quite sure who, part one:

YouTube is expanding its virtual reality app to support Samsung’s Gear VR devices, and it’s also adding a new feature that lets users watch a video together and chat. If you own a Gear VR device, you’ll be able to download the app from the Oculus Store beginning this week, Google announced today in a blog post.

Facebook’s ‘Watch Party’ rolls out to all, letting Groups watch videos together

I’m sure this will be useful to someone, but I’m not quite sure who, part two.


Tech Companies Like Facebook and Twitter Are Drawing Lines. It’ll Be Messy.

Farhad Manjoo says that calls for more aggressive content moderation will only serve to make Facebook more powerful — without making it any more accountable:

Roseanne, Dan Harmon, and the End of Hollywood’s Twitter Era

Daniel D’Addario says the age of celebrity tweeting could be coming to an end:

The solution, for everyone involved and everyone not yet involved, seems to be scaling back on social media. The rewards of openness–access that runs both ways between talent and fans, a window into the creative process–can only exist in a marketplace in which context is understood and appreciated. That Twitter strips away context, that it rips jokes and statements out of time and presents them as individual links that make a botched witticism seem as potent and nasty as a destructive piece of politicized hate, is part of its value proposition. And it’s trained all of us, but especially power users on the right, to see a flattened world, where sensibility and meaning matter less than how offended one can purport to be. If more and more creators walk away from Twitter and similar platforms, it’ll be a potent demonstration of how the promise of the social web — openness and accessibility — cannot jibe with the artist’s need to work things out.

And finally …

The Big Business of Being Gwyneth Paltrow

In its own delightful way, this majestic Taffy Brodesser-Akner profile of Gwyneth Paltrow’s empire reveals GOOP as the Infowars of the wellness set. Highly recommended:

G.P. didn’t understand the problem. “We’re never making statements,” she said. Meaning, they’re never asserting anything like a fact. They’re just asking unconventional sources some interesting questions. (Loehnen told me, “We’re just asking questions.”) But what is “making a statement”? Some would argue — her former partners at Condé Nast, for sure — that it is giving an unfiltered platform to quackery or witchery. O.K., O.K., but what is quackery? What is witchery? Is it claims that have been observed but not the subject of double-blind, peer-reviewed studies? Yes? Right. O.K., G.P. would say, then what is science, and is it all-encompassing and altruistic and without error and always acting in the interests of humanity?

Talk to me

Questions? Comments? Stock tips? casey@theverge.com

UPDATE 7/26/2018 10:26AM: This piece has been updated to include Thursday morning Facebook stock activity.

How conspiracy sites keep outsmarting big tech companies

Two weeks ago, CNN’s Oliver Darcy put a question to Facebook executives during an event in New York: how can Facebook say it’s serious about fighting misinformation while also allowing the notorious conspiracy site Infowars to host a page there, where it has collected nearly 1 million followers? The tension Darcy identified wasn’t new. But it crystallized a contradiction at the heart of Facebook’s efforts to clean up its platform. And we’ve been talking about it ever since.

Late Thursday night, Facebook took its first enforcement action against Jones since the current debate started. The company removed four videos that were posted to the pages of Alex Jones, Infowars’ founder and chief personality, and Infowars itself. Jones, who had violated Facebook’s community guidelines before, received a 30-day ban. Infowars’ page got off with a warning, although Facebook took the unusual step of saying the page is “close” to being unpublished.

The move came a day after YouTube issued a strike against Jones’ channel, after removing four videos itself. (Facebook won’t say which videos it removed, but the rationale it used to remove them — attacks on someone based on their religious affiliation or gender identity, and showing physical violence — suggests they are the same ones YouTube removed.)

These posts were removed for hate speech and violence, not misinformation. It’s likely Facebook would have removed them even without the extra attention on Infowars. But Jones’ behavior in the wake of recent enforcement actions shows how easily bad actors can skirt rules that were designed in the belief that most users will generally stick to them.

YouTube, for example, has a “three strikes policy.” Post three bad videos and your channel gets banned. But there’s a huge loophole, and Jones exploited it. As I reported earlier this week, users must log in to YouTube and view the strike against them before it gets counted. And if they posted multiple offending videos before they logged in, those offending videos are “bundled” into a single strike.

The idea is that the disciplinary process should educate a first-time offender. If someone posted three videos that violated copyright, for example, they might not understand what they did until YouTube notifies them. Better to give them a second chance, the thinking goes, than to ban their account instantly for three simultaneous violations.

Similarly, YouTube allows strikes to expire three months after they are issued. The idea is to give users a chance to rehabilitate themselves after they make a mistake. But viewed through the lens of Infowars, the policy begins to look like a free pass to post hate speech every 90 days or so.
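To make the loophole concrete, here is a minimal sketch of the policy as described above — violations aren’t counted until the user acknowledges them, acknowledgment bundles everything pending into one strike, and strikes expire after 90 days. The class and method names are hypothetical; this models the described rules, not YouTube’s actual code.

```python
from datetime import datetime, timedelta

STRIKE_LIFETIME = timedelta(days=90)  # strikes expire three months after issuance

class Channel:
    """Toy model of the three-strikes policy described above."""

    def __init__(self):
        self.pending_violations = []  # flagged videos the user hasn't seen yet
        self.strikes = []             # timestamps of counted strikes

    def flag_video(self, when):
        # A violation is recorded, but not yet counted as a strike.
        self.pending_violations.append(when)

    def acknowledge(self, when):
        # When the user logs in and views the strike, ALL pending
        # violations are bundled into a single strike -- the loophole.
        if self.pending_violations:
            self.strikes.append(when)
            self.pending_violations.clear()

    def active_strikes(self, now):
        # Only strikes issued within the last 90 days count.
        return [s for s in self.strikes if now - s < STRIKE_LIFETIME]

    def is_banned(self, now):
        return len(self.active_strikes(now)) >= 3
```

Under this model, posting three offending videos before logging in yields one strike, not three — and waiting out the 90-day clock resets the count entirely.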

Jones has proven himself adept at evading platforms’ well-intentioned policies. The YouTube strike came with a ban on using the platform’s live-streaming features for 90 days — so Jones simply began appearing on the live streams of his associates, such as Ron Gibson. Here’s Sean Hollister at CNET:

YouTube is removing these streams and revoking livestreaming access to channels that host them, but it hasn’t stopped Infowars yet. Though YouTube shut down a livestream at Ron Gibson’s primary YouTube channel, he merely set up a second YouTube channel and is pointing people there.

Meanwhile, Facebook’s profile-specific discipline similarly ignores Jones’ ability to roam across pages. Jones is banned from accessing his personal profile, but he still gets to appear on his daily live show, which is broadcast on Infowars and “The Alex Jones Channel.” The solution to being banned from one profile is simply to broadcast yourself from another one.

There were good reasons for tech platforms to set up disciplinary policies that strived to forgive their users. But given how easily they can be gamed, they would appear to be ripe for reconsideration.


Britain’s Fake News Inquiry Says Facebook And Google’s Algorithms Should Be Audited By UK Regulators

An interim report from the House of Commons Digital, Culture, Media and Sport Committee, which leaked ahead of a planned Sunday release, calls for much stricter scrutiny of tech platforms like Facebook. Proposals include giving the government oversight of ranking algorithms, requiring online publications to be “accurate and impartial,” and making platforms liable for “harmful and illegal content.” All of that would be a big deal; this one bears watching.

Facebook deletes hundreds of posts under German hate-speech law

In the first half of the year, Facebook received 1,704 complaints under a new German law that bans online hate speech, Reuters reports. The company removed 262 posts during that period, it said in a German-language blog post.

Trump appointee condemns Mark Zuckerberg’s comments on Holocaust deniers

Paul Packer, the chairman of the U.S. Commission to Preserve America’s Heritage Abroad, wrote a letter to Zuckerberg calling his comments about Holocaust deniers “dangerous” and the company’s policies “inexcusable.”

Setting the record straight on shadow banning

Shortly after I sent off yesterday’s newsletter, Twitter posted a message about the “shadow ban” controversy:

We do not shadow ban. You are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile). And we certainly don’t shadow ban based on political viewpoints or ideology.

Mueller Examining Trump’s Tweets in Wide-Ranging Obstruction Inquiry

Trump’s tweets could come back to haunt him in the Mueller investigation.


Twitter warns fake account purge to keep erasing users, shares drop 19 percent

Twitter lost 1 million users in the past quarter, the company said today as part of its earnings report, though at least some of that seems to be tied to efforts to remove bad actors from the platform. This is a good thing, but the stock tumbled anyway.

Instagram not an instant fix for ailing Facebook

Interesting nugget about Instagram monetization from Paresh Dave:

Instagram and Facebook users see about the same number of ads, but Instagram ad prices are half of what Facebook charges because of the limited number of advertisers vying for spots on Instagram, four ad buyers said.

Hard Questions: Who Reviews Objectionable Content on Facebook — And Is the Company Doing Enough to Support Them?

Facebook says it takes good care of its content moderators:

All content reviewers — whether full-time employees, contractors, or those employed by partner companies — have access to mental health resources, including trained professionals onsite for both individual and group counseling. And all reviewers have full health care benefits.

We also pay attention to the environment where our reviewers work. There’s a misconception that content reviewers work in dark basements, lit only by the glow of their computer screens. At least for Facebook, that couldn’t be further from the truth. Content review offices look a lot like other Facebook offices. And because these teams deal with such serious issues, the environment they work in and support around them is important to their well-being.


Twitter’s Algorithm Problem Is Not a Bug

Stop calling Twitter’s search ranking features a “bug,” says Brian Feldman.

It’s not a bug. We need to be clear about this — the issue here is not a bug, glitch, error, or whatever other synonym you can conjure up. Calling this a “bug” implies an outcome contrary to what should be expected by the code, and implies Twitter made a mistake. This is not what we normally think of when it comes to the sorting algorithms that power Twitter, or Facebook’s News Feed, or Google’s search engine, or YouTube’s recommendation system. These are programs designed to anticipate what a user wants based on a myriad number of signals and behaviors, and if the results they serve up are imperfect to a few users, that doesn’t mean the software is buggy. The results might not be politically helpful to a company, or they might be unpredictable. But they’re not a mistake.

Why unskippable Stories ads could revive Facebook

Josh Constine argues that if Facebook wants to make more money from Instagram, it’s going to have to stop letting you skip ads. Gee thanks a lot Josh!

Talk to me

Questions? Comments? Weekend plans? casey@theverge.com