Instagram’s verification system is useful, fair, and Twitter should copy it

Who deserves a “verified” badge? On Twitter, the issue has been surprisingly contentious. Last November, the company briefly verified the account of Jason Kessler, a white supremacist who organized the 2017 Unite the Right rally in Charlottesville, VA. Reaction to Kessler getting his badge was swift and negative, and Twitter used the occasion to say — as it now likes to do, all the time — that it would rethink everything. In the case of verification, the company would simply stop verifying anyone until it could say for certain what it meant to be verified.

“Verification was meant to authenticate identity & voice but it is interpreted as an endorsement or an indicator of importance,” the company said in a tweet. “We recognize that we have created this confusion and need to resolve it. We have paused all general verifications while we work and will report back soon.”

As I’ve written before, Twitter’s verification process was a mess from the start. It began as a panicked reaction to former St. Louis Cardinals manager Tony La Russa suing the company for allowing someone to impersonate him. (La Russa eventually dropped the suit.) But verification evolved into something both more important and more nebulous, as I wrote last year:

Over time, though, Twitter began granting special privileges to verified users. They got analytics, which were otherwise available only to advertisers, showing them how their tweets performed. They got a tab showing only their interactions with other verified users — a ham-fisted way of dealing with the abuse that celebrities received from regular accounts. When Twitter introduced new keyword filters designed to combat abuse, verified users got them first.

Along the way, Twitter said very little about the criteria for verification. For years, there was no obvious way to apply. Either Twitter reached out to you, or you got to know someone at the company. And so the verification badge came to carry a sheen of authority: this person, the badge suggested, is a known quantity. This is an account that Twitter trusts.

The real trouble began in January 2016, when Twitter removed the badge from the profile of noxious right-wing personality Milo Yiannopoulos. No one disputed that the account belonged to Yiannopoulos. By removing his badge for bad behavior, the company suggested that the blue checkmark was also a mark of approval.

The next year, after the Kessler debacle, Twitter said it would begin taking users’ offline behavior into account in determining who should be verified. This March, CEO Jack Dorsey said Twitter could eventually let everyone verify their accounts. And then last month, in a comical final note to the entire affair, Twitter announced that it was abandoning the verification project indefinitely because it was busy working on other things.

I apologize for the lengthy preamble here. But in a world where trust in social feeds is collapsing, a workable verification system could serve as a powerful ally. And so I was heartened to see today that Instagram had quietly tackled the verification process on its own — and come up with something workable and useful.

The short version is that anyone can now apply to be verified on Instagram, though the company will continue to verify only those who meet a high set of standards, including “notability.” (Sorry, normals.) Cofounder Mike Krieger laid out the particulars in a blog post:

To be verified, an account must comply with Instagram’s Terms of Service and Community Guidelines. We will review verification requests to confirm the authenticity, uniqueness, completeness and notability of each account. Visit the Help Center to learn more about Instagram’s verification criteria.

To access the verification request form, go to your profile, tap the menu icon, select “Settings” at the bottom and then choose “Request Verification.” You will need to provide your account username, your full name and a copy of your legal or business identification. This information will not be shared publicly.

Here Krieger has laid out, in plain language, what it means to be verified on Instagram. The company has verified that you are who you say you are; that you only have one account; and that the account is “notable” in some way. What does that mean? The Help Center lays it out: “Currently, only Instagram accounts that have a high likelihood of being impersonated have verified badges.”

The policy isn’t democratic in the sense of Dorsey’s spring proclamation that someday everyone could have a badge on Twitter. But it does open the door to more people getting badges on Instagram, and it does its part to improve trust on the service.

Notably, the announcement came with two other trust-improving measures. One, Instagram will now allow users to improve the security of their accounts by using third-party authenticator apps, an improvement over the more easily hacked SMS codes. And two, a new “About this account” feature attached to profiles will show you information that is helpful in identifying fake accounts:

There, you will see the date the account joined Instagram, the country where the account is located, accounts with shared followers, any username changes in the last year and any ads the account is currently running.

These are smart measures that help Instagram build trust in its user base, while making it harder for bad actors to exploit its platform. By approaching the question of verification as narrowly as possible — focused only on eliminating the confusion that comes from impersonation and parody — Instagram arrived at a reasonable way to invite its entire user base to apply.

Of course, Instagram’s historical status as a way to share cheerful brunch photos with friends means it hasn’t been stress-tested in quite the same way Twitter has. Yiannopoulos has an Instagram account that would seem to meet all of Krieger’s stated requirements for verification; will Instagram grant him a badge? If yes, then we’ll see a reprise of the attacks Twitter faced when it verified Kessler. If not, then expect to see a bunch of conservatives who aren’t granted badges taking center stage in an eventual Congressional hearing about Instagram “shadow banning” Republicans.

But until then, I’m struck by how a question that reduced Twitter to paralysis got such a straightforward answer from its rival. Journalists often wag their fingers when tech companies copy one another — but if Twitter wants to lift this idea wholesale, I promise to limit myself to a respectful nod.

Democracy

Trump claims Google is suppressing positive news about him and ‘will be addressed’

President Donald Trump tweeted this morning that Google search results for “Trump News” have been “rigged” to show critical coverage of him, and said the situation “will be addressed.” Google explained that search results are not “rigged” in the political sense of the word. The Times’ Maggie Haberman pointed out that Trump doesn’t actually use a computer. Matthew Gertz pointed out that the entire tweetstorm seemed to be predicated on a segment on Fox News’ Lou Dobbs Tonight, which itself was based on a nonsensical chart promoted by the hyperpartisan right-wing news outlet PJ Media. Trump’s economic adviser said “we’re taking a look at it.” An exasperated Shepard Smith of Fox News stared into the camera and shouted, “What is he talking about?!” Trump then sent out a fundraising pitch based on his comments.

World’s Leading Human Rights Groups Tell Google to Cancel Its China Censorship Plan

A coalition of 14 organizations — including Amnesty International, Human Rights Watch, Reporters Without Borders, the Committee to Protect Journalists, the Electronic Frontier Foundation, and the Center for Democracy and Technology — wrote an open letter to Google on Tuesday calling for the company to abandon plans to launch a censored version of its search engine in China. Ryan Gallagher reports:

Google is a member of the Global Network Initiative, or GNI, a digital rights organization that works with a coalition of companies, human rights groups, and academics. All members of the GNI agree to implement a set of principles on freedom of expression and privacy, which appear to prohibit complicity in the sort of broad censorship that is widespread in China. The principles state that member companies must “respect and work to protect the freedom of expression rights of users” when they are confronted with government demands to “remove content or otherwise limit access to communications, ideas and information in a manner inconsistent with internationally recognized laws and standards.”

Following the revelations about Dragonfly, sources said, members of the GNI’s board of directors – which includes representatives from Human Rights Watch, the Center for Democracy and Technology, and the Committee to Protect Journalists – confronted Google representatives in a conference call about its censorship plans. But the Google officials were not responsive to the board’s concerns or forthcoming with information about Dragonfly, which caused frustration and anger within the GNI.

Clashes Over Ethics At Major Tech Companies Are Causing Problems For Recruiters

Tech companies’ government work is also causing problems for recruiters, Caroline O’Donovan reports:

Meshulam’s refusal to consider working for Amazon until the company addresses ethical concerns that employees and outside watchdogs have raised is part of a larger trend. Using the hashtag #TechWontBuildIt, a handful of tech workers on Twitter have shared how they’re rejecting interviews with companies like Amazon and Salesforce, either because they disagree with the company’s practices or don’t want to help build its products.

Trained programmers, software engineers, and data scientists are in notoriously high demand in the tech industry. Companies spend millions of dollars on recruiting efforts every year and offer a dizzying array of perks and benefits (lengthy parental leave, infertility treatment, free beer, unlimited vacation) to entice workers. That means prospective employees have leverage — and some of them are trying to use it to get these companies to change their ways. The actions of a handful of individuals are unlikely to steer corporate policy, but the trend could signal a looming recruiting pipeline problem if the companies don’t change tack.

Dozens at Facebook Unite to Challenge Its ‘Intolerant’ Liberal Culture

More than 100 Facebook employees have formed a group to promote conservative views and political debate, Kate Conger and Sheera Frenkel report:

Mr. Amerige proposed that Facebook employees debate their political ideas in the new group — one of tens of thousands of internal groups that cover a range of topics — adding that this debate would better equip the company to host a variety of viewpoints on its platform.

“We are entrusted by a great part of the world to be impartial and transparent carriers of people’s stories, ideas and commentary,” Mr. Amerige wrote. “Congress doesn’t think we can do this. The president doesn’t think we can do this. And like them or not, we deserve that criticism.”

Social media firms manage expectations for stopping foreign influence campaigns

U.S. tech companies would like you to know that there is only so much they can do with respect to fighting foreign influence campaigns, Ali Breland reports. But I liked this note of optimism from researcher Renee DiResta, with respect to the rising cost of influence campaigns:

DiResta said costs are rising for influence campaigns. In 2016, she said, the process was much easier: Russia did very little to cover their tracks and they were able to spread their message through basic, cheap and easy to use bots.

It’s not that simple anymore.

“If you want to get something trending, you need something that evades Twitter bot detection mechanisms,” DiResta said. “To do that you need people typing original things. So it takes a lot more work to evade bot detection now than it did in 2016.”

Twitter suspends more accounts for “engaging in coordinated manipulation”

Twitter suspended more accounts for “coordinated manipulation,” and for the most part they were tweeting leftist anti-Trump stuff, Catherine Hsu reports:

Following last week’s suspension of 284 accounts for “engaging in coordinated manipulation,” Twitter announced today that it’s kicked an additional 486 accounts off the platform for the same reason, bringing the total to 770 accounts.

While many of the accounts removed last week appeared to originate from Iran, Twitter said this time that about 100 of the latest batch to be suspended claimed to be in the United States. Many of these were less than a year old and shared “divisive commentary.” These 100 accounts tweeted a total of 867 times and had 1,268 followers between them.

Microsoft’s president explains how Gab’s shutdown notice went from customer support to his desk

Microsoft nearly shut down hate-oriented Twitter alternative Gab this month when it found genocidal anti-Semitic posts on the network. On The Vergecast this week, Microsoft’s president, Brad Smith, revealed that the posts were found by Microsoft customer support agents, and that top executives learned about the takedown notice they sent Gab after reading about it on The Verge and other tech sites.

“While we were sleeping on the West Coast of the United States, an employee in India had sort of turned out an email that went to Gab that said, ‘We’ve spotted some content, and under our policy, you have to address it in 48 hours or you risk being cut off.’”

Smith said executives reviewed the decision after being contacted by journalists, including The Verge. But ultimately, he said, there was little to review: it was “a relatively straightforward judgment call because the content was so extreme.”

Elsewhere

Yahoo, Bucking Industry, Scans Emails for Data to Sell Advertisers

If you work at Facebook, and are constantly explaining to your exasperated friends and family members that you are not selling their data, here’s a story to make you pound your desk. It turns out Yahoo — now a division of Oath! — has been scanning Yahoo Mail inboxes for years and then selling that data to advertisers. Doug MacMillan, Sarah Krouse, and Keach Hagey have the scoop:

Yahoo’s owner, the Oath unit of Verizon Communications Inc., has been pitching a service to advertisers that analyzes more than 200 million Yahoo Mail inboxes and the rich user data they contain, searching for clues about what products those users might buy, said people who have attended Oath’s presentations as well as current and former employees of the company.

Oath said the practice extends to AOL Mail, which it also owns. Together, they constitute the only major U.S. email provider that scans user inboxes for marketing purposes.

The Internet of Garbage by Sarah Jeong

The Verge republished Sarah Jeong’s book The Internet of Garbage. First published in 2015, the book examines how online harassment works, and why the structure of the internet has enabled it to flourish.

YouTube, Twitch creators call out Logan Paul-KSI fight copyright strikes

YouTubers punching one another in the face is catnip to the large swath of YouTubers who make videos about YouTube drama. After this weekend’s Logan Paul-KSI face-punching contest, creators raced to upload their reaction videos, complete with footage of the event. Others streamed the fight in its entirety on Twitch, in blatant circumvention of the rules of the site. KSI’s team sicced a bunch of copyright goons on them all, and now everyone is mad, Julia Alexander reports:

Most people in the community can agree that streaming a fight, one which KSI’s team invested a fair bit of money into along with sponsors, isn’t cool. They can also agree, however, that not allowing people to make commentary videos about the fight is equally upsetting.

Facebook vows to run on 100 percent renewable energy by 2020

Remove “climate change” from the list of things that you blame Facebook for. Shannon Liao reports:

Facebook announced today that it’s reducing its greenhouse gas emissions by 75 percent and will make its operations run on 100 percent renewable energy by the end of 2020. These efforts are part of its pledge to combat climate change.

The company has signed contracts for more than 3 gigawatts of new solar and wind energy since it began such efforts in 2013, it writes in a blog post. These wind and solar projects are built on the same grid as Facebook data centers, including centers in Oregon, Virginia, New Mexico, and Sweden.

Tired of Swiping Right, Some Singles Try Slow Dating

Some people don’t like Tinder, Kari Paul reports:

Hinge saw its user base grow by more than 400% after redesigning the platform in 2017 to eliminate its swiping feature after learning 80% of its users had never found a long-term relationship on a dating app, according to Justin McLeod, Hinge’s CEO and co-founder. The changes were meant to foster more selectivity. Heterosexual men swipe right or “like” 70% of women on swiping apps but “like” just 20% on Hinge, he says.

“Some apps flatten people and objectify them, making them into a little card you can swipe through,” Mr. McLeod says. “Packaging people like fast-food items makes you forget there is a human on the other side of the app.”

OpenAI’s Dota 2 defeat is still a win for artificial intelligence

This story by my colleague James Vincent has only the faintest connection to social media, but it’s still my favorite piece of the day. It chronicles how a cutting-edge team of bots lost to a team of professional humans at a tournament of Dota 2, an insanely complicated video game known as a MOBA. If, like me, you’re subjected to daily messages from the social platforms about how AI will fix everything, this piece expertly illuminates the benefits and limits of state-of-the-art AI techniques such as reinforcement learning. And it’s also a very breezy read about humans eking out a victory over machines.

Launches

Facebook expands its Express Wi-Fi program for developing markets via hardware partnerships

Express Wi-Fi is a Facebook connectivity initiative that encourages local businesses to offer both free and paid high-speed internet access portals provided by Facebook’s partner carriers and ISPs. Today it announced a set of partnerships that will enable manufacturers to make hardware that is certified as compatible with Express Wi-Fi, Sarah Perez reports. The long-term game appears to be to put Facebook on the free tier of services offered by these businesses.

Oculus offers another classroom VR option with new pilot program

Oculus is launching “educational pilot programs” in Taiwan, Japan and Seattle, Mariella Moon reports, seeding Rift and Go headsets to libraries, museums, and schools.

Google Gboard can use selfies to create a ‘Mini’ version of you

Google rolls out the “Mini,” its answer to Bitmoji, and they are … horrible! Ugly, washed-out, South Park-style avatars that, if nothing else, suggest the technology behind Snap’s Bitmoji product is more sophisticated than it appears.

Takes

Google is on the verge of making a huge mistake with China

“Chen Guangcheng, an activist who has been blind since childhood, was detained in 2005 for exposing forced sterilization of women to meet China’s one-child policy,” says the Post. “In 2012, he escaped from house arrest and was subsequently granted asylum in the United States.” Chen says Google should back off of its plans for China:

There is simply no way that Google can feign a neutral stance while developing a search platform designed to serve not the general public but a violent, coercive, authoritarian regime. Censorship, information blackouts and outright propaganda are prime tools in the CCP’s arsenal of control, as evidenced in incidents large and small, recent and historic. The ongoing crackdown on lawyers and human rights activists and the outrageous campaign against ethnic minorities in Xinjiang province — all well documented by numerous media outlets, NGOs, the United Nations and the U.S. government — are but two examples in a trove of evidence demonstrating the CCP’s intentions.

And then came news about Google’s work on a censored search engine (code-named “Dragonfly”). After my initial shock wore off, I found myself wondering what had occurred to cause the company to shed its defining principle in such a blatant fashion. Does Google really want to become a tool of the dictatorial communist regime? What about the millions of disappointed Chinese fans? Without their support, and without the company’s moral bearings, how would Google survive in China? Google — and all foreign companies — should remember: The vessel containing a dictatorship’s desire is boundless, never filled, never satisfied. You give an inch, and they will take a mile in irrational demands.

Don’t Pretend Facebook and Twitter’s CEOs Can’t Fix This Mess

Former Reddit CEO Ellen Pao says de-platforming makes social networks better:

Companies can address harassment without hurting their platforms. Taking down shitty content works, and research supports it. When we took down unauthorized nude photos and revenge porn, nothing bad happened. The site continued to function, and all the other major sites followed. A few months later, we banned the five most harassing subreddits. And we saw right away that if we kept taking down the replacement sites, they would eventually disappear. University researchers who studied the impact of the ban report that it successfully shut down the content and changed bad behavior over time on the site—without making other sites worse.

Facebook and Google’s plan for a new federal privacy law is really about protecting themselves

Will Oremus casts a skeptical eye on tech giants’ plans to write a federal data privacy law to supersede California’s:

Voluntary standards aren’t the solution; they’re the problem with companies that largely have been allowed to regulate themselves since their inception. And if that’s what the technology industry has in mind, then what it’s really pushing for isn’t a privacy law at all. It’s more like a law against privacy laws—a bulwark against state legislation, or future federal legislation, that carries serious penalties for violations.

And finally …

Mahmoud Ahmadinejad is many things: corrupt former president of Iran, conspiracy theorist, and nuclear weapons enthusiast. But it’s not just nukes he loves — he also loves Twitter, and he released the following tweet today to the global town square:

Autocrats: they’re just like us!

Talk to me

Send me tips, questions, comments, and requests for verification: casey@theverge.com.

Twitter’s fear of making hard decisions is killing it

Why does Twitter move so slowly?

It’s a question that has been on my mind since Monday, as we watched the company belatedly tiptoe into enforcement of its guidelines against inciting violence. It came up again Thursday, as we saw the company move — a staggering six years after first promising to do so — to significantly restrict the capabilities of third-party apps.

Nothing defines Twitter so thoroughly as its bias toward inaction. In February, Bloomberg’s Selina Wang diagnosed the problem in an article titled “Why Twitter can’t pull the trigger on new products.” Largely, Wang’s reporting laid the blame at the feet of CEO Jack Dorsey.

Dorsey’s leadership style fosters caution, according to about a dozen people who’ve worked with him. He encourages debate among his employees and waits — and waits — for a consensus to emerge. As a result, ideas are often debated “ad nauseum” and fail to come to fruition. “They need leadership that can make tough decisions and keep the ball rolling,” says a former employee who left last year. “There are a lot of times when Jack will instead wring his hands and punt on a decision that needs to be made quickly.”

This view closely tracks my own discussions with current and former employees. They’ve described for me the regular hack weeks that take place at Twitter, in which employees mock up a variety of useful new features, almost none of which ever ship in the core product.

It’s true that Twitter has fewer employees, and less money, than its rivals at Facebook. And even its recent glacial pace of development is arguably faster than it was under previous CEO Dick Costolo.

But time and again, Twitter’s move-slow-and-apologize ethos gets it into trouble. Today’s action against third-party apps illustrates the problem.

Once upon a time, Twitter let people build whatever kind of Twitter apps they wanted to. For a brief, shining time, Twitter was a design playground. Developers making Twitter apps invented new features, such as “pull to refresh” and account muting, that became industry standards.

Then, in 2012, Twitter reversed course. Under Costolo, the company decided that its future lay in Facebook-style feed advertising, which meant consolidating everything into a single native app it could control.

But rather than kill off third-party apps for good, it introduced a series of half-measures designed to bleed them out slowly: denying them new features, for example, or capping the number of users they could acquire by limiting their API tokens. While this spared some amount of yelling in the short term, the move — which was still hugely unpopular with a vocal segment of the user base — needlessly prolonged the agony.

Even after today’s action, third-party apps aren’t dead. They can no longer send push notifications, and their timelines will no longer refresh automatically — making them useless to someone like me, a Tweetbot user who relies on a waterfall of tweets cascading down my screen each day to stay in touch with the day’s news. (As of today I am, God help me, a Tweetdeck user.)

The fate of the third-party apps is a relatively small concern for Twitter; the overwhelming majority of its user base uses the flagship app. They are going to die eventually, but Twitter refuses to kill them off once and for all. It’s a prime example of how the company, when presented with an obvious decision, goes out of its way to avoid making it.

That’s why I’ve been baffled this week by Dorsey’s media tour, in which he has sought to explain the company’s ambivalent approach to disciplining Alex Jones. Over the past week, Twitter found that Jones violated its rules eight times, then gave him a one-week suspension in which he could still read tweets and send direct messages.

Here is how Dorsey described that process to The Hill’s Harper Neidig:

“We’re always trying to cultivate more of a learning mindset and help guide people back towards healthier behaviors and healthier public conversation.”

“We also think it’s important to clarify what our principles are, which we haven’t done a great job of in the past, and we need to take a step back and make sure that we are clearly articulating what those mean and what our objectives are.”

Again, presented with an obvious decision, Twitter declines to make it. Then, even more surprisingly, it suggests the problem is that it hasn’t clearly articulated its own policies — when, in fact, it articulated perfectly clear policies online, to the point that CNN’s Oliver Darcy was able to use them to identify the very instances of rule-breaking that eventually got Jones into trouble.

On Wednesday, Jack Dorsey told the Washington Post that he is “rethinking the core of how Twitter works.” And yet the company’s history suggests that it hasn’t failed for lack of thinking. The problem, rather, is that thinking has so often served as a substitute for action.

Democracy

Google Employees Protest Secret Work on Censored Search Engine for China

Kate Conger and Daisuke Wakabayashi get their hands on a letter signed by 1,400 Googlers protesting the development of a censored search engine and news app. This is shaping up to be a major conflict. Google won’t comment — censorship is considered a state secret in China, so discussing it could scuttle the company’s plans — but as a result, these employees get to define the narrative with no pushback from Google itself.

“We urgently need more transparency, a seat at the table, and a commitment to clear and open processes: Google employees need to know what we’re building,” the letter said.

The letter also called on Google to allow employees to participate in ethical reviews of the company’s products, to appoint external representatives to ensure transparency and to publish an ethical assessment of controversial projects. The document referred to the situation as a “code yellow,” a process used in engineering to address critical problems that impact several teams.

Google Censorship Plan Is “Not Right” and “Stupid,” Says Former Google Head of Free Expression

Lokman Tsui, Google’s head of free expression for Asia and the Pacific from 2011 to 2014, takes a look at Google’s plans for a censored search engine. “This is just a really bad idea, a stupid, stupid move,” he tells Ryan Gallagher. “I feel compelled to speak out and say that this is not right.” Tsui goes on:

“In these past few years things have been deteriorating so badly in China – you cannot be there without compromising yourself,” Tsui said. Google launching a censored search engine in the country “would be a moral victory for Beijing,” he added. “Beijing has nothing to lose. So if Google wants to go back, it would be under the terms and conditions that Beijing would lay out for them. I can’t see how Google would be able to negotiate any kind of a deal that would be positive. I can’t see a way to operate Google search in China without violating widely held international human rights standards.”

Google Staff Tell Bosses China Censorship is “Moral and Ethical” Crisis

Gallagher also reports on an essay written by former Googler Brandon Downey, who worked on the original censored Google search engine:

“I want to say I’m sorry for helping to do this,” Downey wrote. “I don’t know how much this contributed to strengthening political support for the censorship regime in [China], but it was wrong. It did nothing but benefit me and my career, and so it fits the classic definition of morally heedless behavior: I got things and in return it probably made some other people’s life worse.”

“We have a responsibility to the world our technology enables,” Downey adds. “If we build a tool and give it to people who are hurting other people with it, it is our job to try to stop it, or at least, not help it. Technology can of course be a force for good, but it’s not a magic bullet – it’s more like a laser and it’s up to us what we focus it on. What we can’t do is just collaborate, and assume it will have a happy ending.”

Update on Myanmar

Late on Wednesday, following a dire report on its handling of ethnic conflict in Myanmar, Facebook posted an “update” on its work there. Players of talking-points bingo will find “we were slow to act,” “we’re hiring more people,” and “we have more work to do” all represented. But here’s something I didn’t know about Facebook’s problems in Myanmar — they’re exacerbated by a font display issue:

We’re also working to make it easier for people to report content in the first place. One of the biggest problems we face is the way text is displayed in Myanmar. Unicode is the global industry standard to encode and display fonts, including for Burmese and other local Myanmar languages. However, over 90% of phones in Myanmar use Zawgyi, which is only used to display Burmese. This means that someone with a Zawgyi phone can’t read websites, posts or Facebook Help Center instructions written in Unicode properly. Myanmar is switching to Unicode, and we’re helping by removing Zawgyi as an option for new Facebook users and improving font converters for existing ones. This will not affect people’s posts but it will standardize how they see buttons, Help Center instructions and reporting tools in the Facebook app.

New WordPress policy allows it to shut down blogs of Sandy Hook deniers

Amid criticism that the company was hosting several blogs that harassed the victims of the Sandy Hook shooting, WordPress parent Automattic changed company policy on Thursday and began shutting down those blogs. Sarah Perez reports that WordPress policy now prohibits “malicious publication of unauthorized, identifying images of minors.”

WordPress policies were designed to be more resistant to the strategic use of copyright claims as a means of getting content removed. Longtime web veterans know they were written at a time when large corporations would wield copyright law – like the DMCA – as a weapon to force platforms to take down content about their companies that they deemed unfavorable.

But in recent years, the permissiveness of these policies has also created loopholes for those who spread disinformation, incite hatred and violence, and post abusive and offensive content to the web.

Austin pirate radio station that airs Alex Jones faces $15k fine

The latest entity to de-platform Alex Jones — besides WordPress — is the Federal Communications Commission, reports Gary Dinges. (It’s not clear what connection, if any, this station actually has to Jones.)

An Austin pirate radio station that airs controversial host Alex Jones has been knocked off the city’s airwaves – at least temporarily – and the Federal Communications Commission has levied a $15,000 penalty that the station’s operators are refusing to pay.

A lawsuit filed this week in U.S. District Court in Austin accuses Liberty Radio of operating at 90.1 FM without federal consent since at least 2013. Religious programming was airing on that frequency Wednesday, in place of Liberty Radio.

Why Facebook Enlisted This Research Lab to Track Its Trolls

Issie Lapowsky profiles the Atlantic Council’s Digital Forensics Research Lab, which is tasked with explaining the origins of misinformation online. Facebook is leaning heavily on the group as it works to understand the influence campaign that is now unfolding on the service:

But for Facebook, giving money away is the easy part. The challenge now is figuring out how best to leverage this new partnership. Facebook is a $500 billion tech juggernaut with 30,000 employees in offices around the world; it’s hard to imagine what a 14-person team at a non-profit could tell them that they don’t already know. But Facebook’s security team and DFRLab staff swap tips daily through a shared Slack channel, and Harbath says that Brookie’s team has already made some valuable discoveries.

Elsewhere

How Snap Is Becoming Twitter

Believe it or not, “Snap is the new Twitter” used to be considered something of a hot take. But the numbers don’t lie: it’s another company that vacillates between slow growth and outright decline, Tom Dotan reports:

For now, Snap’s ad revenue is growing quickly, as advertisers flock to what remains a relatively new platform. In the June quarter, the company’s revenue of $262 million was up 44% over the same period last year, blowing past analyst projections. But if it follows Twitter, Snap’s ad revenue growth will slow sharply next year.

A Mark Zuckerberg-backed nonprofit is helping separated migrant families

Silicon Valley immigration advocacy group FWD.us, which seems to have dramatically underperformed expectations, recently invested millions of dollars in reuniting separated families of migrants, Heather Kelly reports. Good for FWD.

The group spent two weeks in July in New Mexico, Texas, and Arizona booking flights for reunited parents and their children who were just out of federal custody. The multi-million dollar effort, called Flights for Families, required long hours on the phone booking some 1,300 tickets and attending to countless other details, such as lining up prepaid cell phones, connecting families with lawyers, and keeping the kids entertained.

What Am I Worth to Advertisers? My Obsessive Quest to Put a Price on My Attention

Bryan Menegus was served 319 online ads one Tuesday in July, costing advertisers about $2.69, he estimates.

Launches

Facebook cracks down on opioid dealers after years of neglect

Facebook is now suggesting resources to people who search for fentanyl and other opioids, as well as removing more drug dealers from search results, Josh Constine reports.

Takes

Facebook’s failure in Myanmar is the work of a blundering toddler

Olivia Solon is not impressed with Facebook’s recent statements about its work in Myanmar:

When the Guardian asked how the notoriously metrics-focused company would measure the success of the policy, the answer was characteristically mealy-mouthed: “Our goal is to get better at identifying and removing abuses of our platform that spread hate and can contribute to offline violence or harm, so people in Myanmar can safely enjoy the benefits of connectivity.”

When pushed again to specify how it would measure this, a spokeswoman said “that’s difficult”.

And finally …

An Ad Network That Helps Fake News Sites Earn Money Is Now Asking Users To Report Fake News

Revcontent makes one of those awful chum boxes that attach to the bottom of more reputable news stories, enticing you to learn about one weird trick to cure belly fat, or 12 former child stars who now look terrible, or whatever. After BuzzFeed’s Craig Silverman asked them about various fake news stories contained in their chum boxes, Revcontent grudgingly removed a few of them — but not before denouncing BuzzFeed itself as fake news.

An ad network launched a new initiative to “continue the fight against fake news” at the same time it was working with 21 websites that have published fake news stories, according to a review conducted by BuzzFeed News.

When contacted for comment, Revcontent subsequently removed four of the sites from its network, and in a statement suggested that a previous BuzzFeed News story about ad networks on fake news sites could itself be considered “fake news.”

The story above is from 2017, and Revcontent lets me know that it is now working with an international fact-checking network. Progress!

Talk to me

Send me tips, questions, comments: casey@theverge.com.

Update, 10:37 a.m.: This story has been updated to note that Revcontent has begun working with a fact checker.

Twitter finally draws a line on extremism

On Friday, I wrote about Twitter’s seeming paralysis when it came to enforcing its platform rules. What, exactly, was going on over there? Late Friday evening, we got an answer of sorts. The company invited Cecilia Kang and Kate Conger of The New York Times to sit in on a meeting in which CEO Jack Dorsey and 18 of his colleagues debated safety policies. The meeting was rather… inconclusive, they report:

For about an hour, the group tried to get a handle on what constituted dehumanizing speech. At one point, Mr. Dorsey wondered if there was a technology solution. There was no agreement on an answer.

Elsewhere in the piece, executives sound other notes we’ve heard before from this and other platforms: Free speech is valuable. Moderation issues are difficult. User safety is important. Ultimately, Twitter seemed to double down on delayed action, agreeing “to draft a policy about dehumanizing speech and open it to the public for their comments.” (Is Twitter really lacking for public speech on this subject?)

Of course, policies are only meaningful insofar as they are enforced. Dorsey’s stated rationale for keeping Alex Jones and Infowars on Twitter is that Jones had not violated the site’s rules. CNN’s Oliver Darcy demolished that rationale with a single Twitter search.

Late Friday, Twitter copped to it, saying Jones had in fact violated its rules at least seven times. Five were posted before Twitter adopted more stringent behavior guidelines, but two of them were posted “recently enough that Twitter could cite them in the future to take additional punitive action against Jones’ accounts,” Darcy reported.

A seven-strikes-and-you’re-still-in approach to dehumanizing speech would seem to encourage more of it. Twitter’s shifting explanations, coupled with theatrical “transparency,” inspire little confidence. The company declines to enforce its rules, then invites journalists in to watch it agonize over the bind it’s gotten itself into. It feels absurd.

Surprisingly, the company later did draw a line, though not the one we expected. Ryan Mac and Blake Montgomery broke the news that Twitter had suspended several accounts associated with the Proud Boys, a right-wing group whose members attended last year’s Unite the Right rally in Charlottesville, ahead of this year’s gathering.

The group violated Twitter’s policies against “violent extremist groups,” Twitter said. BuzzFeed reported that members of the Proud Boys have attended several rallies that have turned violent. That included a recent one in Portland. So far, Facebook hasn’t followed suit — despite the fact that the Proud Boys do their primary recruiting there, according to this helpful piece from Taylor Hatmaker.

Meanwhile, the mother of a six-year-old Sandy Hook shooting victim says Alex Jones and Infowars continue to inspire threats against her. “If there are clear threatening actions and harassment that continues from Jones and Infowars, and then Twitter doesn’t take action, well yeah, people need to understand that there are consequences for actions as well as inactions,” Nicole Hockley, who is suing Jones, told Remy Smidt.

The consequences of inaction often seem to be the thing that Twitter understands the least.

Correction, August 14th: This article has been updated to reflect the fact that the Proud Boys were banned from Twitter for violent extremism, not hate speech. It has been further updated to reflect that while members attended the Unite the Right rally, the group itself did not have an official presence.

Democracy

Can Society Scale?

Jonah Engel Bromwich examines the story of a popular Facebook group, known as New Urbanist Memes for Transit Oriented Teens, which fractured into more than 100 splinter organizations (Social Urbanist Memes for Anarchist Communist Teens, Amchad Memes for American Rail Apologist Teens, etc.) amid political rancor. (The title question is not really answered to my satisfaction!)

“When everything was smaller, we all loved it more,” she said. Though she could not define an absolute threshold, she said that once a group gets beyond 1,000, 2,000 or even 5,000 members, “things start getting chaotic.”

An 11-Year-Old Changed The Results Of Florida’s Presidential Vote At A Hacker Convention. Discuss.

Earlier this month I told you about the children who would attempt to hack our elections for good. Kevin Collier attended the event in Las Vegas this weekend. Gulp:

In a room set aside for kid hackers, an 11-year-old girl hacked a replica of the Florida secretary of state’s website within 10 minutes — and changed the results.

Russian Hackers Targeted Swedish News Sites In 2016, State Department Cable Says

It wasn’t just the United States that Russia went after in 2016. According to a State Department cable, the Swedish attack was part of a Russian campaign to sow disinformation about NATO, Kevin Collier and Jason Leopold report:

Sent Oct. 19, 2016, primarily to US ambassadors in Europe, it detailed US intelligence suspicions about Russian meddling in the US presidential election.

It also warned that Russia was engaged in a widespread campaign to destabilize NATO alliances that included not only a disinformation campaign but also crippling cyberattacks that knocked several of Sweden’s largest news organizations offline.

‘It’s our time to serve the Motherland’: How Russia’s war in Georgia sparked Moscow’s modern-day recruitment of criminal hackers

Meduza looks at how Russia’s 2008 war in Georgia led it to recruit hackers who would eventually attack the Democratic National Committee during the 2016 US elections.

Ruslan Stoyanov, the former head of Kaspersky Lab’s investigations department who’s worked extensively with the FSB, has warned openly that Russia is flirting with disaster by cooperating so closely with criminal hackers. “There’s an enormous temptation for the ‘decision makers’ to use Russian cybercrime’s ready-made solutions to influence geopolitics,” Stoyanov wrote in an open letter. He’s been in pretrial detention since January 2017, facing treason charges. “The most terrifying scenario is one where cyber-criminals are granted immunity from retaliation for stealing money in other countries in exchange for [hacked] intelligence. If this happens, a whole class of ‘patriotic thieves’ will emerge, and semi-legal ‘patriot groups’ can invest their stolen capital far more openly in the creation of more sophisticated Trojan programs, and Russia will end up with the most advanced cyber-weapons.”

Meduza’s sources say the Russian authorities have been relying on intelligence gathered by these “patriotic groups” for at least a decade.

Vimeo is the latest platform to remove content from InfoWars conspiracy theorist Alex Jones

Jones had turned to Vimeo after getting kicked off YouTube, uploading more than 50 videos on Thursday and Friday.

Elsewhere

Online activists hit hatemongers like Alex Jones where it hurts the most — in the wallet

Margaret Sullivan profiles Sleeping Giants, a San Francisco-based Twitter account that tries to shame advertisers into abandoning controversial programming. This playbook is the new normal, Sullivan writes:

it’s not hard to imagine similar techniques being used in ways that hurt media organizations or personalities who have done nothing worse than be provocative, as was the case with Gawker.

In an era where bad faith rules the day in so many realms, the techniques used by Sleeping Giants are both powerful and potentially dangerous.

Facebook’s message to media: “We are not interested in talking to you about your traffic…That is the old world and there is no going back”

A Kinsley gaffe occurs when a politician tells a truth she wasn’t meant to say. Campbell Brown, Facebook’s head of news partnerships, may or may not have done that recently in Australia — she denies saying the exact quotes attributed to her here — but the message was clear enough. Facebook really isn’t turning on its traffic firehose again.

Facebook buys Vidpresso’s team and tech to make video interactive

Vidpresso “works with TV broadcasters and content publishers to make their online videos more interactive with on-screen social media polling and comments, graphics, and live broadcasting integrated with Facebook, YouTube, Periscope, and more,” Josh Constine reports.

Back-to-school shopping for kids involves Amazon wishlists and Snapchat filters

Nearly half of 10- to 12-year-olds have their own smartphones, and marketers are finding them at ever-younger ages:

“Snapchat and YouTube have become a way for brands to market right to tweens — in fact, it’s one of the only ways to get to them directly,” said Gregg L. Witt, executive vice president of youth marketing for Motivate, an advertising firm in San Diego. “If you’re trying to target a specific demographic, TV no longer works. You’re going to mobile, digital, social media.”

Launches

Twitter Lite in the Google Play Store: now available in 45+ countries

Twitter Lite is now available in the Google Play Store in more than 45 countries around the world. It’s everything you love about Twitter, except it minimizes Nazis. I’m sorry, did I say Nazis? I meant data usage.

Takes

Twitter and Facebook Are Platforms, Not Publishers

Jeff Jarvis, whose work is funded in part by Facebook grants, says recent media coverage of social networks reflects an incipient “moral panic” and that a small number of malignant trolls on the platforms simply represent “the messy sound of democracy.” Jarvis has long been useful to the platforms because he is a former journalist (and TV Guide Cheers ‘n’ Jeers columnist) who tends to blame the media first. Anyway, here is a take that takes them off the hook so that the media can take the blame for society’s ills:

Those of us in media must acknowledge our responsibility for the messes we’ve made. Long before the net, media played a key role in polarizing the nation into red versus blue, black versus white, 99 percent versus 1 percent. CNN earned its money in conflict rather than resolution. Fox News has done more damage to American democracy than the internet. It was the media’s primary business model, built on volume and attention, that led to the clickbait that is the ruin of the net. Media and platforms as well as advertisers need to work together to build new business models based on value, on relationships, on accomplishment, on quality, on openness.

And finally…

Kevin Roose had me at “Mark Zuckerberg protest song.” The video is helpfully captioned so you don’t even have to listen to the words or the music.

Talk to me

Send me tips, comments, questions, or dehumanizing speech policies: casey@theverge.com

Twitter won’t punish Alex Jones for his past Twitter behavior

Last night, amid growing pressure to address Alex Jones’ presence on Twitter, CEO Jack Dorsey tried to explain the company’s position. In a series of five tweets, he made the following case:

  • Jones hasn’t violated Twitter’s rules.
  • Twitter won’t ban someone just because other platforms did.
  • Journalists should “document, validate, and refute” the “unsubstantiated rumors” that “accounts like Jones’” spread.

He then linked to a somewhat confounding new blog post, “The Twitter Rules: A Living Document,” that does little more than say that the Twitter rules are a living document. It was confounding in that no one had accused the Twitter rules of being a dead document, only a weak and erratically enforced one. In any case, it seemed to bear little relation to Dorsey’s tweets, even though they were published simultaneously.

Publicly, the tweet storm generated more than 20,000 replies in its first 24 hours. It seemed like a lot for Twitter to say on a subject where it had taken no action, especially given that inaction is the company’s default operating mode on policy issues. It generated a flood of negative commentary from journalists, who took exception to the idea that they should serve in an unpaid role as Twitter’s unofficial moderators; and from current and former employees, who were put off by Dorsey’s muddy reasoning.

Emily Horne, until recently Twitter’s head of policy communications, called out Dorsey for appearing to undermine his own communications team. “Truth is we’ve been terrible at explaining our decisions in the past. We’re fixing that,” Dorsey tweeted. To which Horne responded: “Please don’t blame the current state of play on communications. These decisions aren’t easy, but they aren’t comms calls and it’s unhelpful to denigrate your colleagues whose credibility will help explain them.”

Dorsey replied that he wasn’t blaming the team, although he has before. In November, Twitter was dealing with a series of tweets containing graphic anti-Muslim videos. The tweets were posted by Britain First, a far-right fringe group, and were later retweeted by President Donald Trump. Twitter decided to let the tweets stand because, while they may have violated its rules, they were newsworthy.

A day later, the company reversed course, saying the tweets were allowed because — like Jones — they didn’t violate its rules after all. “We mistakenly pointed to the wrong reason we didn’t take action on the videos from earlier this week,” Dorsey tweeted.

In both cases, Dorsey framed Twitter’s struggles to address hate speech on its platform not as issues of policy but of communication. And yet, as one former Twitter employee noted to me yesterday, Dorsey’s tweetstorm itself communicated no real information about Jones or company policy. That Jones hadn’t been found in violation of its rules was apparent from the fact that his account remained active.

At the same time, the thread introduced at least three new problems. One, by having its CEO discuss individual accounts publicly, Twitter encouraged the idea that account banning is subject to a single person’s whims. Two, it publicly undermined Twitter’s communications team for the second time in a year. Three, it inexplicably passed the buck for enforcing its policies to journalists.

None of that came up Wednesday afternoon, when Dorsey — turning his attention to the conservative fantasy that Republicans are being “shadow banned” from the service — went on Sean Hannity’s radio show. As Dorsey calmly explained timeline ranking, Hannity praised Dorsey lavishly for his inaction over Jones. No one learned anything.

For that we had to wait until Wednesday afternoon, when Charlie Warzel posted an internal memo from Twitter safety chief Del Harvey. (Harvey later tweeted it herself.) Harvey was also the author of the “living document” blog post, but the memo, despite being of basically equal length, contained vastly more information.

The reason Jones hasn’t been banned, she said, is that while his past actions violate Twitter’s current rules, they didn’t violate its rules at the time. And so he can stay, assuming he doesn’t violate the living document. Harvey went on to say that further changes to the rules could affect Jones, including a new policy about speech that dehumanizes others.

Regarding what she called “the dehumanization policy,” Harvey said Twitter would review it this week; and when it came to bad behavior away from Twitter, the company has “a goal of having a recommendation for a path forward for staff review by mid-September” — an almost comically non-committal commitment. The downside of Twitter’s commitment to transparency is that sometimes, you can see right through them.

Meanwhile, Apple said it hadn’t banned the Infowars app because it hadn’t caught any bad behavior on its live streams yet. It was the No. 1 trending app in the Google Play Store.

Democracy

With Alex Jones, Facebook’s Worst Demons Abroad Begin to Come Home

Max Fisher examines the Jones case in relation to Amith Weerasinghe, a Sri Lankan extremist who used Facebook to promote anti-Muslim views; and Ashin Wirathu, whose hate speech inspired riots in Myanmar in 2014. “Developing countries’ experiences with Facebook suggest that the company, however noble its intent, has set in motion a series of problems we are only beginning to understand and that the company has proved unable or unwilling to fully address,” he writes. Fisher continues:

The platform has grown so powerful, so quickly, that we are still struggling to understand its influence. Social scientists regularly discover new ways that Facebook alters the societies where it operates: a link to hate crimes, a rise in extremism, a distortion of social norms.

Inside Google’s Effort to Develop a Censored Search Engine in China

Ryan Gallagher, who has done an outstanding job unearthing information about Google’s secret China plans, finds that the company is using a website it owns called 265.com to gather information about which terms it would have to censor if it were granted a license to get back into China:

It appears that Google has used 265.com as a de facto honeypot for market research, storing information about Chinese users’ searches before sending them along to Baidu. Google’s use of 265.com offers an insight into the mechanics behind its planned Chinese censored search platform, code-named Dragonfly, which the company has been preparing since spring 2017.

After gathering sample queries from 265.com, Google engineers used them to review lists of websites that people would see in response to their searches. The Dragonfly developers used a tool they called “BeaconTower” to check whether the websites were blocked by the Great Firewall. They compiled a list of thousands of websites that were banned, and then integrated this information into a censored version of Google’s search engine so that it would automatically manipulate Google results, purging links to websites prohibited in China from the first page shown to users.
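Mechanically, the pipeline Gallagher describes is straightforward: collect sample queries, check candidate result sites against a list of blocked domains, and purge banned links before results are shown. Here is a hypothetical sketch of that final filtering step — none of these names or domains are Google's, and the real Dragonfly tooling (like “BeaconTower”) is not public:

```python
# Hypothetical sketch of blocklist-based search-result filtering,
# as described in the Dragonfly reporting. Names and domains are
# illustrative only.

BANNED_DOMAINS = {"wikipedia.org", "nytimes.com"}  # stand-in blocklist

def domain_of(url: str) -> str:
    # Crude domain extraction, sufficient for the sketch.
    return url.split("//", 1)[-1].split("/", 1)[0].removeprefix("www.")

def censor_results(results: list[str]) -> list[str]:
    """Drop any result whose domain appears on the banned list."""
    return [u for u in results if domain_of(u) not in BANNED_DOMAINS]
```

The reporting suggests the blocklist itself was built by probing thousands of sites from inside the Great Firewall and recording which ones failed to load.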

YouTube Is Now Fact-Checking Videos About Climate Change

Last month I wrote about YouTube’s “information cues” — the cards that appear underneath some conspiracy content to direct users to more accurate information from Wikipedia and other sources. Zahra Hirji reports that climate change videos are the latest category of YouTube videos to get the cues. Conspiracists seem to be taking it well:

“Despite claiming to be a public forum and a platform open to all, YouTube is clearly a left-wing organization,” Craig Strazzeri, PragerU’s chief marketing officer, said by email. “This is just another mistake in a long line of giant missteps that erodes America’s trust in Big Tech, much like what has already happened with the mainstream news media.”

Subpoena for app called ‘Discord’ could unmask identities of Charlottesville white supremacists

A magistrate has ruled that gaming chat app Discord must disclose the identities of the people who used the service to organize last year’s deadly Unite the Right rally in Charlottesville. Meagan Flynn reports:

In his 28-page ruling, Spero appeared to acknowledge why Discord might have been so appealing to members of the so-called alt-right in the first place: “It is clear that many members of the ‘alt-right’ feel free to speak online in part because of their ability to hide behind an anonymous username,” he wrote.

Discord’s popularity among members of the alt-right ahead of the Charlottesville rally surfaced largely after hundreds of their messages were leaked to a media collective known as “Unicorn Riot.”

Elsewhere

Amazon seems to have quietly stopped recommending Alex Jones products

The campaign of public pressure against Alex Jones has scored another win:

Amazon wasn’t just selling the products – it was sort of recommending them. A Politico story from this morning noted that many Jones products had the “Amazon’s Choice” logo on them, which is an internal stamp of approval for certain items on the platform. Now, it seems Amazon has taken the “Choice” designation away from Jones’s products.

Who are QAnon supporters? The QAnon subreddit, analyzed with data.

The much-discussed QAnon conspiracy is the product of relatively few Redditors, according to this analysis by Alvin Chang:

About 200 users account for a quarter of the forum’s comments. These people are clearly conspiracy theorists who believe they are investigators unearthing the truth, and they spend almost all their time on Reddit investigating these theories.

Another 700 users account for the next quarter of comments. The user we followed at the top of this story is among these people. They are active in /r/greatawakening but also spend time on other subreddits. Nearly everyone else in the subreddit — the 11,000 commenters and 42,000 lurkers — are just along for the ride.
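Chang's headline numbers are a simple concentration measure: sort commenters by comment count and see how few accounts it takes to reach a given share of all comments. A sketch of that calculation on made-up data (not Chang's actual dataset or method):

```python
from collections import Counter

def users_for_share(comment_authors: list[str], share: float) -> int:
    """How many top commenters account for `share` of all comments?

    `comment_authors` is one entry per comment, naming its author.
    """
    counts = Counter(comment_authors)
    target = share * len(comment_authors)
    running, users = 0, 0
    for _, n in counts.most_common():  # heaviest commenters first
        running += n
        users += 1
        if running >= target:
            return users
    return users
```

Run over a real comment dump, this is the shape of the finding that ~200 users produced a quarter of /r/greatawakening's comments.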

Facebook’s Teen App Used A “Psychological Trick” To Attract High School Downloads

Ryan Mac gets his hands on a growth memo from TBH, a polling app that was briefly popular last year. The tactics it describes were so successful that Facebook shut TBH down less than a year after acquiring it and moved its employees to work on other things.

For women on Twitch, appearing single has become a minefield

Women broadcasters on Twitch face a hurdle their male counterparts don’t, my colleague Patricia Hernandez reports:

Many women on Twitch say that being on the platform means navigating complicated expectations from viewers, especially when it comes to relationships. Some viewers expect romantic availability from women streamers, or demand to know their relationship status before investing in them. Other times, viewers can cross a line and become possessive or entitled toward the women they watch on Twitch. Streamers, in turn, have to make tough decisions about how they present to their audience, and how much they decide to share when both concealing or disclosing their romantic relationship can come with a cost.

Launches

Magic Leap One Creator Edition preview: a flawed glimpse of mixed reality’s amazing potential

The $2,295 Magic Leap One Creator Edition has arrived, and it looks like a dud.

New facial recognition tool tracks targets across social networks

My colleague Russell Brandon writes about a new tool that takes a picture and crawls social-network accounts to find matches using facial recognition. It’s designed for good, but could be used for ill, he reports:

The end result is a spreadsheet of confirmed accounts for each name, perfect for targeted phishing campaigns or general intelligence gathering. Trustwave’s emphasis is on ethical hacking — using phishing techniques to highlight vulnerabilities that can then be fixed — but there are few restrictions on who can use the program. Social Mapper is licensed as free software, and it’s freely available on GitHub.

Ethical OS Helps Tech Startups Avert Moral Disasters

The Institute of the Future, a Palo Alto-based think tank, and the Tech and Society Solutions Lab, a Pierre Omidyar initiative, teamed up to create an ethics guidebook for tech companies, Arielle Pardis reports. Among other things, the guide encourages companies to examine their risk for disinformation and propaganda.

Takes

Twitter is not your friend. The Sarah Jeong saga shows us why.

Ezra Klein says that Twitter takes in-group conversations, makes them public, draws the outrage of people who lack all relevant context, and makes society worse as a result:

If you’re a conservative, the liberal tweets that get shot into your sightline aren’t the most thoughtful or representative missives; they’re the ones designed to make you think liberals hate you, are idiots, or both. The same is true if you’re a liberal: you see the worst of the right, not the best. And after you’ve seen enough of these kinds of comments from the other side, you begin to think that’s who they are, that you’re getting a true picture of what your opponents are really like, and what they really think of you — but it’s not a true picture, it’s a distortion built to deepen your attachment to your friends, your resentment of your opponents, and your engagement on the platform. And it’s one that plays on our tendencies to read the other side with much less generosity than we read our own side.

The very first tweet was sent in 2006. This is a young medium, and over time, we’ll (hopefully) figure it out — how to interpret it, how to couch it, how to delete old tweets automatically. But for now, the lesson is clear: #NeverTweet.

Shira Ovide wryly notes that Snap is reversing course on many of the things that made its business distinctive. (This is probably a good thing, at least from an investor standpoint.)

Now, though, Snapchat is borrowing liberally from the internet conventions it has scorned. Snapchat is — irony alert — copying Facebook by refashioning its advertising business for companies that want quick payoffs from their ads. It’s tracking people to prove those messages worked. And Snapchat loosened demands for tailor-made video programs, which makes it more like the rest of the web.

And finally …

Facebook Added Balloons and Confetti to Posts About the Earthquake in Indonesia

Let’s check in real quick with the state of our artificial intelligence:

Selamat in Indonesian can be translated as both “safe” or “unhurt,” as well as “congratulations,” depending on the context. Because of this, Facebook’s algorithm misinterpreted comments expressing concern for the safety of people in Indonesia as messages of congratulations, triggering festive animations of balloons and confetti to play whenever someone commented using the word.

“This feature (a text animation triggered by typing ‘congrats’) is widely available on Facebook globally, however we regret that it appeared in this unfortunate context and have since turned off the feature locally,” Lisa Stratton, a Facebook spokesperson, told Motherboard in an email. “Our hearts go out to the people affected by the earthquake.”
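The failure mode here is what happens when a keyword trigger ignores context: a naive matcher fires on any occurrence of a trigger word, even when that word is ambiguous across languages. A toy sketch of the bug — the trigger list is hypothetical, not Facebook's:

```python
# Naive keyword-triggered animation, illustrating the failure mode.
# "selamat" means both "safe/unhurt" and "congratulations" in
# Indonesian, so a context-free trigger misfires on earthquake-
# safety comments. Trigger list is hypothetical.

TRIGGERS = {"congrats", "congratulations", "selamat"}

def shows_confetti(comment: str) -> bool:
    """True if any word in the comment matches a trigger word."""
    words = comment.lower().split()
    return any(w.strip(".,!?") in TRIGGERS for w in words)
```

A context-aware fix needs language detection or sense disambiguation; Facebook's actual remedy, per the statement above, was simply to turn the feature off locally.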

Fake balloons are not your friend!

Talk to me

Send me tips, comments, questions, @jack replies: casey@theverge.com

Why Twitter should ignore the phony outrage over “shadow banning”

A consequence of covering the intersection of social media and democracy is that sometimes you wind up having to discuss things that are very dumb. The somewhat infuriating controversy over Twitter’s “shadow banning” of prominent conservatives — something that it is in no way doing — is one of them. And yet how Twitter reacts to the attendant criticism could determine whether the company ever gets a handle on the abuse its platform is so well known for.

Yesterday I mentioned a misleading story in Vice whose headline then stated, falsely, that “Twitter is ‘shadow banning’ prominent Republicans like the RNC chair and Trump Jr.’s spokesman.” (It has since been changed.) That story drew from a Sunday piece by Gizmodo that described how some “controversial” accounts were being “demoted in search results.”

The first thing to note about this story is that it begins and ends with which accounts are suggested when one begins typing a name into the Twitter search box. That’s it. The very worst thing Twitter can be accused of here is, in some cases, making you spell out some people’s full names if you want to read their tweets.

Like I said: very dumb.

This story starts last year, when Twitter belatedly began to remove low-quality and harassing tweets from search results. Twitter has continued to refine search results based on many different signals, most of which it has not shared with us and never will. One effect of this was that the company sometimes would not automatically suggest certain accounts to you, based on the behavior of that account and the accounts that interacted with it.

The operating theory here is that you can tell a lot about a Twitter account from its friends, and if it hangs out with filth, it might not deserve a place in your personalized autocomplete results. Twitter, of course, has been under enormous pressure to make changes like these, following a decade in which it struggled to get abuse under control.

Wired’s Issie Lapowsky talked to Twitter about what might have happened to demote a handful of conservatives in search results:

Twitter has been far from transparent in defining that bad behavior, but a few examples it’s given publicly include accounts that haven’t confirmed their email addresses or that signed up several accounts at once. Twitter doesn’t ban these accounts or the tweets they post. It instead reduces their visibility in users’ replies and also in search. The company’s algorithms also analyze who those accounts are connected to and whether accounts in those networks are also exhibiting troll-like behavior. But Twitter insists the algorithms have no way of knowing whether the people behind those tweets are Republicans or Democrats.

“If you send a tweet and 45 accounts we think are really trolly are all replying a hundred times, and you’re retweeting a hundred of them, we’re not looking at that and saying, ‘This is a political viewpoint.’ We’re looking at the behavior surrounding the tweet,” the spokesperson said.

Meanwhile, President Donald Trump was rage-tweeting about the Vice story. “Twitter ‘SHADOW BANNING’ prominent Republicans,” he said. “Not good. We will look into this discriminatory and illegal practice at once! Many complaints.” (For the record, it is not illegal to make someone spell out a person’s complete name to find their account.)

One negative consequence of today’s news cycle is to strip the word “shadowban” of its meaning — another once-useful piece of platform jargon, like “fake news,” that now barely coheres as an idea. As Brian Feldman recounts in New York, a shadowban — which allows a user to continue posting on a forum, without informing them that their posts are now visible only to them — “is useful in that rather than someone immediately being locked out and possibly retaliating by, for example, making a new account, the user fades away gradually due to the lack of interaction from other users.”

Thanks to the events of this week, though, “shadowban” seems fated to mean “getting less distribution than I personally think it should.” That was the idea that motivated this year’s disastrous Diamond and Silk hearing in Congress, and Twitter’s search hiccup could lead to yet another one.

But let’s be clear: the argument that Twitter has systematically disadvantaged conservative voices can only be made in bad faith. Just as the argument that Facebook systematically disadvantages conservative voices can only be made in bad faith.

Just because it was made in bad faith, however, doesn’t mean it can’t be effective. That was one of the lessons of Facebook’s last two years, in which a Gizmodo story that argued Facebook was “suppressing conservative news” led to the company eliminating human editors from its platform. That helped pave the way for the spread of misinformation on the platform that continues today.

And that’s why a very dumb story about Twitter search results is more consequential than it might first appear. Platforms that find themselves at odds with the president of the United States will be highly prone to overreaction — in ways that make the platform worse. The work Twitter has undertaken to reduce abuse has been welcome. And if it means typing a few more characters into a search box, I say so be it.

“We do not shadow ban,” Twitter said in a statement late Thursday night. “You are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile). And we certainly don’t shadow ban based on political viewpoints or ideology.”

Update, 11:40 a.m.: This article has been updated to include Twitter’s statement.

Democracy

Twitter Isn’t Shadow Banning Republicans. Here’s Why.

Feldman’s whole explainer about the history of shadowbanning here is useful, and I suggest reading the whole thing.

News Use Across Social Media Platforms 2017

Here are some new stats from Pew on how Americans get their news from social platforms:

Since 2013, at least half of Twitter users have reported getting news on the site, but in 2017, with a president who frequently makes announcements on the platform, that share has increased to about three-quarters (74%), up 15 percentage points from last year. On YouTube, about a third of users now get news there (32%), up from 21% in 2016. And news use among Snapchat’s user base increased 12 percentage points to 29% in August 2017, up from 17% in early 2016.

Elsewhere

Facebook’s stock market decline is the largest one-day drop in US history

Facebook’s collapse in after-hours trading yesterday continued on Thursday, to record effect:

After a surprisingly weak growth forecast in this week’s earnings report, Facebook’s stock price dropped 19 percent today. The decline, which erased about $120 billion in market value, is the largest one-day drop in the history of the American stock market.

The Conference Call That Shook Investor Faith in Facebook

Here’s a fun Deepa Seetharaman piece that goes minute by minute through the conference call that resulted in Facebook’s record stock plunge. Here’s something that hasn’t gotten enough attention yet. (It will, though.)

He partly blamed “currency headwinds” and new privacy options for users but also revealed that new ad formats such as those within Instagram Stories weren’t pulling in the same amount of money as ads shown in the Facebook and Instagram feeds.

This revelation about Stories startled many analysts and investors. Facebook executives have said users were embracing the Stories feature, which allows users to post photo and video montages that disappear after 24 hours, and that activity would eclipse time spent just scrolling through feeds next year. Ads shown in feeds are where Facebook generates the bulk of its revenue. Now, Facebook executives were saying that people were spending more time using a less-lucrative product.

Facebook Just Learned the True Cost of Fixing Its Problems

Here’s a good nugget from Fred Vogelstein on Facebook’s earnings:

How much less profitable will Facebook be? During the last quarter of 2017, the ratio of operating earnings to revenue—an important measure of profitability—was 57 percent. It was 44 percent in the second quarter of 2018. And it’s expected to fall into the mid-30 percent range by the end of this year, said David Wehner, Facebook’s chief financial officer.

Facebook has acquired Redkix to build better messaging features into its Slack competitor

Facebook is looking to beef up Workplace with some run-of-the-mill collaboration tools. Redkix is “an email startup that combines email, messaging and calendar features into one app,” Kurt Wagner reports.

Slack buys Hipchat with plans to shut it down and migrate users to its chat service

Slack also bought a Slack competitor: Hipchat, which predated it, and which Slack utterly routed, to the point that Atlassian paid its rival (in the form of an investment) to be rid of it.

Teens Debate Big Issues on Instagram Flop Accounts

“Flop accounts” on Instagram are where teens are discussing real-world events, because they no longer trust the mainstream media, reports Taylor Lorenz:

The accounts post photos, videos, or screenshots of articles, memes, things, or people considered a “flop,” or, essentially, a fail. A flop could be a famous YouTuber saying something racist, someone being rude or awful in person, a homophobic comment, or anything that the teen who posted it deems wrong or unacceptable. Sometimes the teens who run a given account know each other in real life; more likely, they met online.

“Flop accounts bring attention to bad things or bad people that people should be aware of. We also post cringeworthy content for entertainment purposes,” said Alma, a 13-year-old admin on the flop account @nonstopflops.

When a Stranger Decides to Destroy Your Life

Here’s a memorably disturbing tale from Kashmir Hill about a woman who was defamed by a disturbed meth addict after making a comment online about a teen’s right to take a selfie at Auschwitz (?????).

Launches

Voice Messaging on LinkedIn: Giving You More Ways to Have Conversations

I’m sorry, but this is the funniest thing I have ever read. Never change, LinkedIn. Unless you’re adding voicemail in 2018, in which case do exactly that.

Have you ever typed out a long message and thought about how much faster and easier it would be to say it out loud? To give you more ways to have conversations, we’ve now added the ability to record and send voice messages up to one minute in LinkedIn Messaging.

Snapchat “Storytellers” finally pairs creators with advertisers

Snap has belatedly introduced a way for influencers to make money on Snapchat.

IGTV carousel funnels Instagram feed traffic to buried videos

Are you completely ignoring IGTV? No worries, it will now be inserted into your Instagram feed as an enticement to click.

Takes

Facebook Had an Impressively Bad Day

Matt Levine is wonderfully droll on Facebook’s record stock decline:

There is a popular, slightly tongue-in-cheek notion in the financial industry that losing a billion dollars of client money is a badge of honor, something to be proud of, a good thing to have on your resume. It shows you can bounce back from adversity, etc., but more importantly it shows that clients trusted you with a billion dollars and you had the confidence to take risks with it. Losing $150 billion of shareholder money shows, at a minimum, that Facebook had shareholders who believed in it to the tune of almost $630 billion (as of yesterday’s close); you can’t lose $150 billion of market cap without first having $150 billion of market cap. I submit that losing $150 billion of market cap in a day is a more impressive financial accomplishment than anything that almost any other company has done in the history of stock markets.

And finally …

How the aerial tramway was saved from being Twitter’s least popular emoji

The hottest trend among teens is spamming Twitter with the least-used emoji so that it is … no longer the least-used emoji. My colleague Shoshana Wodinsky reports:

During the latter bit of those eleven weeks, the tramway became a bit of a hero among a number of public transportation advocates. The campaign to save the tramway emoji from its ignominious fate really took off when the 100,000 members of New Urbanist Memes for Transit-Oriented Teens — a Facebook group dedicated to transportation memes — caught wind of the situation. Dozens of NUMTOTs, as the members call themselves, spammed Twitter with strings of the lone gondola to try to bring it up from its spot in last place.

God bless NUMTOTs and thank you for all that you do.

Talk to me

Questions? Comments? Shadowbans? casey@theverge.com