Last night, amid growing pressure to address Alex Jones’ presence on Twitter, CEO Jack Dorsey tried to explain the company’s position. In a series of five tweets, he made the following case:
- Jones hasn’t violated Twitter’s rules.
- Twitter won’t ban someone just because other platforms did.
- Journalists should “document, validate, and refute” the “unsubstantiated rumors” that “accounts like Jones’” spread.
He then linked to a somewhat confounding new blog post, “The Twitter Rules: A Living Document,” that does little more than say that the Twitter rules are a living document. It was confounding in that no one had accused the Twitter rules of being a dead document, only a weak and erratically enforced one. In any case, the post seemed to bear little relation to Dorsey’s tweets, even though they were published simultaneously.
Publicly, the tweet storm generated more than 20,000 replies in its first 24 hours. It seemed like a lot for Twitter to say on a subject where it had taken no action, especially given that inaction is the company’s default operating mode on policy issues. It drew a flood of negative commentary from journalists, who took exception to the idea that they should serve in an unpaid role as Twitter’s unofficial moderators, and from current and former employees, who were put off by Dorsey’s muddy reasoning.
Emily Horne, until recently Twitter’s head of policy communications, called out Dorsey’s apparent undermining of the company’s messaging. “Truth is we’ve been terrible at explaining our decisions in the past. We’re fixing that,” he tweeted. To which Horne responded: “Please don’t blame the current state of play on communications. These decisions aren’t easy, but they aren’t comms calls and it’s unhelpful to denigrate your colleagues whose credibility will help explain them.”
Dorsey replied that he wasn’t blaming the team, although he has before. In November, Twitter was dealing with a series of tweets containing graphic anti-Muslim videos. The tweets were posted by Britain First, a far-right fringe group, and were later retweeted by President Donald Trump. Twitter decided to let the tweets stand because, while they may have violated its rules, they were newsworthy.
A day later, the company reversed course, saying the tweets were allowed because — like Jones — they didn’t violate its rules after all. “We mistakenly pointed to the wrong reason we didn’t take action on the videos from earlier this week,” Dorsey tweeted.
In both cases, Dorsey framed Twitter’s struggles to address hate speech on its platform not as issues of policy but of communication. And yet, as one former Twitter employee noted to me yesterday, Dorsey’s tweetstorm itself communicated no real information about Jones or company policy. That Jones hadn’t been found in violation of the rules was already apparent from the fact that his account remained active.
At the same time, the thread introduced at least three new problems. One, by having its CEO discuss individual accounts publicly, Twitter encouraged the idea that account banning is subject to a single person’s whims. Two, it publicly undermined Twitter’s communications team for the second time in a year. Three, it inexplicably passed the buck for enforcing its policies to journalists.
None of that came up Wednesday afternoon, when Dorsey — turning his attention to the conservative fantasy that Republicans are being “shadow banned” from the service — went on Sean Hannity’s radio show. As Dorsey calmly explained timeline ranking, Hannity praised Dorsey lavishly for his inaction over Jones. No one learned anything.
For that we had to wait until Wednesday afternoon, when Charlie Warzel posted an internal memo from Twitter safety chief Del Harvey. (Harvey later tweeted it herself.) Harvey was also the author of the “living document” blog post, but the memo, despite being of basically equal length, contained vastly more information.
The reason Jones hasn’t been banned, she said, is that while his past actions violate Twitter’s current rules, they didn’t violate its rules at the time. And so he can stay, assuming he doesn’t violate the living document. Harvey went on to say that further changes to the rules could affect Jones, including a new policy about speech that dehumanizes others.
Regarding what she called “the dehumanization policy,” Harvey said Twitter would review it this week; and when it came to bad behavior away from Twitter, the company has “a goal of having a recommendation for a path forward for staff review by mid-September” — an almost comically non-committal commitment. The downside of Twitter’s commitment to transparency is that sometimes, you can see right through it.
Meanwhile, Apple said it hadn’t banned the Infowars app because it hadn’t yet caught any bad behavior on its live streams. It was the No. 1 trending app in the Google Play Store.
Max Fisher examines the Jones case in relation to Amith Weerasinghe, a Sri Lankan extremist who used Facebook to promote anti-Muslim views; and Ashin Wirathu, whose hate speech inspired riots in Myanmar in 2014. “Developing countries’ experiences with Facebook suggest that the company, however noble its intent, has set in motion a series of problems we are only beginning to understand and that the company has proved unable or unwilling to fully address,” he writes. Fisher continues:
The platform has grown so powerful, so quickly, that we are still struggling to understand its influence. Social scientists regularly discover new ways that Facebook alters the societies where it operates: a link to hate crimes, a rise in extremism, a distortion of social norms.
Ryan Gallagher, who has done an outstanding job unearthing information about Google’s secret China plans, finds that the company is using a website it owns called 265.com to gather information about which terms it would have to censor if it were granted a license to get back into China:
It appears that Google has used 265.com as a de facto honeypot for market research, storing information about Chinese users’ searches before sending them along to Baidu. Google’s use of 265.com offers an insight into the mechanics behind its planned Chinese censored search platform, code-named Dragonfly, which the company has been preparing since spring 2017.
After gathering sample queries from 265.com, Google engineers used them to review lists of websites that people would see in response to their searches. The Dragonfly developers used a tool they called “BeaconTower” to check whether the websites were blocked by the Great Firewall. They compiled a list of thousands of websites that were banned, and then integrated this information into a censored version of Google’s search engine so that it would automatically manipulate Google results, purging links to websites prohibited in China from the first page shown to users.
Last month I wrote about YouTube’s “information cues” — the cards that appear underneath some conspiracy content to direct users to more accurate information from Wikipedia and other sources. Zahra Hirji reports that climate change videos are the latest category of YouTube videos to get the cues. Conspiracists seem to be taking it well:
“Despite claiming to be a public forum and a platform open to all, YouTube is clearly a left-wing organization,” Craig Strazzeri, PragerU’s chief marketing officer, said by email. “This is just another mistake in a long line of giant missteps that erodes America’s trust in Big Tech, much like what has already happened with the mainstream news media.”
A magistrate has ruled that gaming chat app Discord must disclose the identities of the people who used the service to organize last year’s deadly Unite the Right rally in Charlottesville. Meagan Flynn reports:
In his 28-page ruling, Spero appeared to acknowledge why Discord might have been so appealing to members of the so-called alt-right in the first place: “It is clear that many members of the ‘alt-right’ feel free to speak online in part because of their ability to hide behind an anonymous username,” he wrote.
Discord’s popularity among members of the alt-right ahead of the Charlottesville rally surfaced largely after hundreds of their messages were leaked to a media collective known as “Unicorn Riot.”
The campaign of public pressure against Alex Jones has scored another win:
Amazon wasn’t just selling the products – it was sort of recommending them. A Politico story from this morning noted that many Jones products had the “Amazon’s Choice” logo on them, which is an internal stamp of approval for certain items on the platform. Now, it seems Amazon has taken the “Choice” designation away from Jones’s products.
The much-discussed QAnon conspiracy is the product of relatively few Redditors, according to this analysis by Alvin Chang:
About 200 users account for a quarter of the forum’s comments. These people are clearly conspiracy theorists who believe they are investigators unearthing the truth, and they spend almost all their time on Reddit investigating these theories.
Another 700 users account for the next quarter of comments. The user we followed at the top of this story is among these people. They are active in /r/greatawakening but also spend time on other subreddits. Nearly everyone else in the subreddit — the 11,000 commenters and 42,000 lurkers — are just along for the ride.
Ryan Mac gets his hands on a growth memo from TBH, a polling app that was briefly popular last year. The tactics it describes were so successful that Facebook shut TBH down less than a year after acquiring it and moved its employees to work on other things.
Women broadcasters on Twitch face a hurdle their male counterparts don’t, my colleague Patricia Hernandez reports:
Many women on Twitch say that being on the platform means navigating complicated expectations from viewers, especially when it comes to relationships. Some viewers expect romantic availability from women streamers, or demand to know their relationship status before investing in them. Other times, viewers can cross a line and become possessive or entitled toward the women they watch on Twitch. Streamers, in turn, have to make tough decisions about how they present to their audience, and how much they decide to share when both concealing and disclosing their romantic relationship can come with a cost.
The $2,295 Magic Leap One Creator Edition has arrived, and it looks like a dud.
My colleague Russell Brandom writes about a new tool that takes a picture and crawls social-network accounts to find matches using facial recognition. It’s designed for good but could be used for ill, he reports:
The end result is a spreadsheet of confirmed accounts for each name, perfect for targeted phishing campaigns or general intelligence gathering. Trustwave’s emphasis is on ethical hacking — using phishing techniques to highlight vulnerabilities that can then be fixed — but there are few restrictions on who can use the program. Social Mapper is licensed as free software, and it’s freely available on GitHub.
The Institute for the Future, a Palo Alto-based think tank, and the Tech and Society Solutions Lab, a Pierre Omidyar initiative, teamed up to create an ethics guidebook for tech companies, Arielle Pardes reports. Among other things, the guide encourages companies to examine their risk for disinformation and propaganda.
Ezra Klein says that Twitter takes in-group conversations, makes them public, draws the outrage of people who lack all relevant context, and makes society worse as a result:
If you’re a conservative, the liberal tweets that get shot into your sightline aren’t the most thoughtful or representative missives; they’re the ones designed to make you think liberals hate you, are idiots, or both. The same is true if you’re a liberal: you see the worst of the right, not the best. And after you’ve seen enough of these kinds of comments from the other side, you begin to think that’s who they are, that you’re getting a true picture of what your opponents are really like, and what they really think of you — but it’s not a true picture, it’s a distortion built to deepen your attachment to your friends, your resentment of your opponents, and your engagement on the platform. And it’s one that plays on our tendencies to read the other side with much less generosity than we read our own side.
The very first tweet was sent in 2006. This is a young medium, and over time, we’ll (hopefully) figure it out — how to interpret it, how to couch it, how to delete old tweets automatically. But for now, the lesson is clear: #NeverTweet.
Shira Ovide wryly notes that Snap is reversing course on many of the things that made its business distinctive. (This is probably a good thing, at least from an investor standpoint.)
Now, though, Snapchat is borrowing liberally from the internet conventions it has scorned. Snapchat is — irony alert — copying Facebook by refashioning its advertising business for companies that want quick payoffs from their ads. It’s tracking people to prove those messages worked. And Snapchat loosened demands for tailor-made video programs, which makes it more like the rest of the web.
And finally …
Let’s check in real quick with the state of our artificial intelligence:
Selamat in Indonesian can mean “safe” or “unhurt” as well as “congratulations,” depending on the context. Because of this, Facebook’s algorithm misinterpreted comments expressing concern for the safety of people in Indonesia as messages of congratulations, triggering a festive animation of balloons and confetti whenever someone commented using the word.
“This feature (a text animation triggered by typing ‘congrats’) is widely available on Facebook globally, however we regret that it appeared in this unfortunate context and have since turned off the feature locally,” Lisa Stratton, a Facebook spokesperson, told Motherboard in an email. “Our hearts go out to the people affected by the earthquake.”
Fake balloons are not your friend!
Talk to me
Send me tips, comments, questions, @jack replies: email@example.com