Facebook took down a post by the Anne Frank Center for showing nude Holocaust victims

On Monday, The Anne Frank Center for Mutual Respect found itself on the wrong side of Facebook’s moderation system. The group had attempted to share a new study on Holocaust awareness in America — but while the post went through fine on Twitter, it was abruptly blocked by Facebook. The Center wrote to Facebook demanding an explanation, but received nothing in response.

After two days of silence from Facebook, the Center publicly called out the company for the misstep. “You removed our post promoting the need for Holocaust Education for apparently violating community standards,” the Center wrote on Twitter Wednesday morning. “You haven’t given us a reason.”

The image attached to the post depicts a group of nude, emaciated children, which seems to have triggered Facebook’s nudity policy, although the age and poor quality of the image obscure much of the scene.

Facebook restored the post a few hours after the Frank Center’s public statement, and has apologized for the error. “As our Community Standards explain, we don’t allow people to post nude images of children on Facebook,” a company representative explained to The Verge. “We recognize that the image shared by the Anne Frank Center is historically significant and important, and was restored on this basis.”

Reached for comment, the Anne Frank Center’s Alexandra Devitt drew a troubling contrast with Facebook’s lax approach to pages that deny the Holocaust. “While Facebook removes the AFC’s post promoting the need to educate on the past, it continues to allow pages and posts that directly deny the reality of the deaths of more than six million people,” Devitt told The Verge. “If Facebook is serious about its community standards it should start tackling Holocaust denial and not the organizations who are trying to educate people on discrimination, facts, and history.”

A small organization with few staffers and an ambiguous connection to the Frank family, the Center has gained prominence in recent years for its willingness to take on the Trump administration and other right-wing groups. That activism has included direct confrontations with Facebook, with one campaign gaining more than 180,000 signatures for a Change.org petition demanding the company take down pages promoting Holocaust denial.

This isn’t the first time Facebook has struggled with historically important images involving nudity. In 2016, the platform blocked an iconic news photo from the Vietnam War, showing a naked nine-year-old girl fleeing a napalm strike. Facebook restored the photo after a public outcry, citing its historical importance.

Alleged Facebook scammer arrested in Ecuador after three years on the run

Sometime in April 2003, Paul Ceglia and Mark Zuckerberg did business together and signed a contract. According to Zuckerberg (and eventually, federal prosecutors), it was a simple work-for-hire programming job — but in 2010, Ceglia went to court arguing the contract entitled him to half of Facebook, already worth billions.

With the Winklevoss settlement still fresh, it may have seemed like a quick path to a payoff, but Facebook refused to play ball. The case stalled, and soon Ceglia was charged with fraud for falsifying documents and placed under house arrest. Dropped by eight different lawyers, Ceglia faced legal defeat after legal defeat, eventually scrambling simply to stay out of prison.

Then, in a move almost no one expected, Ceglia vanished. In March 2015, he slipped off his ankle bracelet and disappeared, together with his wife, their two children, and their dog. In an email to Bloomberg months later, he said he had escaped because he feared for his life. “I felt I had no one in government I could trust,” he wrote. “An opportunity presented itself, so I MacGyver’d some things together and started running for my life.”

Now, more than three years later, Ceglia may finally be returning to the United States. Reuters is reporting that the alleged scammer has been apprehended in Ecuador, and is currently awaiting extradition to stand trial on the fraud charges. His lawyer told the newswire that he was relieved to learn Ceglia was safe, and that there was still a “strong case” for his client’s innocence.

Why Facebook needs a Supreme Court for content moderation

What belongs on Facebook? It’s a central question in our current reckoning over social media, and given the vastness of the company’s platform, it can be exceedingly difficult to answer. “Fake news is not your friend,” the company says — but you can still post as much as you want. Alex Jones’ conspiracy theories, which inspired years of harassment against the parents of Sandy Hook shooting victims, were fine until they suddenly weren’t. Everyone seems to agree that terrorism does not belong on Facebook, though there’s still more of it there than you might expect.

But imagine you could start from scratch. What would you rule in, and what would you rule out? That’s the frame of this new episode of Radiolab, which chronicles the evolution of Facebook’s content policy from a single sheet of paper into 27 pages of comically specific rules about nudity, sex, violence, and more.

The full hour-long podcast is well worth your time. It examines three moderation debates, of escalating seriousness. The first is about when it is appropriate to show breastfeeding — an area in which Facebook has gradually become more liberal.

The second is about when you can criticize what the law calls a protected class of people — a gender, or a religion, for example. This is an area where Facebook has generally gotten more conservative. At one time, criticism of “white men” was prohibited — both words there are protected categories — while criticism of “black children” was not. The reasoning was that “children” is a non-protected class, and you can say anything about a non-protected class, as Facebook has no way of knowing whether their race has anything to do with your antipathy.

“If the rule is that any time a protected class is mentioned it could be hate speech, what you are doing at that point is opening up just about every comment that’s ever made about anyone on Facebook to potentially be hate speech,” producer Simon Adler says on the show.

This policy has since been changed, and black children are now protected from the worst forms of hate speech. “We know that no matter where we draw this line, there are going to be some outcomes that people don’t like,” Monika Bickert, Facebook’s head of product policy and counterterrorism, told Adler. “There are always going to be casualties. That’s why we continue to change the policies.”
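To make the old rule concrete, here is a minimal sketch of the logic as the episode describes it. The function and category names are hypothetical, not Facebook’s actual code; the point is only that every term describing a group had to name a protected class for the group to be protected.

```python
# Hypothetical sketch of the old rule described above (not Facebook's code):
# a group was protected only if EVERY term describing it named a protected
# class. "White men" qualified on both terms; "black children" did not,
# because "children" describes age, which is not a protected class.

PROTECTED_CLASSES = {"white", "black", "men", "women", "muslims", "christians"}

def group_is_protected(terms: list[str]) -> bool:
    """Old rule: all terms must name protected classes."""
    return all(term.lower() in PROTECTED_CLASSES for term in terms)

print(group_is_protected(["white", "men"]))       # True: attacks prohibited
print(group_is_protected(["black", "children"]))  # False: attacks allowed under the old rule
```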

The third debate is the one I found most compelling. It’s a tale of two content moderation decisions, made six months apart in 2013. The first came after the Boston Marathon bombing, when images of bombing victims were posted on Facebook. At the time, the company’s policy on carnage was “no insides on the outside” — which photos from the bombing clearly violated. Adler’s anonymous former moderators told him that after some debate, an unknown Facebook executive said the pictures should remain, because they were newsworthy.

Six months later, Facebook faced a similar dilemma in Mexico, where the government and the cartels were locked in a bloody conflict. Users began posting a video of a woman being beheaded — a particularly newsworthy video, given that the government had been publicly denying reports of cartel violence. But in this case, another unnamed executive called for the video to come down. The decision led to departures on the moderation team, a former moderator says:

I think it was a mistake. Because I felt like, why do we have these rules in place in the first place? And it’s not the only reason, but it’s decisions like that that are the thing that precipitated me leaving.

Five years later, the company has tasked itself with making decisions like these at a global scale. It vastly expanded — and this year made public — the community guidelines by which it makes these decisions. And it committed to hiring 20,000 new employees to work on safety and security. Adler puts it this way:

Essentially what Facebook is trying to do is take the First Amendment, this high-minded principle of American law, and turn it into an engineering manual that can be executed every four seconds, for any piece of content happening anywhere on the globe.

He then cuts to a former moderator in the Philippines. Her colleagues would frequently approve content without really studying it, she says, in protest of the relatively low rate of pay — about $2.50 an hour when she worked there. She also largely relied on her gut, erring on the side of removing even innocent nudity. “If it’s going to disturb the young audience, then it should not be there,” she says.

What to make of all this? Radiolab ends on an uncharacteristically bleak note: “I think they will inevitably fail, but they have to try, and I think we should all be rooting for them,” Adler says.

But this sentiment assumes Facebook’s system of content moderation will never evolve beyond its policy handbook. In fact, the company has already given us at least two ideas for how it might change.

One, Facebook could expand the avenues that users have to appeal moderation decisions. It started to do this in April, as I reported at the time:

Now users will be able to request that the company review takedowns of content they posted personally. If your post is taken down, you’ll be notified on Facebook with an option to “request review.” Facebook will review your request within 24 hours, it says, and if it decides it has made a mistake, it will restore the post and notify you. By the end of this year, if you have reported a post but been told it does not violate the community standards, you’ll be able to request a review for that as well.
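Sketched as code, that flow is a simple state machine: takedown, optional appeal, human re-review within 24 hours, and restoration if the call was wrong. The names below are hypothetical, assumed for illustration, and not Facebook’s implementation.

```python
# A minimal sketch of the appeal flow described in the excerpt above.
# All names here are hypothetical; this is not Facebook's implementation.

from dataclasses import dataclass

@dataclass
class Post:
    id: int
    removed: bool = False
    review_requested: bool = False

def notify(post: Post, message: str) -> None:
    print(f"[post {post.id}] {message}")

def take_down(post: Post) -> None:
    post.removed = True
    notify(post, "Your post was removed. You can request a review.")

def request_review(post: Post) -> None:
    # Queued for human review, promised within 24 hours.
    post.review_requested = True

def complete_review(post: Post, was_mistake: bool) -> None:
    if was_mistake:
        post.removed = False
        notify(post, "We made a mistake. Your post has been restored.")
    else:
        notify(post, "Reviewed: the post violates our Community Standards.")

p = Post(id=1)
take_down(p)
request_review(p)
complete_review(p, was_mistake=True)
```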

That same month, Mark Zuckerberg told Ezra Klein that he could imagine Facebook one day having an independent Supreme Court to make moderation decisions:

Over the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

What Facebook is describing with these ideas is something like a system of justice — and there are very few things it is working on that I find more fascinating. For all the reasons laid out by Radiolab, a perfect content moderation regime likely is too much to hope for. But Facebook could build and support institutions that help it balance competing notions of free speech and a safe community. Ultimately, the question of what belongs on Facebook can’t be decided solely by the people who work there.

Democracy

Google’s Brin Cops to Plan to Reclaim Lost Decade in China

On Friday I wrote about how Google’s developing plans to re-enter China could trigger a crisis at the company. Later in the day, based on interviews with employees, Ellen Huet shed valuable new light on the entire story. Among the key insights here: a lot of Googlers wish they had never abandoned China in the first place; CEO Sundar Pichai believes Google entering China could have an unspecified “positive impact” on the country; and cofounder Sergey Brin — who led the charge to leave the country initially — is basically neutral now. This is also worth contemplating:

Now, the business case for engaging with China has grown, while the issue of censorship online has become more nuanced, according to the person. Germany has strong anti-hate speech rules, Thailand limits what can be said about its royal family online, and Europe has a right-to-be-forgotten law that lets people ask Google to remove old information about them from search results. To free-speech purists, these are also undesirable forms of online censorship, the person noted.

Facebook’s encryption fight will be harder than San Bernardino

Late on Friday, Reuters reported that the FBI is trying to compel Facebook to let it listen to voice conversations that took place on its Messenger app, as part of a criminal probe. My colleague Russell Brandom says Facebook will likely have a harder time fighting this than Apple did in 2016, when it successfully resisted similar pressure after the San Bernardino shooting:

There are crucial differences in this new case, and most of them are unfavorable to Facebook. While San Bernardino used a novel legal argument against a hardened device, Facebook’s case uses a well-tested legal procedure against a protocol that wasn’t built with this attack in mind. Not all encryption is the same, and every indication is that Facebook’s Messenger encryption simply wasn’t designed to maintain privacy in the face of a court-compelled wiretap. As a result, Facebook is facing a much tougher legal fight with a much less predictable result.

‘We won’t let that happen’: Trump alleges social media censorship of conservatives

The president once again tweeted, incorrectly, that social media companies are stifling conservative voices. (Reminder that Fox News got more engagement on Facebook in July than any other publisher.)

Facebook opens up to researchers — but not about 2016 election

David Ingram reports that academic access to data for research purposes will be restricted to content posted after January 1st, 2017 — after the 2016 election period that many researchers hoped to study.

Facebook Suspended a Latin American News Network and Gave Three Different Reasons Why

Facebook shut down the English-language page of Telesur, an organ of Venezuelan state media, and it’s not clear why, Sam Biddle reports.

In an emailed statement to The Intercept, a company spokesperson said, “The Page was temporarily unpublished to protect it after we detected suspicious activity.” The term “suspicious activity” does not appear in Facebook’s terms of service. The spokesperson would not explain what “suspicious activity” was observed on Telesur’s page, or define the term, or explain why it was initially blamed on rule-breaking by Telesur and then technical problems on the social network’s end.

How the NY Times Omitted a Woman from a Silicon Valley Story

This month The New York Times Magazine published a cover story on a trio of activists who led an unlikely fight to pass data privacy legislation in California. (I wrote about it at the time.) Kashmir Hill reports that another activist played a key role in the legislation’s passage: Mary Stone Ross, formerly of the CIA and the House Intelligence Committee. But she had a falling out with the group’s leadership:

Personality conflicts inevitably happen in almost any workplace, including those of feel-good activists. Ross’s erasure from the lore of the law’s passage isn’t necessarily nefarious or a deliberate attempt to avoid giving a woman credit for her accomplishments. Those involved may have genuinely felt like she didn’t need to be mentioned because she didn’t support the compromise they’d made and wasn’t going to be part of the group moving forward.

“Mary Ross was an important part of the campaign team when we were all working full steam ahead to pass a ballot measure,” Robin Swanson, the campaign consultant for the group, told me by email. “Roles shifted when she made it clear she did not support a legislative compromise because she felt it wouldn’t go far enough.”

EU considers fines for tech companies that don’t remove terrorist content within an hour

The European Union is considering tough new laws that would force tech companies like Facebook and YouTube to delete terrorist propaganda from their platforms within 60 minutes or face fines. Note that tech companies are already reeling from a similar German law — and that one gives them 24 full hours.

QAnon and Pinterest Is Just the Beginning

Mike Caulfield looks at how Pinterest’s recommendation engine has been a boon to QAnon and other conspiracy theorists:

The UI-driven decontextualization that drove Facebook’s news crisis is actually worse here. Looking at a board, I have no idea why I am seeing these various bits of information at all, or any indication where they come from.

Facebook minimized provenance in the UI to disastrous results. Pinterest has completely stripped it. What could go wrong?

Twitter has a problem with ‘toxic’ content. CEO Jack Dorsey says he’s trying ‘everything’ to fix it

Jack Dorsey’s Look Busy 2018 tour stopped by CNN’s Reliable Sources over the weekend. In it, he acknowledged a “left-leaning” bias among Twitter employees, said that proactively moderating Twitter would be too expensive, and promised that the company is rethinking how it displays likes and retweets. If you work in communications at Twitter and want to walk me through the company strategy here, I am all ears.

Elsewhere

How TripAdvisor changed travel

Linda Kinstler has a long piece on the history of TripAdvisor, and how it, too, had no plan to deal with success. The site is beset by fake reviews and attacks from businesses who want bad reviews taken down. Worth reading through the prism of other platforms’ similar struggles:

On 1 November 2017, an investigation by Raquel Rutledge, a journalist at the Milwaukee Journal Sentinel, found that TripAdvisor had a habit of deleting posts detailing sexual assaults and other violent crimes on the grounds that they either violated the family-friendly policy, contained second-hand information or hearsay, or were deemed “off topic” by site moderators. “There’s no way to know how many negative reviews are withheld by TripAdvisor; how many true, terrifying experiences never get told; or for site users to know that much of what they see has been specifically selected and crafted to encourage them to spend,” Rutledge wrote.

On 7 November, TripAdvisor’s market value crashed by $1bn when its stock price dropped from $39 to $30 per share, its worst-ever day on the stock market. A couple of weeks later, the US Federal Trade Commission opened an ongoing investigation into the company’s business practices. “For a long time, [companies] could claim that their role was largely proactive, that all they had to do was put safeguards in place to reduce the risks of bad things happening,” says Botsman. “We’ve seen a massive pendulum swing – it’s now their responsibility when things go wrong. This is a whole new era of corporate accountability.”

Model Tinder-Scams Men for Date Competition in Union Square

One of my absolute favorite genres of content is “dating is a nightmare,” and Madison Malone Kircher has an absolute classic for us here:

The summer of scam has a new hero, and her name is Natasha Aponte. What did Ms. Aponte do to warrant this title? She used Tinder to con dozens of men into believing they were meeting her for a one-on-one date in Union Square. When the men arrived, they discovered that instead of a date … they’d be competing against each other to win it.

Adidas is partnering with Twitter to stream high school football games

Twitter is losing users and bleeding money, which means it’s time to invest in (squints at notes) broadcasting high school football games:

“Nationally ranked teams” from California, Nevada, Indiana, Georgia, and Florida will be part of the series, which will start on September 7th and finish on November 9th. TechCrunch notes that NFL games have been popular on the site, and that this is the first time that high school games will be streamed in this fashion. The games will be available on @adidasFballUS on both mobile and desktop devices, and will be accompanied by a Twitter timeline with additional coverage and tweets.

How Facebook — yes, Facebook — might make MRIs faster

The NYU School of Medicine is giving Facebook an anonymized data set of 10,000 MRI exams in hopes that Facebook’s AI team can create a speedier version of the test, Matt McFarland notes. Please enjoy the (unintentional?) shade thrown here by the head of Facebook’s AI research group (emphasis mine):

Facebook started talking to NYU about the project last year because its AI team wanted to work on something with real-world benefits even as it performs basic research, said Larry Zitnick of the company’s Artificial Intelligence Research group. It plans to open-source any findings in the hope that sharing the data will encourage others to expand upon its work.

LinkedIn Will Allow Economics Researchers to Mine Its Data

LinkedIn is giving approved researchers access to anonymized data to help them study the economy, Jeremy Kahn reports:

The initiative, called the LinkedIn Economic Graph Program, is an expansion of an earlier collaboration with outside economics researchers that the company created in 2015. That effort resulted in several path-breaking findings, the company said.

For example, researchers from the World Economic Forum used LinkedIn’s data to explore the gender gap. Jessica Jeffers, an assistant professor of finance at the University of Chicago, used LinkedIn data to examine the impact of non-compete agreements, determining that they hurt new firms and entrepreneurship.

Jeffree Star, Laura Lee, Gabriel Zamora & YouTube’s racist tweet drama

A group of popular YouTubers got mad at each other and searched their Twitter histories for racist content — and found some!

Launches

Snapchat’s long-awaited redesign is smoother, can be enabled right now with root

A faster Snapchat for Android is now in alpha, writes Richard Gao.

The SurfSafe Browser Extension Will Save You From Fake Photos

Issie Lapowsky reports on SurfSafe, a browser extension created by two UC Berkeley undergrads that helps find the origin of images on the internet. It’s useful for figuring out if something that is being presented as new is actually from another time or context — or is simply a hoax. Browser extensions are usually DOA, but can be useful in inspiring actual browser features. So, let’s CC the Chrome, Safari, and Firefox teams here.

Islands app for college students adds Facebook-like user directory

Kia Kokalitcheva writes about the relaunch of Islands, a college-focused social network that mimics aspects of Facebook, Snapchat, and Slack:

In the new version of Islands, users will be able to join and create group chat rooms on their campus, have a profile page that includes their Snapchat and Instagram handles, see other students who are nearby (within about 1 mile of them), and view a directory of students in their school who have signed up for the app.

Currently, 5-25% of students on active campuses are using Islands, according to Isenberg, and each user invites two others. At the end of this past spring semester, Islands’ users were sending thousands of messages per day, and Isenberg predicts that when the app rolls out to every U.S. college, users will be sending 2 million messages every day.

Giphy is launching its own take on stories with curated GIFs throughout the day

I read this story by my colleague Dami Lee and just screamed “why?!” the whole time. Say hello to the opposite of time well spent:

Now Giphy is announcing that it’s refreshing its homepage to prominently feature Stories, which will be curated by an editorial team. Stories will be centered around the day’s trending subjects, told through GIFs. One story will be published every hour, curated by categories of Entertainment, Sports, and Reactions.

Despite the obvious connection to Instagram Stories in its name, Giphy Stories are a little more like a cross between a Twitter Moment and Snapchat’s Discover content. There are episode recaps like “The Bachelorette Finale in GIFs” which are reminiscent of Tumblr GIF sets, and reaction packs like “The best GIFs for your summer out of office email” which read like a Buzzfeed listicle in GIF form.

Takes

Jack Dorsey Breathes Life into the Right’s Favorite Twitter Conspiracy

Maya Kosoff says Twitter will never live down telling conservatives that it’s “left-leaning.” Dorsey’s words were not particularly well chosen, but (1) they are probably basically true, and (2) conservatives were going to say that whether or not Jack Dorsey ever did. That said, all of this is true:

Dorsey has spent much of the summer attempting to head off this type of criticism. In June, the Twitter C.E.O. dined at the upscale Georgetown restaurant Cafe Milano with a group that included White House communications adviser Mercedes Schlapp and Fox News commentator Guy Benson, in what quickly devolved into an airing of grievances. His most recent media tour began on Sean Hannity’s radio show, where he sought to reassure listeners that Twitter would not “shadow ban” them. Conservatives praised his transparency, and Hannity himself has since claimed to have a direct line of communication with Dorsey. But Dorsey should have known his time in the right-wing sun would be short-lived; the likes of Hannity and Jones have proven over and over again that they will never let up on the social-media giant, even when Twitter appears to skew explicitly in their favor. In admitting to “left-leaning” bias, and promising to stamp it out when enforcing rules, Dorsey effectively handed conservatives more ammunition, perpetuating the cycle that forces him to continually tiptoe around the right.

The Future of Privacy: Disinformation

Sam Lessin says disinformation is the (bleak!) future:

This is a strategy in general that we should all expect to see more and more in the world. It is, I would argue, the aggressively technologically correct strategy to run for the future. Don’t prevent leaks or try to lock down everything. Just build self-serving networks of people or bots to put out enough false information to obscure reality.

If you are a private person, don’t try to avoid having a social media profile. Instead try to have many fake ones, all sharing contradictory information about “you.”

And finally …

Fake Facebook adverts are making people double take all over London

An anonymous street artist is having a tremendous amount of fun with Facebook’s “fake news is not your friend” advertising campaign. The sentiments aren’t new, but the visual presentation is.

Talk to me

Send me tips, questions, comments, moderation policies: casey@theverge.com.

Facebook has started internal testing of its dating app

Two months after announcing the product at its F8 developer conference, Facebook is testing its dating product internally with employees. Independent app researcher Jane Manchun Wong, who regularly uncovers new Facebook features by scouring the source code, found evidence of the product Friday and posted it on Twitter. The company confirmed to The Verge that the product is in testing within the Facebook app but declined to comment further.

“This product is for US Facebook employees who have opted-in to dogfooding Facebook’s new dating product,” a screenshot reads, using slang for employees testing out their own software. “The purpose for this dogfooding is to test the end-to-end product experience for bugs and confusing UI. This is not meant for dating your coworkers.”

Facebook asked employees to use fake data for their dating profiles, and plans to delete all data before the public launch. “Dogfooding this product is completely voluntary and has no impact on your employment,” a screenshot reads, adding that the product is confidential. It also warns employees that its anti-harassment policies apply to the dating product.

Other screenshots show the sign-up flow for Facebook Dating, including options to specify your gender, your location, and which genders you’re interested in matching with. Wong was able to fill in her own information but was prevented from actually creating a dating profile.

The fact that Facebook Dating is being tested internally does not necessarily mean that it will launch to the public. Products are often killed before they are released based on what companies find during testing.

Facebook’s launch of a dating service would make it an immediate powerhouse in the market for online romance. The stock price of Match Group, which owns popular dating apps including OKCupid and Tinder, plunged 17 percent the day Facebook Dating was announced.

As described on stage, Facebook Dating will allow you to create a separate profile for dating. When you and another person using the service like one another’s profiles, you’ll be allowed to contact each other. The company also described a feature that would let you make your dating profile visible to people attending the same event as you, in hopes of generating more offline connections. “This is going to be for building real, long-term relationships — not just for hookups,” Mark Zuckerberg said in his announcement.
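That mutual-like gate is a familiar pattern from other dating apps. Here is a minimal sketch of the core check, assuming nothing about Facebook’s actual implementation: messaging unlocks only once a like exists in both directions.

```python
# Hypothetical sketch of mutual-match gating as described above;
# not Facebook's implementation.

likes: set[tuple[str, str]] = set()  # (liker, liked) pairs

def like(liker: str, liked: str) -> None:
    likes.add((liker, liked))

def can_contact(a: str, b: str) -> bool:
    # Contact is allowed only when each profile has liked the other.
    return (a, b) in likes and (b, a) in likes

like("alice", "bob")
print(can_contact("alice", "bob"))  # False: bob hasn't liked alice back
like("bob", "alice")
print(can_contact("alice", "bob"))  # True: mutual like unlocks messaging
```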

Update, 6:35 p.m.: This article has been updated to include Facebook’s comment and to clarify that the product is part of the flagship Facebook app.