Facebook bans military accounts in Myanmar as UN accuses leaders of coordinating genocide

Facebook has banned a number of high-profile accounts in Myanmar that it says helped “inflame ethnic and religious tensions” in the Southeast Asian country.

In a blog post, Facebook again admitted it had been “slow to act” on the situation in Myanmar, where the minority Muslim Rohingya population has been the target of a genocidal campaign fueled by propaganda spread on Mark Zuckerberg’s social network. In a report released by the United Nations today, investigators accused Myanmar’s military of orchestrating acts that “undoubtedly amount to the gravest crimes under international law,” including mass killings, gang rapes, and the destruction of entire villages.

Facebook cited the UN’s findings in its blog post, titled “Removing Myanmar Military Officials From Facebook.” The company described the ethnic violence in the country as “truly horrific” and said it wanted to “prevent the misuse of Facebook in Myanmar.” To this end, it has banned 18 Facebook accounts and 52 Facebook pages “followed by almost 12 million people.” These include the accounts of Myanmar’s commander in chief of the armed forces, Senior General Min Aung Hlaing, and the official Myawady military news network.

Experts have raised the alarm about Facebook’s role in fueling ethnic violence in Myanmar since at least 2014, noting how the site has been used to spread hoaxes, memes, and misinformation about the Rohingya population, as well as coordinate acts of mob violence.

Facebook’s response has been slow and uneven. Although the company has increased the number of local Burmese-speaking content moderators (from just two in early 2015 to 60 as of this year), it still has no official presence or staff in the country. It has blamed its inability to remove hate speech targeting the Rohingyas and other ethnic minorities partly on users failing to take advantage of its reporting tools, although, as The Guardian reports, those tools were only translated into Burmese sometime in the spring of this year.

Human rights activists say the situation in Myanmar is extremely challenging, and it can be genuinely difficult to differentiate between Facebook users simply sharing information and those trying to inflame racial hatred. However, on-the-ground coverage from the country has been clear that Facebook has not done enough. As one local researcher told The New York Times this April, “You report to Facebook, they do nothing.”

Facebook deletes more than 600 accounts linked to new influence campaigns led by Iran and Russia

Facebook removed more pages today as a result of four ongoing influence campaigns on the platform, taking down 652 fake accounts and pages that published political content. The campaigns, whose existence was first uncovered by the cybersecurity firm FireEye, have links to Russia and Iran, Facebook said in a blog post. The existence of the fake accounts was first reported by The New York Times.

“These were networks of accounts that were misleading people about who they were and what they were doing,” CEO Mark Zuckerberg said in a call with reporters. “We ban this kind of behavior because authenticity matters. People need to be able to trust the connections they make on Facebook.”

In July, FireEye tipped Facebook off to the existence of a network of pages known as Liberty Front Press. The network included 70 accounts, three Facebook groups, and 76 Instagram accounts, which had 155,000 Facebook followers and 48,000 Instagram followers. The network had undisclosed links to Iranian state media, Facebook said, and spent more than $6,000 between 2015 and today. The network also hosted three events.

Liberty Front Press was also linked to a set of pages that posed as news organizations while hacking people’s accounts and spreading malware, Facebook said. That network included 12 pages and 66 accounts, plus nine Instagram accounts. They had about 15,000 Facebook followers and 1,100 Instagram followers, and did not buy ads or host events.

A third aspect of the investigation centered on Iran-linked accounts and pages created in 2011 that shared posts about politics in the Middle East, United Kingdom, and United States. The network posted similar content across its network while obscuring connections between the pages, officials said. That campaign had 168 pages and 140 Facebook accounts, as well as 31 Instagram accounts, and had 813,000 Facebook followers and 10,000 Instagram followers. They spent more than $6,000 on ads between July 2012 and April of this year. Content posted by those pages is still under review, the company said.

The last part of the investigation centered on pages, groups, and accounts linked to Russian military intelligence. The Russian campaign appears to be unrelated to the Iranian one, the company said. Posts from the campaign focused on politics in Syria and Ukraine, but did not target the United States, Facebook said.

The company notified US law enforcement officials about the investigation and is working with them as it continues to review posts from the campaigns, it said.

Twitter also removed accounts after being tipped off by FireEye. In a thread on Twitter, the company’s safety team said it was cooperating with law enforcement.

Last month, Facebook announced it had identified suspicious accounts that were engaged in “coordinated inauthentic behavior,” which may have been intended to influence the US midterm elections. At the time, Facebook declined to specify which country or countries may have been behind the campaign, although officials said the campaign was consistent with previous Russian attacks.

That campaign included activity designed to inflame tensions around divisive topics, including white supremacy and the US Immigration and Customs Enforcement (ICE) agency.

Update, 8:27 p.m.: This article has been updated to include information about Twitter removing accounts.

The US government alleges Facebook enabled housing ad discrimination

Facebook has a new headache when it comes to housing advertisements. The federal government has filed charges (via Axios) that the social media site violated the Fair Housing Act by allowing ads to discriminate against some protected groups.

The US Department of Housing and Urban Development (HUD) filed its complaint last week over the company’s advertising practices, something that investigative reporting and nonprofit groups have alleged for the last two years. In 2016, a ProPublica investigation revealed that anyone advertising housing could discriminate on the basis of race, and a year later, a follow-up investigation found that Facebook hadn’t solved the problem. The company had updated its advertising policies, but despite those updates, discriminatory ads still made it through the company’s review process.

As a result, the National Fair Housing Alliance and other groups filed a lawsuit against Facebook in federal court in March, alleging that the company had violated the 1968 law by permitting advertisers to discriminate. The company said that the lawsuit was “without merit,” but in July, it signed an agreement with the state of Washington to remove advertisers’ ability to exclude specific groups on the basis of race, religion, sexual orientation, and other protected characteristics.

Now, the federal government is addressing the problem. In its complaint, it alleges that “Facebook unlawfully discriminates by enabling advertisers to restrict which Facebook users receive housing-related ads based on race, color, religion, sex, familial status, national origin and disability. Facebook’s ad targeting tools then invite advertisers to express unlawful preferences by suggesting discriminatory options, and Facebook effectuates the delivery of housing-related ads to certain users and not others based on those users’ actual or imputed protected traits.”

It goes on to outline that the company “enables advertisers to discriminate” by showing ads only to men or women, and not showing ads to users “whom Facebook categorizes as interested in ‘assistance dog,’ ‘mobility scooter,’ ‘accessibility,’ or ‘deaf culture,’” as well as users who list things like childcare, parenting, religious interests, or various countries.

HUD describes the practice as widespread across the United States and ongoing, and notes that it’s impacted an “undetermined” number of aggrieved users. A Facebook spokesperson told Axios that “there is no place for discrimination on Facebook; it’s strictly prohibited in our policies. Over the past year we’ve strengthened our systems to further protect against misuse,” and that the company will “respond in court; and we’ll continue working directly with HUD to address their concerns.”

Twitter users are protesting Alex Jones with a viral block list

Last week, we talked about why Facebook banned Alex Jones — and Twitter didn’t. Facebook saw that Jones, who had already violated any number of the platform’s rules, had no intention of reforming himself. Twitter first said that Jones had not broken any rules, and then — after CNN’s Oliver Darcy showed the company a series of offending tweets — that he had, but not badly enough to get banned.

Late on Tuesday, Twitter took another half-step toward banning Jones — suspending him for a week, after he posted a video on Twitter in which he encouraged his followers to get their “battle rifles” in anticipation of all-out war with his enemies.

In the mind of Jack Dorsey, Twitter’s co-founder and CEO, this suspension represented an opportunity for Jones to reflect on his bad behavior. “I feel any suspension, whether it be a permanent or a temporary one, makes someone think about their actions and their behaviors,” Dorsey told NBC News’ Lester Holt, in one of two interviews he did on Wednesday.

In the spirit of thinking about their actions and behaviors, Jones’ crew more or less immediately posted the battle-rifles video to the separate Infowars account. That earned the Infowars account a weeklong suspension of its own. Twitter being Twitter, the offending video remained viewable on Twitter-owned Periscope for nearly a day afterward. (Elsewhere in Twitter being Twitter, the Jones account continued to tweet for some time after his suspension, because it turns out that if you schedule tweets to post before you get suspended, those tweets will continue to post just fine.)

After introducing this round of half measures, Dorsey sat down with the Washington Post’s Tony Romm and Elizabeth Dwoskin to announce that he was “rethinking the core of how Twitter works.”

“The most important thing that we can do is we look at the incentives that we’re building into our product,” Dorsey said. “Because they do express a point of view of what we want people to do — and I don’t think they are correct anymore.”

A now common criticism of Twitter holds that the viral mechanics through which tweets spread encourage the polarization of the audience into warring tribes. (See this Ezra Klein piece from last week.) That’s one way to explain why malicious users like Jones are able to thrive on social networks: their bombastic speech attracts a wave of initial attention, and platform algorithms help them find a much larger audience than they ever would otherwise. It’s in this sense that “incentives built into the product,” as Dorsey calls them, bear reconsideration.

Dorsey has more ideas. Labeling automated bots to distinguish them from accounts run by real people, for example. Or this one, cribbed from YouTube:

One solution Twitter is exploring is to surround false tweets with factual context, Dorsey said. Earlier this week, a tweet from an account that parodied Peter Strzok, an FBI agent fired for his anti-Trump text messages, called the president a “madman” and garnered more than 56,000 retweets. More context about a tweet, including “tweets that call it out as obviously fake,” could help people “make judgments for themselves,” Dorsey said.

This is all fine, so far as it goes. Along with other tech leaders, Dorsey is expected to testify next month at a Senate hearing about information campaigns in politics. It makes sense that the CEO of Twitter would seek to convey a sense of urgency around solving the problems that have bedeviled the platform for many years now.

And yet at the same time, Twitter has never lacked for ideas. Ask anyone who ever worked there: any feature suggestion you could offer had already been debated ad nauseam. The problem always came down to the details, to the implementation, to how you were going to ship the damned thing.

That’s why I can view Dorsey’s vague promises on Wednesday only through the prism of the Alex Jones saga. Twitter was the very last of its peers to take any action against the Infowars host, and even when it did decide to punish him, it did so in the most lenient possible terms.

It offered Jones a loophole that let him keep tweeting. It left the offending video up for many hours. And it promised Jones that he could return — and in just a week, too. Twitter knew it had to punish Jones for his behavior. The trouble, as always for this company, was in the details.

But as the company dithers, its users are organizing. This week, Grab Your Wallet founder Shannon Coulter had a viral Twitter thread suggesting a concrete action Twitter users could take to protest Jones’ ongoing presence on the platform. Coulter organized a list containing the Twitter handles of the Fortune 500, then made them available as a collective block list. Protesters could install the block list with a couple of clicks, and once they did, ads from those companies would no longer appear in their Twitter timelines.

As of yesterday, more than 50,000 people had installed her tool. Users have previously gifted Twitter the hashtag, the @ mention, and the retweet; Coulter may have just given us the viral block list. And while Twitter talks endlessly about what it might do someday, a growing faction in its user base is taking action right now.


In March, the United Nations said Facebook is used to incite violence against the Rohingya, a Muslim minority group. Ever since, regular reports have explored how Facebook failed to hire native-language speakers who could have identified hate speech on the platform as it began to spread, and ignored warnings from local groups and regional experts that the situation was getting out of hand.

Reuters’ Steve Stecklow has delivered the most comprehensive account yet of Facebook’s misadventure in Myanmar. His piece reveals the existence of Operation Honey Badger, a content moderation shop focused on Asia that is run by Accenture on Facebook’s behalf. Despite the efforts of its 60 or so moderators, Reuters easily found 1,000 pieces of anti-Rohingya hate speech on Facebook.

In part, that’s because Facebook’s vaunted artificial intelligence systems are failing.

In Burmese, the post says: “Kill all the kalars that you see in Myanmar; none of them should be left alive.”

Facebook’s translation into English: “I shouldn’t have a rainbow in Myanmar.”

So what happens next? Vice’s David Gilbert reports that Facebook is conducting a human rights audit “to assess its role in enabling ethnic violence and hate speech against its Rohingya Muslim minority.”

The audit, which Facebook confirmed, will be conducted by the San Francisco firm Business for Social Responsibility. Gilbert says the report could be finished by the end of this month. The company is also hiring for a variety of policy roles specific to Myanmar, a first for Facebook.

These are important steps, and while it’s unclear what action they might result in, they convey the appropriate degree of seriousness. Facebook — and the wider world — have a lot riding on whether the company gets it right. Activists have described similarly violent outbreaks of hate speech in countries including Vietnam, India, Cambodia, and Sri Lanka. The conflict in Myanmar is bloody, but it is by no means unique.


How social media took us from Tahrir Square to Donald Trump

Zeynep Tufekci offers a concise history of how optimism around social media as a tool for peaceful protest faded into existential worries. Worth reading in full:

First, the weakening of old-style information gatekeepers (such as media, NGOs, and government and academic institutions), while empowering the underdogs, has also, in another way, deeply disempowered underdogs. Dissidents can more easily circumvent censorship, but the public sphere they can now reach is often too noisy and confusing for them to have an impact. Those hoping to make positive social change have to convince people both that something in the world needs changing and there is a constructive, reasonable way to change it. Authoritarians and extremists, on the other hand, often merely have to muddy the waters and weaken trust in general so that everyone is too fractured and paralyzed to act. The old gatekeepers blocked some truth and dissent, but they blocked many forms of misinformation too.

How a Fake Group on Facebook Created Real Protests

Sheera Frenkel reports on a now-deleted Facebook page called Black Elevation, which organized rallies, posted videos, and spoke out about racism. In fact, it was part of the influence operation that Facebook revealed last month:

The Black Elevation organizers may have been trying to slide into the real world by hiring event coordinators or trying to persuade real activists to identify themselves as members of Black Elevation.

Mr. Nimmo said all of the pages Facebook recently removed were aimed at left-wing activists in the United States. It is possible, he added, that a similar influence campaign has been focusing on right-wing activists.

Americans don’t think the platforms are doing enough to fight fake news

Daniel Funke reports on a new survey published by Gallup and the Knight Foundation.

The report, based on web surveys from a random sample of 1,203 U.S. adults, found that 85 percent of Americans don’t think the platforms are doing enough to stop the spread of fake news. Additionally, 88 percent want tech companies to be transparent about how they surface content, while 79 percent think those companies should be regulated like other media organizations — a common trope among journalists.

That’s despite the fact that the majority of people surveyed (54 percent) said social media platforms help keep them informed and that they’re concerned about those companies making editorial judgments.

Transgender Girl, 12, Is Violently Threatened After Facebook Post by Classmate’s Parent

An Oklahoma school shut down after a Facebook group led to violent threats against a transgender student, Christina Caron reports:

A 12-year-old transgender student in a small Oklahoma town near the Texas border was targeted in an inflammatory social media post by the parents of a classmate, leading to violent threats and driving officials to close the school for two days.

It all started on Facebook. Jamie Crenshaw, whose children attend public schools in the town, Achille, complained in a private Facebook group for students’ parents that the transgender girl, Maddie, was using a bathroom for girls.


WhatsApp Co-Founder’s ‘Rest and Vest’ Reward From Facebook: $450 Million

Jan Koum has the best job in the world and it’s not even close. Bless Deepa Seetharaman and Kirsten Grind for this:

After WhatsApp co-founder Jan Koum announced he was leaving Facebook Inc. in late April, he has continued showing up at least monthly at the social-media giant’s headquarters in Menlo Park, Calif. His incentive for making the appearances: about $450 million in stock awards, according to people familiar with the matter.

Mr. Koum’s unusual arrangement with Facebook is one of the more lucrative examples of a Silicon Valley practice sometimes called “rest and vest,” in which the holders of stock grants are allowed to stick around until they qualify to collect a sizable portion of their shares.

Meet The People Who Spend Their Free Time Removing Fake Accounts From Facebook

Craig Silverman introduces us to some heroes of the social realm. (Incidentally, they do not seem particularly impressed with Facebook’s efforts on this front. “It seems like every time we tell them something, they had no idea or didn’t know that was possible,” Denny said. “You can’t tell me that you don’t know some of this. I mean, this is your business, right? This is stuff me and Kathy are doing in our spare time because we are committed to it at this point. But every time Kathy tells them something, it’s like a revelation.”)

Kathy Kostrub-Waters and Bryan Denny estimate they’ve spent more than 5,000 hours over the past two years monitoring Facebook to track down and report scammers who steal photos from members of the US military, create fake accounts using their identities, and swindle unsuspecting people out of money.

During that time they reported roughly 2,000 fake military accounts, submitted three quarterly reports summarizing their findings to Facebook, and even met with Federal Trade Commission, Pentagon, and Facebook employees to talk about their work.

Google-Facebook Dominance Hurts Ad Tech Firms, Speeding Consolidation

The Google-Facebook advertising duopoly has led to consolidation in the ad tech industry, Claire Ballentine reports.

Instagram users are reporting the same bizarre hack

There’s some sort of ongoing Russian attack on individual Instagram accounts, Karissa Bell reports:

Megan and Krista’s experiences are not isolated cases. They are two of hundreds of Instagram users who have reported similar attacks since the beginning of the month. On Twitter, there have been more than 100 of these types of anecdotal reports in the last 24 hours alone. According to data from analytics platform Talkwalker, there have been more than 5,000 tweets from 899 accounts mentioning Instagram hacks just in the last seven days. Many of these users have been desperately tweeting at Instagram’s Twitter account for help.

Amazon Has YouTube Envy

Amazon-owned Twitch is ramping up competition with YouTube, Lucas Shaw reports:

Amazon in recent months has been pursuing exclusive livestreaming deals with dozens of popular media companies and personalities, many with large followings on YouTube. Twitch is offering minimum guarantees of as much as a few million dollars a year, as well as a share of future advertising sales and subscription revenue, according to several people who’ve been contacted by Twitch.


People Raise $300M Through Birthday Fundraisers in First Year

Birthday fundraisers in the News Feed are more than just an engagement hack — they’ve also raised $300 million for charity in a year, Facebook said today. The company also announced user interface upgrades that show you more information about the nonprofits you’re donating to.


Twitter’s Misguided Quest to Become a Forum for Everything

John Herrman says Twitter’s notion of a universal public square is hopeless:

On Twitter, it may seem that you are talking to friends or peers, and that the space is controlled or even safe. But it’s not: It’s shared with and extremely vulnerable to those with a desire to disrupt or terrorize it. In order to function, Twitter must make its users feel at home in the most public space devised by humankind. The platform can’t easily say what smaller intentional forums can: “We don’t want this here; you’re violating the spirit of our community; go away.” It is too big, with too many people present for too many different reasons, to be a site for any one sort of conversation. It exercises absolute authority over its service, of course, but must pretend to do so carefully, sparingly and only when forced to.

And former (I think?) Twitter employee Jared Gaut has a thread worth reading on why he’s taking a break from the service in the wake of Alex Jones-related inaction:

And finally …

Jerry Seinfeld Says Jokes Are Not Real Life

Dan Amira asks Jerry Seinfeld why he doesn’t tell jokes on Twitter:

I don’t hear the laugh. Why waste my time? It’s a horrible performing interface. I can’t think of a worse one. I always think about people that write books. What a horrible feeling it must be to have poured your soul into a book over a number of years and somebody comes up to you and goes, “I loved your book,” and they walk away, and you have no idea what worked and what didn’t. That to me is hell. That’s my definition of hell.

Welcome to hell, Jerry!

Talk to me

Send me tips, questions, comments, human rights audits: casey@theverge.com.

Facebook secures deal to stream Champions League matches in Latin America

Facebook has secured the rights to broadcast UEFA matches in much of Latin America, the association said yesterday. The deal gives Latin American soccer fans the chance to tune into their favorite sport without paying to watch on their local TV stations.

The contract includes a free live stream of both the organization’s Champions League and Super Cup matches, adding up to 32 matches per season. A ton of Latin America’s top players take part in these competitions every year, making this partnership a big deal for many soccer fans.

It’s the latest move in Facebook’s attempt to tackle the live sports streaming market, following its hiring in January of Peter Hutton, then the CEO of the TV network Eurosport. Facebook has struck similar deals with other large sports organizations in the past, like the NBA. But it’s still unclear whether deals like these give the platform an edge over other social networks like Twitter, which secured its own MLB deal earlier this year, or streaming services like YouTube.

Facebook’s UEFA contract will run from 2018 to 2021 and is limited to Spanish-speaking countries. The deal started with this week’s Super Cup on August 15th. Facebook will also share highlights every week there’s a match.

Facebook will start listing where its largest pages are managed from

People managing Facebook pages with large American followings will now need to take extra steps to verify their identity, according to an announcement made by the company today. Now, if these managers want to continue to post on their pages, they’ll need to complete an “authorization process” that includes enabling two-factor authentication and confirming what country they’re based in.

If a page manager needs to authorize their account, they’ll get a notice at the top of their newsfeed later this month, Facebook says. They won’t be able to post on their page until they complete this process. This information will be added to a forthcoming site section called “People Who Manage This Page,” which will list which countries these pages are being managed from. While this will initially appear on pages with a large US audience, it appears that Facebook will eventually roll out this feature across all pages.

When asked how large the pages had to be to fall under this category, Facebook declined to comment. “We aren’t sharing exact numbers, as bad actors may use it to game the system,” a spokesperson said. Facebook also said it would only verify page managers’ locations by checking the location on their phones, rather than requiring physical identification as it has with ad sales in some cases.

Facebook is rolling out these changes as the company grapples with its platform’s role in interference with the 2016 election. Its first change came back in April, when the company began requiring politically minded advertisers to disclose their identity and location. This latest change, meanwhile, is focused on pages, not ads, which are an important slice of the disinformation network. While Russia’s Internet Research Agency sowed political discontent in part by paying $100,000 to place political ads on the platform, much of its disinformation came from a slew of fake accounts and pages. By requiring that page owners disclose their location, Facebook may be able to head off future meddling by foreign groups.

Aside from the admin verification, other changes are coming to pages, too — most notably in the Info and Ads section. Users can now find out when a page merges with another page by looking into its history. According to Facebook, sister company Instagram will roll out similar features to reveal information about profiles with large fanbases.

“Our goal is to prevent organizations and individuals from creating accounts that mislead people about who they are or what they’re doing,” Facebook said in a statement. “These updates are part of our continued efforts to increase authenticity and transparency of Pages on our platform.”

After a single day, Facebook is pushed out of China again

Just one day after Facebook gained permission to open a subsidiary in China, the government pulled the business filing and began to censor mentions of the news. An anonymous source tells The New York Times that Facebook no longer has permission to launch the startup incubator it had planned.

Facebook planned to open up a $30 million subsidiary called Facebook Technology (Hangzhou) and run a startup incubator that would have made small investments in and given advice to local businesses.

The sudden rejection stems from a disagreement between Chinese authorities, the source told the Times. Local officials in Zhejiang, an eastern province that houses Alibaba’s headquarters, gave Facebook the initial permission, but the Cyberspace Administration of China, Beijing’s internet regulator, had not.

According to screenshots of the business filing on the remaining social media posts that haven’t been censored, the subsidiary had been listed as wholly owned by Facebook Hong Kong Limited. Facebook does have a sales office in Hong Kong, which isn’t subject to the rules and censorship of the mainland. In a statement yesterday, the company told The Verge, “We are interested in setting up an innovation hub in Zhejiang to support Chinese developers, innovators and start-ups.”

This would have been the first time that Facebook successfully expanded into China after Beijing blocked the platform in 2009 following its use by Xinjiang independence activists in the Ürümqi riots. Facebook previously tried to open an office in Beijing in 2015 and got as far as obtaining a permit, but ultimately, it was unsuccessful, a pattern that seems to be echoed here. Last year, Facebook quietly launched an app in China called Colorful Balloons that let users share photos with friends. Oculus, Facebook’s VR company, also has an office in Shanghai.

Last week in an interview with Recode, chief executive Mark Zuckerberg expressed significant doubt that his company could successfully reach China. When asked where Facebook was on China, he responded, “I mean, we’re blocked.” He then elaborated on the grim situation: “I mean, we’re a long time away from doing anything. At some point, I think that we need to figure it out, but we need to figure out a solution that is in line with our principles and what we wanna do, and in line with the laws there, or else it’s not gonna happen. Right now, there isn’t an intersection.”

Alex Jones hit with 30-day Facebook suspension for bullying and hate speech

Conspiracy theorist Alex Jones has been suspended from Facebook for 30 days after violating the site’s community guidelines. According to multiple publications, Jones will be unable to use his personal account. But, reports TechCrunch’s Josh Constine, Facebook pages associated with Jones’ name (including “Alex Jones” and “The Alex Jones Channel”) will remain active, with administrators for the pages able to post new content.

In addition to the 30-day personal suspension, four videos were removed from the Facebook page of Jones’ site InfoWars, and the page was served with its first warning. Facebook said that InfoWars’ videos violated its community guidelines by encouraging physical harm against others and attacking individuals for their religious affiliation and gender identity.

Facebook’s actions follow the removal of four videos from Jones’ YouTube channel. Two videos contained hate speech against Muslims; a third targeted transgender people; and a fourth showed a man pushing a child to the ground under the headline “How to prevent liberalism.” It’s not clear if these videos are the same as those removed by Facebook.

TechCrunch reports that three of the videos removed by Facebook had been flagged by moderators for the first time this Wednesday, but a fourth was first highlighted in June and allowed to remain on the site. Citing a source at Facebook, TechCrunch says this last decision was “erroneous,” suggesting that the social media site’s moderation policies are not being followed consistently.

The suspension of Jones’ personal profile is the strongest rebuke the InfoWars creator has taken from Facebook. Earlier in the week, the social media company said a rant by Jones in which he accused special counsel Robert Mueller of raping children and mimed shooting the former FBI director did not violate its guidelines.

Facebook’s head of News Feed integrity, Tessa Lyons, later told reporters that Facebook believes it should limit the spread of hoax videos, but that it would only remove them completely when they created an imminent threat of harm. “We know people don’t want to see false information at the top of their News Feed,” said Lyons.