On Monday, Twitter announced new and stricter rules banning bigoted content and hate groups from its platform. It also said it would begin enforcing its anti-hate and violence rules more stringently than it has in the past.
The company is responding to pressure from its users, who have begged for both clearer rules and stronger enforcement for years.
“Freedom of expression means little if voices are silenced because people are afraid to speak up,” reads Twitter’s new hateful-conduct policy. That’s a new line for a company that had long insisted that—even in privately owned forums like its messaging service—only good speech could fight bad speech.
The new rules specifically ban content that includes “violent threats or multiple slurs, epithets, [and] racist or sexist tropes,” as well as material that “incites fear, or reduces someone to less than human.” It also prohibits groups that advocate violence against civilians.
Depending on how they’re interpreted, the new rules could give moderators wide latitude to suspend and ban users who encourage violence against civilians or propagandize for hate groups. The guidelines do not draw a distinction between user behavior on or off the site: If someone tweets only in coded language on Twitter, but calls for racial violence or genocide elsewhere on the web or in person, then they could still be banned from the service.
Tweeting logos or symbols affiliated with hate groups will not get a user banned from the service, but such images will carry a sensitive-media tag, meaning they will not automatically display to the site’s users.
But “context matters when evaluating for abusive behavior,” warns Twitter, and it has included two big exceptions in the new precepts. First, its ban on advocating violence against civilians does not apply to “military or government entities.” Second, it may moderate its own rules if “the behavior is newsworthy and in the legitimate public interest.”
It’s not hard to figure out the famous Twitter user who might be most helped by those two loopholes.
So far, the two highest-profile users to get kicked off the service are Jayda Fransen and Paul Golding, the leaders of Britain First, an ultra-nationalist and virulently anti-Islam U.K. political party and “street-defense organization.”
Last month, President Trump retweeted a few of Fransen’s fake anti-Muslim videos to his more than 43 million followers. Theresa May condemned the president’s retweets, saying that Britain First spread “hateful narratives that peddle lies and stoke tensions.” Britain First has an estimated 1,000 followers in the United Kingdom.
These rules aren’t just an insurance policy for the company—they’ve already been used to shield the president from suspension. In September, when Trump warned in a tweet that “Little Rocket Man… won’t be around much longer,” the company said the threatening tweet didn’t violate its guidelines because it was “newsworthy.”
Now the company has slapped on another policy, and Trump—and other government and military leaders—will get the same monopoly on violence on Twitter that they already enjoy out in the world.