Twitter’s abuse problems have been both ceaseless and well-documented in the years since the platform took off. Despite having a code of conduct in place to deal with trolls and harassers, Twitter has repeatedly failed to enforce its own rules, leaving countless users to suffer at the hands of anonymous jerks. Even worse, when users have attempted to take matters into their own hands, Twitter has blocked them instead of their abusers.

That’s exactly what happened this week with “Imposter Buster,” a proactive bot that was doing its part in the ongoing fight against neo-Nazis. Built by journalist Yair Rosenberg and developer Neal Chandra, the Imposter Buster would seek out impersonator trolls and respond to them, exposing them for what they really were.

As Rosenberg explains in a column for The New York Times, impersonator trolls are some of the most insidious (and effective) trolls on Twitter. They find a photo of a Jewish, Muslim or African-American person, create an account using that photo and a profile stuffed with stereotypical descriptors, then seek out heated conversations and say awful, bigoted, racist things in an attempt to defame an entire group.

Scrolling through the responses to a contentious tweet, the trolls hope that you’ll glance at a comment from “Rabbi Herschel Lieberman” saying something like “Nazis aren’t all bad” and come away with a bad taste in your mouth. But with a crowdsourced database of troll accounts at their disposal, Rosenberg and Chandra were fighting back.
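The basic workflow Rosenberg describes, matching a replying account against a crowdsourced list of known impostors and posting a public warning, can be sketched roughly as follows. This is a minimal illustration, not Imposter Buster's actual code: the database entries, handle, and reply text are all invented for the example, and a real bot would of course talk to the Twitter API rather than an in-memory dictionary.

```python
# A toy sketch of an impostor-flagging bot's core check.
# KNOWN_IMPOSTORS stands in for the crowdsourced database of troll
# accounts; the entries here are hypothetical.
from typing import Optional

KNOWN_IMPOSTORS = {
    "rabbi_h_lieberman_": "impersonates a Jewish user",
}

def check_reply(author_handle: str) -> Optional[str]:
    """Return a public warning if the author is a known impostor,
    otherwise None."""
    reason = KNOWN_IMPOSTORS.get(author_handle.lower())
    if reason is None:
        return None
    return (f"Heads up: @{author_handle} is a known impostor account "
            f"({reason}). Don't take this tweet at face value.")

# The bot would post the returned warning as a reply in the thread.
print(check_reply("Rabbi_H_Lieberman_"))
```

The key design point is that the check keys off the crowdsourced database, so the community, not the bot's authors alone, decides which accounts get flagged.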

At least, until the same Nazis they were trying to stop began reporting the Imposter Buster en masse, leading to a temporary suspension back in March, and a seemingly permanent one in December. In other words, as Rosenberg puts it, “Twitter sided with the Nazis.”

Twitter’s decision to shut down a bot that was combating hate speech and fraud pretty much sums up the company’s utter incompetence. At the end of the day, Twitter thrives off of controversy, and removing its most controversial users (or even upsetting them) is not a profitable decision. So rather than giving users an opportunity to responsibly police a platform it refuses to keep safe, Twitter let an anti-Nazi bot be beaten by the very Nazis it was fighting.
