The European Commission has slammed tech titans Facebook, Twitter, YouTube and Microsoft, saying that they are not moving quickly enough to deal with online hate speech.
In late May, Silicon Valley giants agreed in a “Code of Conduct” to keep tackling illegal hate speech on their platforms. As part of that code of conduct, the companies set a goal of reviewing the majority of notifications of illegal hate speech, and dealing with them, within 24 hours. According to a new report, set to be presented and discussed in Europe on Wednesday and Thursday, tech companies are meeting that 24-hour goal only 40 percent of the time.
“If Facebook, YouTube, Twitter and Microsoft want to convince me and the ministers that the non-legislative approach can work, they will have to act quickly and make a strong effort in the coming months,” Vera Jourova, a commissioner in charge of justice, consumers, and gender equality for the European Commission, said according to the Financial Times.
The report is the result of a monitoring exercise of how hate speech is handled online. According to a European Commission official, there were 316 cases in which a non-profit organization flagged content as being hate speech and one of the tech companies responded. Forty percent of the time, the review happened within 24 hours, and over 80 percent of the time, it happened within 48 hours. Of those 316 cases, there were 163 removals.
The European Commission says that the 24-hour target is achievable but that the tech companies need to increase their efforts. The report was based on a review period of six weeks.
The rate at which illegal hate speech was removed varied considerably between countries. For example, Germany and France saw removal rates of over 50 percent, but in Austria that figure was 11 percent, and in Italy, just 4 percent, according to a Commission official.
Anti-Semitism, hatred based on national origin, and hatred towards Muslims were the three largest categories, together representing the majority of notified content.
Facebook, Twitter and YouTube (owned by Google) did not immediately reply to requests for comment on this story from FoxNews.com. Microsoft declined to comment.
A Google official, however, said in a statement when the code of conduct was announced in May: “We’re committed to giving people access to information through our services, but we have always prohibited illegal hate speech on our platforms. We have efficient systems to review valid notifications in less than 24 hours and to remove illegal content.”