European Union regulators keep making headlines for their increased willingness to take big tech companies like Google to task over everything from perceived anti-competitive behavior to not moving fast enough to take down problematic content. It’s a bright line separating Europe’s hard-line approach from the lighter, more hands-off touch U.S. regulators bring to bear on those same companies.
Indeed, the EU keeps turning up the heat. The most recent push came today with word that new rules are in the works that would slap big fines on social media giants like Facebook and Twitter if they’re found to be too slow in taking down terror-related content. EU Commissioner for Security Julian King gave the Financial Times a preview of draft legislation apparently coming next month, and among its provisions is this centerpiece:
If law enforcement authorities find content published on a tech platform they mark as terrorist-related, that would start a clock for companies like Facebook. They’ll have one hour to take it down, or fines would kick in. “We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon,” King told the FT.
“The difference in size and resources means platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent. All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform.”
Any draft legislation would still need to win approval by a majority of the 28 member states in the EU, but that won’t likely be a problem. Germany, for example, has already introduced fines of up to the equivalent of $57 million for tech companies that let hate speech-related content linger, and British Prime Minister Theresa May is also on board with the sentiment.
For some context behind the EU’s push, the BBC cites a study published last month by the not-for-profit Counter Extremism Project, which found that between March and June, 1,348 videos related to ISIS were uploaded to YouTube from 278 accounts — videos that garnered more than 163,000 views.
The report goes on to say that 24 of the videos remained online for more than two hours. Another key fact the BBC notes: if the EU rules are approved, this will be the first time the European Commission has explicitly targeted tech firms over their handling of illegal content.
The companies themselves certainly know there’s a problem. Twitter, in its most recent transparency report, disclosed that between July and December of 2017, more than 274,000 accounts were permanently suspended for violations related to the promotion of terrorism.
The EU’s new action comes in the wake of the record $5 billion fine levied against Google in July over claims that the rules the company sets around its Android operating system are anti-competitive.