Here’s how the EU plans to fight online terrorist content

Although acts of terrorism take place in the real world, they take on a kind of afterlife online. Footage like the video of the recent Christchurch shooting proliferates as supporters upload it to any media platform they can access. Lawmakers in Europe have had enough, and this year they hope to enact new legislation that holds big technology companies such as Facebook and Google responsible for any terrorist content they harbor.

The legislation was first proposed by the EU last September in response to the spread of ISIS propaganda, which experts say has prompted further attacks. It covers recruitment materials such as demonstrations of a terrorist organization's strength, instructions on how to carry out acts of violence, and anything that glorifies the violence itself.

This material is an important part of the strategy for recruiting terrorists, say the legislation's supporters. "Whether it was the Nice attacks, whether the Bataclan attack in Paris, whether in Manchester, […] they all had a direct link to extremist content online," says Lucinda Creighton, a senior advisor at the Counter Extremism Project (CEP), a campaign group that has helped shape the legislation.

The new laws would require platforms to remove any terrorist content within one hour of receiving a notice, force them to use a filter to ensure it is not re-uploaded, and, if they fail in either of these duties, allow governments to fine companies up to 4 percent of their total annual revenue. For a company like Facebook, which earned about $17 billion in revenue last year, that could mean fines of up to $680 million (around €600 million).
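As a rough back-of-the-envelope check of those figures (a minimal sketch; the revenue number is simply the one cited above):

```python
# Back-of-the-envelope check of the maximum fine cited above.
annual_revenue = 17_000_000_000   # ~$17 billion, the figure quoted in the article
max_fine_rate = 0.04              # fines of up to 4 percent of annual revenue

max_fine = annual_revenue * max_fine_rate
print(f"Maximum fine: ${max_fine:,.0f}")  # Maximum fine: $680,000,000
```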

Advocates of the legislation say it is a set of common-sense proposals designed to stop extremist content online from turning into real-world attacks. But critics, including internet freedom think tanks and big technology companies, argue that the legislation threatens the principles of a free and open internet and could endanger the work of groups that document and counter terrorism.

The proposals are currently making their way through the committees of the European Parliament, so much could change before the legislation becomes law. Both sides want to strike a balance between protecting freedom of expression and stopping the spread of extremist content online, but they have very different ideas about where that balance lies.

Why is legislation necessary?

Terrorists use social media to promote themselves, just like big brands do. Organizations like ISIS use online platforms to radicalize people around the world. Those people may then travel to join the organization's ranks in person or commit terrorist attacks in support of ISIS in their home countries.

In its heyday, ISIS ran a devastatingly effective social media operation that instilled fear in its enemies and recruited new supporters. By 2019, the organization's physical presence in the Middle East had been all but eliminated, but supporters of the legislation argue that this makes stricter online standards even more necessary. As the group's physical power has diminished, the online war of ideas has become more important than ever.

"Each attack in the last 18 months or two years or so has an online dimension." Either to incite or in some cases give instructions, give instructions or glorify, "said Julian King, a British diplomat and commissioner European Security Union, to The Guardian when the laws were proposed for the first time.

King, who has been a driving force behind the new legislation within the EU, says that the growing frequency with which terrorists "self-radicalize" through online material shows the importance of the proposed laws.

Why a one-hour takedown limit?

The one-hour takedown is one of the two main obligations the legislation would place on technology companies.

Under the proposals, each EU member state would designate a so-called "competent authority." It is up to each member state to decide exactly how this body works, but the legislation makes it responsible for flagging problematic content. That includes videos and images that incite terrorism, provide instructions on how to carry out an attack, or otherwise promote participation in a terrorist group.

Once the content has been identified, this authority would send a removal order to the platform hosting it, which could then either delete it or disable access to it for any user within the EU. Either way, action must be taken within one hour of the notice being issued.
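A minimal sketch of what that obligation might look like from a platform's side (all names and fields here are hypothetical; the legislation sets the deadline, not a technical interface):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

TAKEDOWN_WINDOW = timedelta(hours=1)  # the deadline set by the proposed rules

@dataclass
class RemovalOrder:
    content_id: str       # hypothetical platform-internal identifier
    issued_at: datetime   # when the competent authority issued the order
    authority: str        # the member state's designated body

def handle_order(order: RemovalOrder, now: datetime) -> str:
    """Act on a removal order and report whether the one-hour window was met."""
    deadline = order.issued_at + TAKEDOWN_WINDOW
    # A platform may choose between full deletion and EU-only blocking.
    action = "deleted"
    if now > deadline:
        return f"{action}, but past the deadline ({deadline.isoformat()})"
    return f"{action} within the window (due by {deadline.isoformat()})"
```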

It's a tight time limit, but removing content quickly is important to stop its spread, according to Creighton.

Creighton says the organization's research suggests that if content is left up for more than an hour, its audience multiplies tenfold. Although that research focused on YouTube, the legislation would apply the same time limit to all social media platforms, from major sites such as Facebook and Twitter to smaller ones such as Mastodon and, yes, even Gab.

This obligation is similar to voluntary rules that already exist, which encourage technology companies to remove content flagged by police and other trusted agencies within one hour.

What is new, however, is the addition of a legally mandated upload filter, which would, in theory, stop the same pieces of extremist content from being re-uploaded over and over after being flagged and removed, although such filters have sometimes been easy to circumvent in the past.

"The frustrating thing is that [extremist content] has been marked with technology companies, it has been removed and reappears a day or two or a week later," says Creighton, "that has to stop and that's what this Legislation objectives ".

The filter proposed by Creighton and her colleagues would use software to generate a code known as a "hash" for any piece of extremist content identified by a human moderator. Any content uploaded in the future could then be quickly compared against this database of hashes and blocked if a match is found.

Creighton says this type of software has been instrumental in stopping the spread of child abuse imagery online, and that a similar approach could work for extremist content.
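A minimal sketch of the hash-matching idea, using an exact cryptographic hash for simplicity (deployed systems typically use perceptual hashes, which also catch slightly altered copies; every name here is illustrative, not part of any real filter):

```python
import hashlib

# Hashes of content already identified as extremist by human moderators.
blocked_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Exact SHA-256 fingerprint. A production filter would use a
    perceptual hash so re-encoded or cropped copies still match."""
    return hashlib.sha256(data).hexdigest()

def flag_content(data: bytes) -> None:
    """Record a fingerprint after a human moderator flags the content."""
    blocked_hashes.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Block any upload whose fingerprint matches flagged content."""
    return fingerprint(data) not in blocked_hashes
```

The weakness mentioned above follows directly from this design: change a single byte of a file and an exact hash no longer matches, which is why real deployments lean on fuzzier perceptual matching and can still miss edited copies.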

However, identifying extremist content is not the same as identifying child abuse imagery. There is no legitimate use for videos depicting child abuse, but some extremist content can be newsworthy. After the recent Christchurch shooting, for example, YouTube's moderation team had to manually review re-uploads of the shooter's footage to make sure that news coverage featuring the material was not inadvertently blocked.

So, what's the problem?

Critics say that governments could use the upload filter to censor their citizens, and that aggressive removal of extremist content could prevent non-governmental organizations from documenting events in war-torn regions.

One prominent opponent is the Center for Democracy and Technology (CDT), a think tank funded in part by Amazon, Apple, Facebook, Google, and Microsoft. Earlier this year, it published an open letter to the European Parliament, saying that the legislation "would lead Internet platforms to adopt unproven and poorly understood technologies to restrict online expression." The letter was co-signed by 41 activists and organizations, including the Electronic Frontier Foundation, Digital Rights Watch, and Open Rights Group.

"These filtering technologies are certainly being used by large platforms, but we do not believe it is correct that the government obliges companies to install technology in this way, "CDT European affairs director Jens-Henrik Jeppesen told The Verge in an interview.

Removing certain content, even when a human moderator has correctly identified it as extremist in nature, could prove disastrous for the human rights groups that rely on it to document attacks. In Syria's civil war, for example, recordings of the conflict are one of the only ways to prove when human rights violations occur. Yet between 2012 and 2018, Google removed more than 100,000 videos of attacks carried out in the war, destroying vital evidence of what happened. The Syrian Archive, an organization that aims to verify and preserve footage of the conflict, has been forced to back up the images on its own site to keep the records from disappearing.

Opponents of the legislation, such as the CDT, also say the filters could end up working like YouTube's frequently criticized Content ID system. Content ID lets copyright holders file takedowns against videos that use their material, but the system sometimes removes videos posted by their rightful owners and can mistakenly flag original clips as infringing. It can also be easily fooled.

Opponents of the legislation also believe that the current voluntary measures are enough to stem the flow of terrorist content online. They argue that most terrorist content has already been removed from the major social networks, and that a user would have to go out of their way to find such content on a smaller site.

"It is disproportionate to have new legislation to see if the remaining 5 percent of the available platforms can be disinfected," says Jeppesen.

However, Creighton says that all social networks, regardless of their size, should meet the same standards, and that those standards should be decided democratically. Right now, each social network has its own internal tools and processes for moderating content, and there is very little public information about any of them.

At the moment, "all technology companies basically apply and adhere to their own rules," says Creighton. "We have zero transparency."

Under the proposals, all technology companies could be forced to use the same filtering technology, which means they would benefit from sharing results between platforms, across EU member states, and with law enforcement agencies such as Europol. That is fine if you trust the EU's commitment to the rule of law, but it has the potential to lock out non-governmental organizations like the Syrian Archive if governments do not grant them authority to access extremist content.

These organizations need to be able to see this content, however troubling it may be, in order to investigate war crimes. Their independence from governments is what makes their work valuable, but it could also mean they are shut out under the new legislation.

Creighton does not believe that free and public access to this information is the answer. She argues that the need to "analyze and document the recruitment for ISIS in East London" is not a good excuse to leave content on the internet if the existence of that content "leads to a terrorist attack in London, Paris, or Dublin."

What happens next?

The legislation is currently making its way through the European Parliament, and its exact wording could still change. At the time of publication, the lead committee on the legislation was due to vote on its report on the draft regulation on April 1st. After that, the bill must go through the trilogue phase, in which the European Commission, the Council of the European Union, and the European Parliament debate its content before it can finally be approved by a vote of the full Parliament.

Because the bill is far from final, neither its opponents nor its supporters believe the concluding vote will take place before the end of 2019. That is because the current European Parliament's mandate ends next month, and elections must take place before the next term begins in July.

That timing spells trouble for the bill. The UK is still planning to leave the EU this year, and a major force behind the bill has been the British diplomat Julian King. If Brexit goes through, he will no longer be involved. To complicate matters further, the MEP who chairs the lead committee on the legislation, Claude Moraes, is also British.

The departure of King and Moraes from the EU's institutions is unlikely to kill the bill, but Creighton suggests it could sap the legislation's political momentum.

"I think the goal now must be for Julian King to get this, as far as possible, before he leaves office, and then wait for the next parliament to resume it very quickly," he says.

If the events of the past month have taught us anything, it is that the major platforms are not prepared for terrorists and their supporters manipulating them with floods of extremist content. The EU has the size and scale to genuinely intervene, but there is a fine line between helping and undermining a platform's ability to solve its own problems.
