Using AI to screen live video of terrorism is ‘very far from being solved,’ says Facebook AI chief

When faced with tough questions about how Facebook will eliminate terrorist content from its platforms, CEO Mark Zuckerberg offers a simple answer: artificial intelligence will do it. But according to Facebook's chief AI scientist, Yann LeCun, AI is still many years away from being able to shoulder the burden of moderation, particularly when it comes to policing live video.

Speaking at an event at Facebook's AI research lab in Paris last week, LeCun said the company was years away from using AI to moderate live video at scale, Bloomberg News reports.

"This problem is far from solved," said LeCun, who recently received the Turing Award, known as the Nobel Prize in computing, along with other AI luminaries.

Policing live video is a particularly pressing issue at a time when terrorists commit atrocities with the aim of going viral. Facebook's inability to meet this challenge became painfully clear after the Christchurch shooting in New Zealand this year. The attack was broadcast live on Facebook, and although the company says fewer than 200 people watched it during the live transmission, copies of that broadcast were downloaded and shared across the rest of the internet.

The inability of automated systems to understand and block content like this is no surprise to AI experts like LeCun. They have long warned that machine learning simply cannot grasp the variety and nuance of these videos. Automated systems are very good at removing content that humans have already identified as unwanted (Facebook says it automatically blocks 99 percent of al-Qaeda terrorist content, for example), but detecting examples never seen before is a much more difficult task.

One problem LeCun highlighted in Paris is the lack of training data. "Fortunately, we don't have a lot of examples of real people shooting other people," the scientist said. It is possible to train systems to recognize violence using footage from movies, he added, but then content containing simulated violence would be inadvertently removed along with the real thing.

Instead, companies like Facebook are focusing on using automated systems as assistants to human moderators: the AI flags problematic content, and humans review it manually. Of course, human moderation has problems of its own.

So the next time someone presents AI as a silver bullet for online moderation, remember: the people actually building these systems know it is much harder than that.
