YouTube still can’t stop child predators in its comments

YouTube is facing a new wave of criticism over videos of children and the predatory comments left on them.

The latest concern started with Reddit posts and a YouTube video submitted to r/Drama by Matt Watson, which he said exposed "a wormhole into a soft-core pedophile ring" on YouTube. Watson, a former YouTube creator who returned with a single video and livestream, explores how videos of young children surface alongside videos like "bikini hauls," in which women show off the variety of bikinis they have bought. The videos of children are not pornographic in nature, but the comment sections are full of people timestamping specific moments that sexualize the children, along with comments about how pretty the girls are.

Although Watson's video is getting mainstream attention, this is not the first time YouTube has dealt with the issue. The company updated its policies in 2017 to address the incident known as "ElsaGate," in which children were targeted with disturbing and sexualized content. That same year, YouTube decided to disable the comment sections on certain videos featuring children in order to deter predatory behavior. As early as 2013, Google and YouTube changed their search algorithms to keep abusive content out of search results. But despite years of public outcry, YouTube still has not found a way to effectively deal with apparent predators on the platform.

At the heart of the problem is YouTube's recommendation algorithm, which has been widely criticized in the past. Watson demonstrates how watching a video of a woman showing off the bikinis she has bought leads, within a couple of clicks, to recommendations for videos of young girls playing. The videos themselves are innocent, but what sits beneath them — including timestamps calling out certain angles and predatory responses to the imagery — is not.

"YouTube's recommended algorithm is facilitating pedophiles' ability to connect with each other, trade contact info, and link to actual child pornography," Watson wrote on Reddit. "Using a brand-new, never-before-used 'vanilla' YouTube account, you can find this wormhole of otherwise harmless videos in fewer than five clicks and about 10 minutes."

The Verge was able to replicate the situation several times; each attempt took six clicks or fewer to surface videos whose comment sections were filled with predatory remarks. In a statement, a YouTube spokesperson said that several of the videos Watson highlighted have since been removed.

"Any content — including comments — that endangers minors is abhorrent, and we have clear policies prohibiting it on YouTube," the spokesperson said. "We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue."

YouTube has tried several enforcement tactics, but the problem keeps resurfacing. In November 2017, Johanna Wright, YouTube's vice president of product management, published a blog post laying out new ways the company would address the issue. Beyond removing videos flagged internally or by the public, its most direct attempt to stop the behavior from spreading on YouTube was to close comment sections.

"Comments of this nature are abhorrent, and we report them to law enforcement," Wright wrote in 2017. "Starting this week, we will begin taking a more aggressive stance by turning off all comments on videos of minors where we see these types of comments."

While YouTube claims that most of these videos now have comments disabled, that appears to be untrue: most of the videos The Verge surfaced still had comments enabled. Many of the videos Watson features are reuploads, which can be shared privately between users and are filled with timestamps and disturbing comments. YouTube uses a combination of machine learning tools and human reviewers — including those in its Trusted Flagger program — to find and remove these videos. But with some 450 hours of content uploaded every minute and more users logging on each month, some videos inevitably slip through.

Even with Trusted Flaggers, there are roadblocks that prevent a quick response. "Content flagged by Trusted Flaggers is not automatically removed or subject to any different policies — the same standards apply as for flags from other users," said one source who asked to remain anonymous. "However, because of their accuracy, flags from Trusted Flaggers are prioritized for review." The source did not say what became of the content that was reported.

One of Watson's biggest questions is why these videos are not flagged in the first place. Many were uploaded by suspicious accounts, and comment sections full of predatory remarks should have made clear that something was wrong. A YouTube spokesperson did not respond when The Verge raised this question. YouTube's Community Guidelines prohibit videos that sexually exploit children, warning that "uploading, commenting, or participating in" this kind of sexualized activity "may result in content being removed and your account being terminated."
