How extremism came to thrive on YouTube

A system designed to capture as much user attention as possible succeeds beyond all expectations, only to end up promoting dangerous misinformation and hate speech around the world. It's a story we have often considered in the context of Facebook, which has responded to criticism by promising to change the very nature of the company. And it's a story we have not discussed enough in the context of YouTube, which has promoted a similarly disturbing network of extremists and foreign propagandists, and which has tended to intervene only cautiously, and under overwhelming pressure.

Certainly, YouTube has received its share of criticism since the broader reckoning over social networks began in 2016. Google CEO Sundar Pichai was made to answer questions about the video giant when he appeared before Congress last year. But on the whole, we have had little insight into how YouTube makes high-level decisions about its algorithmic recommendations and its inadvertent cultivation of a new generation of extremists. What do employees say about the phenomenon of "bad virality": YouTube's unmatched ability to take a piece of misinformation or hate speech and, using its opaque recommendation system, deliver it to the widest possible audience?

In an important new report for Bloomberg, Mark Bergen begins to give us some answers. At nearly 4,000 words, it describes how YouTube chased user attention with single-minded zeal, suppressed internal criticism, and even discouraged employees from searching for videos that violated its rules, for fear that the company would lose its safe harbor protections under the Communications Decency Act. As late as 2017, YouTube CEO Susan Wojcicki was reportedly pushing to overhaul the company's business model to pay creators based on the attention they attracted, despite growing internal evidence that these engagement-based metrics encourage the production of videos designed to outrage people, raising the risk of real-world violence.

Bergen reports:

In response to criticism that it prioritized growth over safety, Facebook has proposed a dramatic change to its core product. YouTube, by contrast, still struggles to explain any new corporate vision to the public and investors, and sometimes to its own staff. Five senior employees who left YouTube and Google in the last two years privately cited the platform's inability to tame extreme and disturbing videos as the reason for their departure. […]

YouTube's inertia was laid bare again after a deadly measles outbreak drew public attention to vaccination conspiracies on social media several weeks ago. New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than twenty YouTube channels spreading these lies reached more than 170 million viewers, many of whom were then recommended other videos loaded with conspiracy theories.

Bergen's story is, in some ways, a mirror of the New York Times' November story about how Facebook first ignored and then tried to minimize warning signs about the platform's unintended consequences. Both pieces illustrate the ugly way our social networks have developed: phase one is an all-out war to win users' attention and build an advertising business; phase two is a belated effort to clean up the many problems that come with global scale faster than new ones can arise.

Like Facebook, YouTube has begun addressing some of the concerns raised by the employees who left. Most notably, in January the company said it would stop recommending what it calls "borderline content": videos that come close to violating its community guidelines but stop just short. Last year, it also began adding links to relevant Wikipedia entries on some common hoaxes, such as videos claiming that the Earth is flat.

At South by Southwest, before announcing the Wikipedia feature, YouTube CEO Susan Wojcicki compared the service to a humble library: a neutral repository for much of the world's knowledge. It's a framing that casts YouTube as a noble civic institution while obscuring its power. After all, most libraries do not hand members an even more radical version of the book they were reading the moment they finish the last one.

One extremist who has used the platform nimbly in recent years is Tommy Robinson, a far-right activist who previously led an Islamophobic, anti-immigration organization in the United Kingdom. Robinson's anti-Islam posts were harmful enough that he was banned from Instagram and Twitter last week. Today, YouTube decided to let him keep his account and its 390,000 subscribers, Mark DiStefano reports:

While YouTube is stopping short of a total ban, the restrictions will mean that Robinson's new videos will not have view counts, suggested videos, likes, or comments. There will be an "interstitial," or black slate, that appears before each video warning people that it might not be appropriate for all audiences.

Robinson will also not be able to broadcast live to his channel.

These tools may remind you of Pinterest's approach to anti-vaccine misinformation, which I wrote about in February. Robinson keeps his freedom of speech (he can still upload videos), but he is denied what Aza Raskin has called "freedom of reach." It's an approach that, in general, I favor. And yet I still shudder at another revelation from Bergen's report: an internal YouTube tool built by a dissenting employee showed that far-right creators like Robinson have become a pillar of the community:

One employee decided to create a new YouTube "vertical," a category the company uses to group its mountain of video footage. This person gathered videos under an imagined vertical for the "alt-right," the political grouping loosely tied to Trump. Based on engagement, the hypothetical alt-right category sat alongside music, sports, and gaming as among the most popular channels on YouTube, an attempt to show how critical these videos were to YouTube's business.

Some of YouTube's initiatives to reduce the spread of extremism are in their early stages, and a worrying amount of it remains on the platform. Here is Ben Makuch in Motherboard today:

But even in the face of those horrific terrorist attacks, YouTube remains a bastion of white nationalist militancy. In recent days, Motherboard has seen white nationalist and neo-Nazi propaganda videos on the site that have gone undetected by YouTube, been allowed to remain on the platform, or been uploaded only recently.

When specific examples were shown to YouTube by Motherboard, the company told us that it demonetized the videos, placed them behind a content warning, removed some features such as likes and comments, and removed them from recommendations, but ultimately decided to leave the videos online. The videos are still easily accessible through search.

Last month, when I wrote about the difference between platform problems and internet problems, I noted that the ultimate question we are wrestling with is how free the internet should be. YouTube's openness has benefited a large and diverse group of creators, most of whom are innocuous. But reading today about Cole and Savannah LaBrant, famous internet parents who tricked their 6-year-old daughter into believing they were giving away her puppy and filmed her reaction, it's fair to ask why YouTube so often drives its creators to madness.

Extremism in all its forms is not a problem that YouTube can solve alone. But what makes Bergen's report so disturbing is the way YouTube unwittingly promoted extremists until they became one of its most powerful constituencies. In a very real way, extremism is a pillar of the platform, and disentangling the best of YouTube from its rotten core promises to be as difficult as anything the company has ever done.

Democracy

As India Votes, False Messages and Hate Speech Flummox Facebook

India has seen a surge of false news ahead of its upcoming election, Vindu Goel and Sheera Frenkel report:

The avalanche of false messages gave Facebook a taste of what is to come as India prepares for the world's largest election. Prime Minister Narendra Modi and his Bharatiya Janata Party are seeking another five years in power, and as many as 879 million people are expected to vote over five weeks starting on April 11.

But as the campaign gets underway, Facebook is already struggling to cope with disinformation and hate speech on its core social network and on WhatsApp, its popular messaging service.

What happens next in the housing discrimination case against Facebook?

Adi Robertson examines HUD's legal strategy in its lawsuit against Facebook:

HUD is also making some additional claims that could complicate Facebook's defense. In addition to calling out the tools that let advertisers select audience categories, it is condemning the invisible process Facebook uses to deliver ads. "[Facebook's] ad delivery system prevents advertisers who want to reach a broad audience of users from doing so," it says, because the system is likely to steer away from "users whom the system determines are unlikely to engage with the ad, even if the advertiser explicitly wants to reach those users."

HUD does not have to establish that these targeting algorithms are designed to avoid showing ads to certain protected classes. It only has to show that the system makes housing less accessible to those people, a concept known as disparate impact. "If there is an algorithm that discriminates against racial minorities or gender minorities or whatever, I think it would still be problematic," says Glatthaar's colleague Adam Rodriguez. He compares the situation to a zoning restriction whose text and intent are neutral but which directly results in fewer black residents, which would likely still be considered discriminatory.

Facebook's new tools to block discriminatory ads will not apply outside the United States

Catherine McIntyre reports that Facebook's announcement that it will prevent advertisers from discriminating against certain protected categories applies only in the United States:

Two weeks ago, the social media giant said it would block the features that let advertisers discriminate by age and gender. However, the changes will only apply in the United States. And tests conducted by The Logic show that Facebook is currently approving ads in Canada that appear to discriminate.

Googlers protest AI advisory board member's anti-LGBT and anti-immigrant comments

Ina Fried reports that Google has no plans to reverse its decision to place Heritage Foundation president Kay Coles James, who has made anti-LGBT and anti-immigrant comments, on a key AI advisory panel.

Google staff condemn treatment of temporary workers in 'historic' show of solidarity

More than 900 employees have signed a letter criticizing the treatment of contractors, reports Julia Carrie Wong:

In March, Google abruptly shortened the contracts of 34 temporary workers on the "personality" team for the Google Assistant: the Alexa-like digital assistant that reads you the weather, manages your calendar, sends a text message, or hails an Uber through your phone or smart speaker.

The cuts, which affected contractors around the world, reignited debate over Google's extensive use of TVCs (temps, vendors, and contractors) amid a growing labor movement within the company. In recent months, Google's FTEs and TVCs have increasingly protested their working conditions and the ethics of their employer.

Elsewhere

Inside Grindr, fears that China wanted to access user data through HIV research

Tim Fitzsimons reports that, after its acquisition by a Chinese company, Grindr considered sharing users' HIV data with the country. It is not clear what China would have done with the data:

On July 3, 2018, Chen informed three Grindr employees that Yiming Shao, an HIV researcher at China's equivalent of the U.S. Centers for Disease Control and Prevention, was interested in working with Grindr. To facilitate this project, Chen suggested in an email to the employees, obtained by NBC News, that the company place a full-time "intern" at Grindr's headquarters in West Hollywood, California, to research and work on a paper about HIV prevention that would be published jointly with the company.

"They are attracted to our brand, reach and data," Chen wrote in the email. "We need to be extremely careful with your data request, Yiming is the head of HIV prevention in China CDC, we can not let people say that it's about sharing user data with the Chinese government." "

Quibi Taps Tom Conrad, a Snap and Pandora Alum, as Product Director

Tom Conrad, who led product at Snap, has taken on a similar role at Quibi, Jeffrey Katzenberg's short-form subscription video company.

Releases

WhatsApp launches a fact-checking service in India ahead of elections

WhatsApp is launching a fact-checking service in India before the country's next elections:

Reuters reports that users can now forward messages to the Checkpoint Tipline, where a team led by the local startup Proto will assess them and mark them as "true," "false," "misleading," or "disputed." These messages will also be used to build a database for studying and understanding the spread of misinformation. Elections in India begin on April 11, and final results are expected on May 23.

You've heard of fake news: what about fake gadgets? My colleague Ashley Carman has a great new series on YouTube, and in the first episode she covers the wild world of knockoffs. Take a look:

A gadget maker's worst nightmare …

Takes

Google's constant product shutdowns are damaging its brand

Google shut down Google+ and Inbox today, and Ron Amadeo is not happy about it:

We're 91 days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred in 2019, a Google-branded product, feature, or service has died, on average, about every nine days.

Some of these product shutdowns have transition plans, and some of them (like Google+) represent Google completely abandoning a user base. The details aren't crucial, though. What matters is that each of these actions has a negative consequence for the Google brand, and the nearly constant stream of shutdown announcements makes Google seem more unstable and unreliable than ever before. Yes, there was the one time Google killed Google Wave nine years ago, or when it took away Google Reader six years ago, but things have never been this bad.

And finally …

Google begins shutting down its failed Google+ social network

People are still starting social networks every day, and for founders wondering whether they are seeing meaningful traction, I invite you to see whether your app passes what I like to call the Google+ test for user engagement. (Emphasis mine.)

Google has acknowledged that Google+ did not meet the company's expectations for user growth and overall traction. "While our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps," Google's Ben Smith wrote in October. He then revealed a pretty damning statistic about the current state of the service: "90 percent of Google+ user sessions are less than five seconds."

Rest in peace, Google+!!!

Talk to me

Send me tips, comments, questions, and your YouTube corrections: [email protected]
