If you've been following tech news for the past year, you've probably heard about deepfakes, the widely available, machine-learning-powered system for swapping faces in video. First reported by Motherboard at the end of 2017, the technology seemed like a terrifying omen after years of unsettling disinformation campaigns. The deepfake panic spread in the months that followed, with alarming articles from BuzzFeed (several times), The Washington Post (several times), and The New York Times (several more times). It's no exaggeration to say that many of journalism's most prominent writers and publications spent 2018 telling us this technology was an imminent threat to public discourse, if not to truth itself.
More recently, that alarm has spread to Congress. Senator Ben Sasse (R-NE) is currently promoting a bill to ban malicious use of the technology, describing it as something that "keeps the intelligence community up at night." To hear Sasse tell it, this video-manipulation software is dangerous on a geopolitical scale, demanding fast and decisive action from Congress.
But more than a year after the first fakes surfaced on Reddit, that threat has yet to materialize. We've seen plenty of public demonstrations, most notably a BuzzFeed video in which Jordan Peele impersonated former President Obama, but so far those have come from journalists rather than trolls. Twitter and Facebook have unmasked tens of thousands of fake accounts tied to troll campaigns, but so far, those accounts haven't produced a single deepfake video. The closest we've seen is a short-lived anti-Trump video in Belgium, but it was more a confusing political ad than a chaos campaign. (It was publicly sponsored by a well-known political group, for one thing, and was made with After Effects.) The expected wave of political deepfakes hasn't arrived, and increasingly, the panic around AI-assisted propaganda looks like a false alarm.

The silence is particularly striking because political trolls have never been more active. In the time deepfake technology has been available, disinformation campaigns have targeted the French elections, the Mueller investigation, and, more recently, the Democratic primary. Sectarian riots in Sri Lanka and Myanmar were fueled by false stories and rumors, often deliberately fabricated to stoke hatred against opposing groups. Troll campaigns out of Russia, Iran, and Saudi Arabia have run rampant on Twitter, trying to silence opposition and confuse adversaries.
In any of these cases, the attackers had the motive and the resources to produce a deepfake video. The technology is cheap, readily available, and technically straightforward. But given the option of fabricating video evidence, each group seems to have decided it wasn't worth the trouble. Instead, we saw news stories invented out of whole cloth, or real videos edited to take on a sinister meaning.
Why deepfakes haven't caught on as a propaganda technique is a good question. Part of the problem is that they're too easy to track. Existing deepfake architectures leave predictable artifacts in doctored video, which are easy for a machine-learning algorithm to detect. Some detection algorithms are publicly available, and Facebook has been using its own proprietary system to filter out doctored video since September. These systems aren't perfect, and new filter-dodging architectures appear regularly. (There's also the thorny political problem of what to do when a video trips the filter, since Facebook hasn't been willing to impose a blanket ban.)
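The artifact-detection idea described above can be illustrated with a toy sketch. To be clear, this is not any real detector, Facebook's or otherwise: it fabricates synthetic grayscale "frames," uses a crude high-frequency feature (a discrete Laplacian) as a stand-in for the blending artifacts deepfake generators leave behind, and learns a simple threshold to separate clean frames from doctored ones. Every function name and number below is invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frame(fake: bool, size: int = 64) -> np.ndarray:
    """Simulate a grayscale frame. 'Fake' frames get a pasted patch
    whose hard seam leaves extra high-frequency energy, a stand-in
    for the predictable artifacts real detectors look for."""
    x = np.linspace(0, 1, size)
    frame = np.outer(x, x) + 0.02 * rng.standard_normal((size, size))
    if fake:
        patch = 0.9 + 0.02 * rng.standard_normal((20, 20))
        frame[22:42, 22:42] = patch  # abrupt seam = detectable artifact
    return frame

def residual_energy(frame: np.ndarray) -> float:
    """Mean absolute discrete Laplacian: a crude high-frequency feature."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(np.abs(lap).mean())

# "Train": learn a decision threshold from labeled examples.
real_scores = [residual_energy(make_frame(False)) for _ in range(50)]
fake_scores = [residual_energy(make_frame(True)) for _ in range(50)]
threshold = (np.mean(real_scores) + np.mean(fake_scores)) / 2

def is_fake(frame: np.ndarray) -> bool:
    return residual_energy(frame) > threshold

# Evaluate on held-out frames.
tests = [(make_frame(f), f) for f in [False, True] * 25]
accuracy = float(np.mean([is_fake(fr) == label for fr, label in tests]))
print(f"toy detector accuracy: {accuracy:.2f}")
```

The point of the sketch is only that manipulation leaves statistical fingerprints a filter can key on, which is also why new architectures that suppress those fingerprints keep forcing detectors to adapt.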
But even with their limitations, deepfake filters might be enough to scare political trolls away from the tactic. Uploading an algorithmically manipulated video is likely to draw the attention of automated filters, while conventional film editing and outright lies won't. Why take the risk?
It's also not clear how useful deepfakes are for this kind of troll campaign. Most of the operations we've seen so far have been more about muddying the waters than producing convincing evidence for a claim. In 2016, one of the crudest examples of fake news was the Facebook-fueled report that Pope Francis had endorsed Donald Trump. It was widely shared and completely false, a perfect example of fake news run amok. But the story offered no real evidence for the claim, just a cursory article on an otherwise unknown website. It wasn't damaging because it was convincing; people simply wanted to believe it. If you already think Donald Trump is leading America down the path of Christ, it won't take much to convince you that the pope thinks so, too. If you're skeptical, a doctored video of a papal address probably won't change your mind.
That reveals some uncomfortable truths about the media, and about why the US was so susceptible to this kind of manipulation in the first place. We sometimes think of these troll campaigns as the informational equivalent of food poisoning: bad inputs into a credulous but basically rational system. But politics is more tribal than that, and news does far more than simply convey information. Most of the troll campaigns have centered on affiliation rather than information, nudging audiences into ever more fragmented camps. Video doesn't help with that; if anything, it hurts, by grounding the conversation in disprovable facts.
Real damage is still being done with deepfake techniques, but it's happening in pornography, not politics. That's where the technology started: Motherboard's initial story on deepfakes was about a Reddit user pasting Gal Gadot's face onto the body of a porn actress. Since then, the seedier corners of the web have kept inserting women into sex scenes without their consent. It's an ugly, harmful practice, particularly for women targeted by harassment campaigns. But most deepfake coverage has treated the pornography as an embarrassing sideshow to the real threat against political discourse. If the problem is non-consensual pornography, then the solution is one focused on individual harassers and victims, rather than the blanket ban Sasse has proposed. It also suggests the deepfake story is one of misogynist harassment rather than geopolitical intrigue, with far less obvious implications for national politics.
Some might argue the deepfake revolution simply hasn't happened yet. Like any technology, video-doctoring programs get a little more sophisticated every year, and the next version could always solve whatever problem is holding them back. As long as the bad actors and the tools are both out there, defenders say, eventually the two will overlap. The underlying logic is compelling: it's only a matter of time before reality catches up.
They may be right. A new wave of political deepfakes could emerge tomorrow and prove me wrong, but I'm skeptical. We've had the tools to doctor videos and photos for a long time. They've even been used in political campaigns before, most notably in a faked photo of John Kerry that circulated during the 2004 campaign. AI tools may make the process easier and more accessible, but it's already easy and accessible. As countless demonstrations have shown, deepfakes are now within reach of anyone who wants to stir up trouble online. It's not that the technology isn't ready yet. It just isn't useful.