Earlier this month, you might have seen ThisPersonDoesNotExist.com, a website that uses artificial intelligence to generate stunningly realistic fake faces. Well, here's the sequel: WhichFaceIsReal.com, which lets you test your ability to distinguish AI-generated fakes from the real thing. Just go to the site and click on the face you think belongs to a real person.
WhichFaceIsReal.com also has a higher purpose. It was built by two scholars at the University of Washington, Jevin West and Carl Bergstrom, who study how information spreads through society. They believe the rise of AI-generated fakes could be a real problem, eroding public trust in evidence, and they want to educate the public about it.
"When this new technology emerges, the most dangerous time is when technology is there, but the public does not recognize it," Bergstrom says. The Verge . "That's when it can be used most effectively."
"What we want to do is let people know that this technology is out there and educate the public." Eventually, most people find that they can use Photoshop images.
Both sites use a machine learning method known as a generative adversarial network (or GAN, for short) to create their fakes. These networks operate by ingesting huge amounts of training data (in this case, photographs of real faces) and learning to produce convincing examples of their own.

The reason GANs are so good is that they check their own work. One part of the network generates faces while another part compares them against the training data. If it can spot the difference, the generator is sent back to the drawing board to improve its work. Think of it like a strict art teacher who won't let you leave the class until your charcoal portrait has the right number of eyes.
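That generator-versus-critic loop can be sketched in code. The toy below is a hypothetical minimal GAN, not the model behind the site (face generators like this are far larger networks): the "real data" is just numbers drawn from a Gaussian around 4.0, the generator is a one-parameter-pair linear map of noise, and the discriminator is a tiny logistic classifier. All names and hyperparameters here are illustrative choices, but the alternating updates are the core adversarial idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator: g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator: d(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(3000):
    real = rng.normal(4.0, 0.5, size=32)   # "training data"
    z = rng.normal(0.0, 1.0, size=32)      # generator's noise input
    fake = a * z + b                       # generator's current fakes

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    # (gradient ascent on log d(real) + log(1 - d(fake))).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: the "strict art teacher" sends feedback. Adjust
    # (a, b) so the discriminator scores fakes higher
    # (ascent on log d(fake), the non-saturating generator loss).
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(f"generator's expected output mean after training: {b:.2f} "
      f"(real data mean is 4.0)")
```

After enough rounds, the generator's output distribution drifts toward the real one, at which point the discriminator can no longer reliably tell them apart, which is exactly why these fakes are so hard to catch.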
These techniques can be used to manipulate audio and video as well as still images. There are limits to what they can do (you can't type in a caption describing a picture you want and have it conjured into existence), but deepfakes can turn videos of politicians into puppets, and can even turn you into a great dancer. And there are plenty of malicious uses, like spreading misinformation: after a terrorist attack, for example, AI could be used to create a fake picture of a perpetrator that then circulates on social networks.
In scenarios like these, journalists usually turn to tools like Google's reverse image search to verify an image's source. But that doesn't work on an AI fake. "If you want to inject misinformation into a situation like that and you post a picture of the perpetrator that's really someone else, it will be corrected very quickly," Bergstrom says. "But what if you use a picture of someone who doesn't exist at all?"
They note that academics and researchers are developing tools that can spot deepfakes. "Right now it's fairly easy," says West, pointing to the telltale flaws that distinguish AI-generated faces from real ones: asymmetric features, misaligned teeth, unrealistic hair, and so on.
"But these fakes will keep getting better, and in three years [you] won't be able to tell," West says. And when that happens, knowing will be half the battle. "Our message is not that people shouldn't believe anything," Bergstrom says. "Our message is the opposite: it's don't be credulous."