Artificial intelligence is evolving at an alarming rate, and it can be hard to keep track of the progress. One area where the advances are easy to grasp, though, is the generation of fake faces by neural networks: we all know what a face should look like, so we can judge the results at a glance.
The image above shows four years of progress in AI image generation. The coarse black-and-white face on the left was published in 2014, as part of a landmark paper that introduced the AI tool known as the generative adversarial network (GAN). The color faces on the right come from a paper published earlier this month, which uses the same basic method but produces images of strikingly different quality.
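The adversarial idea behind a GAN is that two networks are trained against each other: a discriminator learns to tell real images from fakes, while a generator learns to fool it. The core of that tug-of-war is the pair of opposing loss functions. A minimal sketch of the classic losses (illustrative only; the actual networks and training details in the papers are far more involved):

```python
import math

def discriminator_loss(d_real, d_fake):
    # The discriminator wants to score real images near 1
    # and generated (fake) images near 0.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants the discriminator to score its
    # fakes as real, i.e. push d_fake toward 1.
    return -math.log(d_fake)
```

When the generator succeeds and the discriminator rates a fake as almost certainly real, the generator's loss approaches zero; training alternates between minimizing these two objectives.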
These realistic faces are the work of researchers at Nvidia. In a paper published publicly last week, they explain how they modified the basic GAN architecture to create these images. Take a look at the pictures below: could you tell the difference if you didn't know they were fake?
What's particularly interesting is that these fake faces can also be easily customized. Nvidia's engineers incorporated a method known as style transfer, in which the characteristics of one image are blended with another. You may recognize the term from the image filters that have become widespread in recent years in apps like Prisma and Facebook, which can make photos look like Impressionist or Cubist paintings.
Nvidia's researchers applied style transfer to faces, allowing them to customize them to an impressive degree. You can see this in action in the grid below: source images of real people (top row) take on the facial attributes of other people (right-hand column), blending features such as skin and hair so that, in effect, an entirely new person is created in the process.
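In the paper's style-based generator, a style vector is fed into each layer of the network, with coarse layers controlling attributes like pose and face shape and fine layers controlling details like skin and hair color. Mixing two faces then amounts to choosing which source supplies the styles for which layers. A toy sketch of that idea (the function name, data shapes, and crossover point are illustrative, not taken from the paper):

```python
def mix_styles(style_a, style_b, crossover):
    # style_a, style_b: per-layer style vectors for two source
    # faces, ordered from coarsest layer to finest.
    # Layers before `crossover` (pose, face shape) come from A;
    # layers from `crossover` on (skin, hair) come from B.
    return style_a[:crossover] + style_b[crossover:]

# Toy usage: four layers, crossover after the two coarse layers.
mixed = mix_styles([[1], [1], [1], [1]], [[2], [2], [2], [2]], 2)
```

The result is a single set of per-layer styles that drives the generator, so the output face inherits structure from one person and surface detail from another.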
Of course, the ability to create realistic fake AI faces raises troubling questions. (How long before stock photo models become obsolete, for one?) Experts have spent the last couple of years raising awareness of how AI fakery could affect society. These tools can be used for misinformation and propaganda, and could undermine public trust in visual evidence, a trend that could damage the justice system as well as politics. (Unfortunately, these issues are not discussed in the Nvidia paper, and when we contacted the company, it said it could not talk about the work until the peer review was done.)
These warnings shouldn't be ignored. As the use of deepfakes to create non-consensual pornography shows, there will always be people willing to use these tools in questionable ways. But at the same time, the "end of truth" doomsaying is overblown, a point that has been made within the AI community: you cannot yet doctor images however you like with this kind of fidelity. There are serious limitations in terms of expertise, time, and compute; Nvidia's researchers, for instance, trained their model for a week on eight Tesla GPUs.
There are also telltale clues for spotting fakes. In a recent blog post, artist and coder Kyle McDonald went through a number of them. Hair, for example, is very difficult to fake convincingly: it often looks too regular, as if painted on with a brush, or too blurred, melting into someone's face. Similarly, AI generators don't really understand human facial symmetry.
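The symmetry tell hints at how a crude automated check might work: compare an image with its own mirror reflection and measure how much they disagree. A toy illustration of that idea (not from McDonald's post; a real detector would first align and crop the face, and would use far richer features):

```python
def asymmetry_score(img):
    # img: a 2D list of grayscale pixel values (rows of numbers).
    # Flip each row left-to-right and accumulate the per-pixel
    # difference; a perfectly symmetric image scores 0.
    total = 0
    count = 0
    for row in img:
        mirrored = row[::-1]
        for a, b in zip(row, mirrored):
            total += abs(a - b)
            count += 1
    return total / count
```

A high score on a region that should be roughly symmetric, such as a frontal face, would be one weak signal among many, not proof of fakery on its own.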
But as the faces at the top of this article show, these hints are easy to miss. And as the Nvidia study demonstrates, AI is moving quickly in this area: it may not take long before researchers create algorithms that avoid these tells altogether.
Thankfully, experts are already thinking about new ways to authenticate digital photos. Some solutions have already been launched, such as camera apps that stamp pictures with geocodes to verify when and where they were taken. Clearly, the battle between AI fakery and image authentication will go on for decades. And for now, AI is firmly in the lead.
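Authentication schemes of this kind work by cross-checking a photo's claimed capture metadata for consistency. A minimal sketch of one such check (the rule and the 24-hour threshold are invented for illustration; real systems also use cryptographic signing of the metadata):

```python
from datetime import datetime, timedelta

def plausible_capture(claimed_time, upload_time,
                      max_delay=timedelta(hours=24)):
    # A photo stamped as captured in the future, or captured long
    # before it was uploaded, fails the consistency check.
    return claimed_time <= upload_time <= claimed_time + max_delay

# Toy usage: a photo uploaded an hour after its claimed capture time.
shot = datetime(2018, 12, 1, 12, 0)
ok = plausible_capture(shot, shot + timedelta(hours=1))
```

Checks like this cannot prove an image is genuine, but they raise the cost of passing off a fabricated picture as a documented capture.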