To make a convincing deepfake, an AI-generated forgery of a video or audio clip, you usually need a neural network trained on a large amount of reference material. In general, the larger the set of photos, videos, or sound recordings, the more disturbingly accurate the result. But researchers at Samsung's AI Center have now devised a method to train a model to animate a face from an extremely limited data set: a single photo. And the results are surprisingly good.
The researchers achieve this effect, as first reported by Motherboard, by training their algorithm on "landmark" facial features (the general shape of the face, the eyes, the shape of the mouth, and so on) extracted from a public repository of 7,000 celebrity images gathered from YouTube.
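To make the landmark idea concrete, here is a minimal sketch of what a landmark-based representation looks like. The coordinates below are invented for illustration, and the function name is hypothetical; a real pipeline would extract roughly 68 such points per video frame with a face-landmark detector, then feed the resulting sketch to a generator network as conditioning input.

```python
import numpy as np

def rasterize_landmarks(points, size=64):
    """Render normalized (x, y) landmark coordinates into a binary
    'landmark sketch' image, the kind of pose/expression input a
    talking-head generator can be conditioned on."""
    img = np.zeros((size, size), dtype=np.uint8)
    for x, y in points:
        col = min(int(x * (size - 1)), size - 1)
        row = min(int(y * (size - 1)), size - 1)
        img[row, col] = 1
    return img

# Hypothetical landmarks in normalized [0, 1] coordinates:
# a crude face outline, two eyes, and a mouth.
landmarks = [
    (0.5, 0.1), (0.2, 0.5), (0.8, 0.5),    # brow and cheeks
    (0.35, 0.4), (0.65, 0.4),              # eyes
    (0.4, 0.75), (0.5, 0.8), (0.6, 0.75),  # mouth
    (0.5, 0.95),                           # chin
]
sketch = rasterize_landmarks(landmarks)
print(sketch.shape, int(sketch.sum()))
```

Because the landmarks abstract away identity and texture, the same sketch can drive motion in any target face, which is what lets the method animate a portrait it has never seen.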
From there, the model can map those features onto a photo to bring it to life. As the team demonstrates, the model even works on the Mona Lisa and other single-photo portraits. In the video, famous portraits of Albert Einstein, Fyodor Dostoevsky, and Marilyn Monroe come to life, as if they were Live Photos in your iPhone's camera roll.
As with most deepfakes, it is still easy to spot the seams at this stage: most of the faces are ringed with visual artifacts. But fixing that is probably a small hurdle compared to the feat of convincingly animating the Mona Lisa so that she looks like a living, breathing person.
Despite such flaws, fake videos and audio keep getting more realistic. If you need more proof, take a look at this strange AI-generated recreation of Joe Rogan's voice. As researchers continue to develop low-effort methods for producing high-quality fakes, there is concern that they will be used against people in the form of propaganda, or to depict people in situations they would object to, such as pornographic videos, the use the original deepfake software was built for. According to my colleague Russell Brandom, the potential political danger of deepfakes is real, but for now the concern is overblown.