Facebook trained AI to fool facial recognition systems, and it works on live video


Facebook is still embroiled in a multimillion-dollar lawsuit over its facial recognition practices, but that hasn't stopped its artificial intelligence research division from developing technology to combat the very misdeeds the company is accused of. According to VentureBeat, Facebook AI Research (FAIR) has developed a state-of-the-art "de-identification" system that works on video, including live video. It works by altering key facial features of a video subject in real time using machine learning, tricking a facial recognition system into misidentifying the subject.
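FAIR has not published code for this system, but the general idea described above can be sketched in a few lines. The following is a minimal illustration, not FAIR's actual model: a small network (here a hypothetical `DeidNet` with made-up layers and a made-up blend factor) predicts a subtle residual for the face region of each frame, so the altered crop still looks like the same person to a human while throwing off a recognition model.

```python
# Minimal sketch of per-frame "de-identification" (illustrative, not FAIR's method).
import torch
import torch.nn as nn

class DeidNet(nn.Module):
    """Hypothetical encoder-decoder that predicts a small residual for a face crop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # bounded residual
        )

    def forward(self, face):
        # Keep the perturbation small so the face still looks unchanged to humans.
        return (face + 0.05 * self.net(face)).clamp(0.0, 1.0)

deid = DeidNet().eval()

def process_frame(frame: torch.Tensor, face_box: tuple) -> torch.Tensor:
    """Apply the de-identification pass to one video frame (C, H, W in [0, 1])."""
    x0, y0, x1, y1 = face_box
    face = frame[:, y0:y1, x0:x1].unsqueeze(0)
    with torch.no_grad():
        altered = deid(face).squeeze(0)
    out = frame.clone()
    out[:, y0:y1, x0:x1] = altered  # paste the altered crop back into the frame
    return out

# Example: a dummy 256x256 frame with a face assumed at a fixed bounding box.
frame = torch.rand(3, 256, 256)
protected = process_frame(frame, (64, 64, 192, 192))
```

Running this per frame is what makes the approach viable for live video; a real system would also need a face detector to supply the bounding box and a trained model rather than the random weights used here.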

This kind of de-identification technology has existed before, and entire companies, such as Israeli AI and privacy firm D-ID, are dedicated to providing it for still images. There is also a whole category of facial recognition-fooling imagery you can wear yourself, called adversarial examples, which works by exploiting weaknesses in how computer vision software has been trained to identify certain characteristics. Take, for example, this pair of sunglasses with an adversarial pattern printed on it that can make a facial recognition system think you are actress Milla Jovovich.
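To give a sense of what "exploiting weaknesses in how the software was trained" means, below is a toy sketch of the classic fast gradient sign method (FGSM), a well-known adversarial-example technique. It is not the method behind the sunglasses attack, and the stand-in classifier and epsilon value are assumptions for illustration: the idea is simply to nudge each pixel in the direction that most increases the model's error, so the image looks unchanged to people but gets mislabeled by the model.

```python
# Toy FGSM-style adversarial example (illustrative; random-weight classifier).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier; a real attack would target the deployed recognition model.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))

def fgsm(image: torch.Tensor, true_label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (C, H, W in [0, 1])."""
    x = image.clone().unsqueeze(0).requires_grad_(True)
    loss = F.cross_entropy(classifier(x), torch.tensor([true_label]))
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return adv.squeeze(0).detach()

image = torch.rand(3, 64, 64)
adversarial = fgsm(image, true_label=3)
# Compare predicted labels before and after the perturbation.
print(classifier(image.unsqueeze(0)).argmax().item(),
      classifier(adversarial.unsqueeze(0)).argmax().item())
```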

But that kind of facial recognition thwarting generally means altering a photograph or a still image captured from a security camera or some other source after the fact, or, in the case of adversarial examples, preparing ahead of time to fool the system. Facebook's research reportedly does similar work in real time and on video footage, both pre-captured and live. That's a first for the industry, FAIR says, and good enough to stand up to sophisticated facial recognition systems. You can see an example of it in action in this YouTube video, which is unlisted and therefore can't be embedded elsewhere.


Image: Facebook AI Research

"Facial recognition can lead to loss of privacy and facial replacement technology can be misused to create deceptive videos," reads the newspaper explaining the company's approach, as quoted by VentureBeat “Recent global events related to advances and abuse of facial recognition technology invoke the need to understand methods that successfully address deidentification. Our contribution is the only one suitable for video, including live video, and presents a quality that far exceeds the methods of literature. ”

Facebook apparently does not intend to use this technology in any of its commercial products, VentureBeat reports. But the research could influence future tools developed to protect people's privacy and, as the paper's mention of "deceptive videos" highlights, to prevent someone's likeness from being used in deepfake videos.


Image: Facebook AI Research

The AI industry is currently working on ways to combat the spread of deepfakes and the increasingly sophisticated tools used to create them. This is one such method, and both lawmakers and technology companies are looking for others, such as deepfake detection software and regulatory frameworks to control the spread of fake videos, images, and audio.

The other concern FAIR's research addresses is facial recognition itself, which is also largely unregulated and causes concern among legislators, academics, and activists who fear it may violate human rights if it continues to be deployed without oversight by police, governments, and corporations.
