Facebook AI Research (FAIR) has developed a state-of-the-art de-identification system that works not only on still images but also on live video. Using machine learning, it alters key facial features of the subject so that facial recognition systems fail to identify that person.
VentureBeat's report further quoted the paper, which explains the company's approach: "Face recognition can lead to loss of privacy and face replacement technology may be misused to create misleading videos. Recent world events concerning the advances in and abuse of face recognition technology invoke the need to understand methods that successfully deal with de-identification. Our contribution is the only one suitable for video, including live video, and presents a quality that far surpasses the literature methods."
The AI model is trained as an adversarial autoencoder paired with a classifier network. Tel Aviv University professor Lior Wolf told VentureBeat in a phone interview that an autoencoder of this kind could also mask other kinds of identifiable information about a person, such as their voice or online behavior.
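To make the training objective concrete, here is a toy numerical sketch of the adversarial de-identification idea. This is an assumption-laden illustration, not FAIR's actual model: the "recognizer" is a frozen random linear map standing in for a face-recognition network, and vectors stand in for images. The loss keeps the output close to the input (reconstruction) while penalizing similarity between the output's identity embedding and the source identity (adversarial term).

```python
import numpy as np

# Toy sketch -- NOT FAIR's architecture or losses; names and shapes
# here are illustrative assumptions.

rng = np.random.default_rng(0)

def identity_embedding(x, W):
    # Stand-in for a face-recognition network: a fixed linear map.
    return W @ x

def deid_loss(x, x_out, W, lam=1.0):
    # Reconstruction term: output should stay perceptually close to input.
    recon = np.mean((x - x_out) ** 2)
    # Adversarial term: identity similarity between input and output,
    # measured as cosine similarity of recognizer embeddings.
    e_src = identity_embedding(x, W)
    e_out = identity_embedding(x_out, W)
    cos = e_src @ e_out / (np.linalg.norm(e_src) * np.linalg.norm(e_out))
    # The de-identifier minimizes both: look the same, classify differently.
    return recon + lam * cos

x = rng.normal(size=16)          # stand-in "face" vector
W = rng.normal(size=(8, 16))     # frozen recognizer weights

# An unchanged output has zero reconstruction error but maximal
# identity similarity (cosine = 1), so its loss is exactly lam.
print(deid_loss(x, x.copy(), W))   # 1.0
```

A trained de-identifier would be the network that, given `x`, produces the `x_out` minimizing this kind of loss over a dataset, with the recognizer held fixed.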
At present, Facebook has no way to use this technology in any of its products, but it could be applied in the future to protect individuals' privacy and prevent their likeness from being used in misleading deepfake videos.
The model is a sophisticated, one-of-a-kind system. Many companies, startups, and law enforcement agencies are working on tools to detect deepfake videos and limit their misuse.