However fun and entertaining the deepfake technology may appear, concerns have arisen that it could be used for malicious purposes, such as spreading misleading or defamatory information.
Last year saw the rise of the threat of deepfakes – a technique that uses machine learning to combine and superimpose existing images and video onto source footage, creating hyper-realistic but fake content.
AI buffs have split into two major camps – one working to make such images and videos more realistic, and the other developing tools that can tell users whether a video has been manipulated.
A team of researchers from the University of California, Riverside and the R&D firm Mayachitra has developed a novel deep-learning architecture that can detect content-changing manipulation.
This is not the first study on the problem, but this neural network appears to have gone further in recognising deepfakes than its predecessors.
Different manipulation techniques may produce video that looks convincing to the human eye, but the algorithm can detect minor distortions, such as shearing and compression.
It combines resampling features, a long short-term memory (LSTM) based network, and an encoder-decoder architecture to analyse videos pixel by pixel, and is said to be capable of spotting whole patches of footage that have been doctored.
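The paper's full architecture is not reproduced here, but the sketch below shows how an encoder-decoder network with an LSTM over per-frame features could produce a per-pixel manipulation mask of the kind described above. The class name, layer sizes and choice of PyTorch are illustrative assumptions, not the researchers' published code.

```python
# Hypothetical sketch of an encoder-decoder + LSTM localisation network,
# loosely following the description above. Layer sizes and names are
# assumptions for illustration only.
import torch
import torch.nn as nn

class ManipulationLocaliser(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        # Encoder: turns each frame into a coarse feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # LSTM: models how features at each location evolve over time,
        # so temporally inconsistent (resampled/doctored) regions stand out.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        # Decoder: upsamples back to a per-pixel "manipulated?" mask.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden_size, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))       # (b*t, 64, H/4, W/4)
        _, fc, fh, fw = feats.shape
        # Treat each spatial location as a sequence over time for the LSTM.
        seq = feats.reshape(b, t, fc, fh * fw).permute(0, 3, 1, 2)  # (b, fh*fw, t, fc)
        out, _ = self.lstm(seq.reshape(b * fh * fw, t, fc))         # (b*fh*fw, t, hidden)
        out = out.reshape(b, fh * fw, t, -1).permute(0, 2, 3, 1)    # (b, t, hidden, fh*fw)
        mask = self.decoder(out.reshape(b * t, -1, fh, fw))         # (b*t, 1, H, W)
        return torch.sigmoid(mask).reshape(b, t, 1, h, w)

# Example: score a short clip of 8 RGB frames at 64x64 resolution.
clip = torch.randn(1, 8, 3, 64, 64)
masks = ManipulationLocaliser()(clip)   # per-pixel manipulation probabilities
```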
“Many [previous] deepfake detectors rely on visual quirks in the faked video, like inconsistent lip movement or weird head pose,” Brian Hosler, a member of the project, told Digital Trends.
“However, researchers are getting better and better at ironing out these visual cues when creating deepfakes. Our system uses statistical correlations in the pixels of a video to identify the camera that captured it. A deepfake video is unlikely to have the same statistical correlations in the fake part of the video as in the real part, and this inconsistency could be used to detect fake content.”
“We plan to release a version of our code, or even an application, to the public so that anyone can take a video, and try to identify the camera model of origin,” Hosler said. “The tools we make, and that researchers in our field make, are often open-source and freely distributed.”
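The camera-consistency idea Hosler describes can be illustrated with a toy check: estimate a high-frequency noise residual for each patch of a frame and flag patches whose residual correlates poorly with a frame-wide reference. Everything below – the function names, the Gaussian denoiser and the threshold – is a simplified, hypothetical sketch of that principle, not the team's actual detector.

```python
# Toy illustration of camera-noise consistency checking; not the
# UC Riverside/Mayachitra implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(patch):
    """High-frequency residual left after denoising; carries the
    camera's characteristic pixel-level correlations."""
    return patch - gaussian_filter(patch, sigma=1.0)

def correlation(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def flag_inconsistent_patches(frame, patch=32, threshold=0.02):
    """Split a greyscale frame into patches and flag those whose noise
    residual correlates poorly with the frame-wide average residual."""
    h, w = frame.shape
    residuals = {}
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            residuals[(y, x)] = noise_residual(frame[y:y+patch, x:x+patch])
    reference = np.mean(list(residuals.values()), axis=0)  # crude fingerprint
    return [pos for pos, r in residuals.items()
            if correlation(r, reference) < threshold]

# Example call on synthetic data, just to show the interface; on a real
# spliced frame the doctored patches would be the low-correlation outliers.
frame = np.random.rand(128, 128)
suspect_patches = flag_inconsistent_patches(frame)
```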
It is not only scientists who are worried about the spread of viral deepfakes.
In late June, an anonymous developer presented the app DeepNude, which could turn photos of women into fake nudes in a couple of clicks. The app was shut down soon afterwards but is currently up for grabs at an online auction.
Also in June, US lawmakers introduced a bipartisan bill, titled the Deepfake Report Act of 2019, which would require the Department of Homeland Security to conduct an annual study of deepfakes and similar content.
It would also direct law enforcement officials to assess the technologies used to create deepfakes and to recommend countermeasures.
“Artificial intelligence presents enormous opportunities for improving the world around us but also poses serious challenges,” said bill co-sponsor Senator Cory Gardner.
“Deepfakes can be used to manipulate reality and spread misinformation quickly. In an era where we have more information available at our fingertips than ever, we have to be vigilant about making sure that information is reliable and true in whichever form it takes. The United States needs to have a better understanding of how to approach the issues with technologies like deepfakes and this bipartisan legislation is a crucial step in that direction.”
Source: sputniknews.com