Intel has developed a new deepfake detection tool that uses facial blood flow and is 96% accurate.
Deepfakes have become a technological scourge of our time. A deepfake is a fabricated video or photo, often featuring the faces of unsuspecting private citizens or even celebrities. They are made as jokes, for cheap advertising, or for revenge, most notoriously by pasting a victim's face onto pornographic footage.
Many apps let users create deepfakes through AI-based processes, but the technology has an uglier side: beyond fake revenge porn, it has also been adopted by scammers.
The bigger concern, though, is how deepfakes can spread misinformation, as with the fake video of Ukrainian president Volodymyr Zelensky announcing his country's surrender that circulated on social media this year.
Various organizations and private companies, including Facebook, Adobe, and Google, have built tools designed to detect deepfakes. A new solution from Intel and Intel Labs, aptly named FakeCatcher, takes a unique approach: blood flow analysis.
Instead of following the usual method of examining a video file for telltale signs of manipulation, Intel's platform uses deep learning to analyze the subtle color changes in a face caused by blood flowing through its vessels, a technique called photoplethysmography.
FakeCatcher looks for these blood-flow signals in the pixels of a face, something deepfake generators have not yet managed to replicate, and gathers them across multiple frames before determining whether the video in question is real or fake.
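Intel has not published FakeCatcher's implementation, but the core photoplethysmography idea can be illustrated with a toy sketch: average the green channel over a face region frame by frame, then look for a dominant heart-rate frequency in that signal. Everything below (the synthetic clip, the fixed region, the 30 fps assumption) is invented for illustration only.

```python
import numpy as np

def ppg_signal(frames, region):
    """Mean green-channel intensity over a fixed face region, per frame.

    Real rPPG pipelines track the face across frames; a static region
    keeps this toy example self-contained. `frames` has shape
    (T, H, W, 3); `region` is a (row_slice, col_slice) pair.
    """
    ys, xs = region
    return np.array([f[ys, xs, 1].mean() for f in frames])

# Synthetic 5-second clip at 30 fps: a faint 1.2 Hz (~72 bpm) green
# fluctuation mimics the color changes caused by blood flow. A deepfake
# lacking this physiological signal would show no such spectral peak.
rng = np.random.default_rng(0)
t = np.arange(150)
frames = rng.normal(128.0, 1.0, (150, 64, 64, 3))
frames[..., 1] += 2.0 * np.sin(2 * np.pi * 1.2 * t / 30)[:, None, None]

signal = ppg_signal(frames, (slice(16, 48), slice(16, 48)))
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(signal), d=1 / 30)  # Hz, at 30 fps
peak_hz = freqs[spectrum.argmax()]
print(f"dominant frequency: {peak_hz:.2f} Hz (~{peak_hz * 60:.0f} bpm)")
```

A real detector would feed such spatio-temporal signal maps into a trained classifier rather than simply peak-picking, but the sketch shows why a plausible heartbeat is hard for a generator to fake by accident.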
Intel says that, combined with eye-gaze detection, the technique can determine whether a video is real within milliseconds and with an accuracy rate of 96%. The company adds that the platform runs up to 72 concurrent detection streams on 3rd Gen Xeon Scalable processors and operates through a web interface.
A real-time solution with such a high accuracy rate could make a huge difference in the online fight against disinformation. On the other hand, it could also push deepfakes to become even more realistic, as creators will always try to outwit the detection systems.