Intel has built a new deepfake-detection tool that analyzes facial blood flow and claims 96% accuracy.
Deepfakes have become a technological scourge of our era. The term refers to fabricated videos or photos that often exploit the faces of unsuspecting citizens or celebrities. They are used as a joke, for cheap advertising, or for revenge, most notoriously by placing a victim's face into pornographic scenes.
Many apps now let users create deepfakes with AI-based processes. But the technology has an additional, unpleasant side: beyond fake revenge porn, it has also been exploited by scammers.
The bigger concern, though, is how deepfakes could fuel the spread of misinformation. One example is the fake video of Ukraine's president Volodymyr Zelensky announcing his country's surrender, which circulated on social media this year.
Various organizations and private companies, including Facebook, Adobe, and Google, have created tools designed to detect deepfakes. A new offering from Intel and Intel Labs, aptly named FakeCatcher, takes a unique approach: analyzing blood flow.
Instead of following the usual method of examining a video file for telltale signs of manipulation, Intel's platform uses deep learning to analyze the subtle color changes in faces caused by blood flowing through the veins, a technique called photoplethysmography. FakeCatcher looks at these blood-flow signals in the pixels of an image, something deepfake generators haven't managed to replicate yet, and aggregates them across multiple frames. It then determines whether the video in question is real or fake.
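To illustrate the underlying idea, here is a minimal sketch of remote photoplethysmography (rPPG): blood flow causes tiny periodic color changes in facial skin, and a dominant frequency in the plausible heart-rate band can be recovered from frame-to-frame pixel averages. This is not Intel's implementation; the face-region frames below are synthetic, and the green-channel averaging and FFT approach are just one common, simplified way to extract the pulse signal.

```python
import numpy as np

def estimate_pulse_hz(frames, fps):
    """Estimate the dominant pulse frequency from a stack of face-ROI frames.

    frames: array of shape (n_frames, height, width, 3), RGB values.
    fps: capture frame rate in frames per second.
    """
    # Average the green channel over the region for each frame; green is
    # commonly used in rPPG because it carries a strong blood-volume signal.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()  # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible human heart-rate band (0.7-4 Hz, ~42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: 10 s of 30 fps "video" of an 8x8 skin patch with a
# 1.2 Hz (72 bpm) pulse superimposed on the green channel, plus noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[:, :, :, 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
frames += np.random.default_rng(0).normal(0, 0.05, frames.shape)
print(estimate_pulse_hz(frames, fps))  # recovers a value near 1.2 Hz
```

A real pipeline would first detect and track the face to get a stable skin region, and a deepfake detector would then test whether such a physiologically consistent signal is present at all; fully generated faces tend to lack it.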
Intel says that, combined with eye-gaze-based detection, the technique can determine whether a video is real within milliseconds and with an accuracy rate of 96%. The company added that the platform runs on 3rd Gen Xeon Scalable processors, supports up to 72 concurrent detection streams, and operates through a web interface.
A real-time solution with such a high accuracy rate could make a huge difference in the online war against disinformation. On the other hand, it could also drive deepfakes to become even more realistic, as creators will keep trying to fool detection systems.