Intel has revealed a new deepfake detector, FakeCatcher, that claims near-instant results with a 96% accuracy rate, using what the company believes is a far more reliable approach than trying to reverse engineer a fake from its raw data. What sets FakeCatcher apart, Intel says, is that it looks for signs of authenticity in the video itself rather than combing the underlying data for traces of manipulation.

The system analyzes potential deepfakes in real time, combining face detection models with other algorithms. In particular, it looks for (and can allegedly detect) “blood flow”: the subtle pixel-level color changes that occur as blood circulates through a real person’s face. FakeCatcher picks up these pulses, translates them into visible maps, and checks them for inconsistencies, all within a very short time. (A rough sketch of the underlying idea appears at the end of this piece.)

That real-time capability is another area where Intel believes FakeCatcher will make a big difference, as other deepfake detectors can take hours to process a clip and often require the questionable media to be uploaded first. Intel hopes FakeCatcher will help keep businesses, social media platforms, and individual users from falling for deepfakes, whether by blocking fabricated videos from being uploaded or by flagging them before they are shared.

Intel Senior Staff Research Scientist Ilke Demir will share more about FakeCatcher and its potential in a Twitter Spaces event this Wednesday, November 16, at 2:30 p.m. ET.
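The “blood flow” cue Intel describes is based on the idea behind photoplethysmography (PPG): as the heart pumps, facial skin changes color very slightly, and those fluctuations can be recovered from ordinary video. The sketch below illustrates that general idea only; it is not Intel’s implementation. The OpenCV Haar-cascade face detector, the green-channel averaging, the 0.7–4 Hz heart-rate band, the energy threshold, and the file name are all illustrative assumptions (FakeCatcher builds spatiotemporal PPG maps and classifies them with deep learning, which is beyond this sketch).

```python
# Minimal, illustrative sketch of PPG-style analysis on a video:
# track tiny color fluctuations in a detected face across frames and
# check whether they form a plausible heartbeat signal.
# NOT Intel's FakeCatcher; detector, channel, band, and threshold are assumptions.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def extract_ppg_signal(video_path: str) -> tuple[np.ndarray, float]:
    """Return the mean green-channel intensity of the first detected face
    in each frame, plus the video's frame rate."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w]
        # Green channel (index 1 in BGR) carries most of the pulse signal.
        samples.append(roi[:, :, 1].mean())
    cap.release()
    return np.array(samples), fps

def has_plausible_pulse(signal: np.ndarray, fps: float) -> bool:
    """Band-pass to typical heart-rate frequencies (0.7-4 Hz, ~40-240 bpm)
    and check for a dominant periodic component."""
    if fps <= 8 or len(signal) < int(fps * 5):  # need a few seconds of face frames
        return False
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 4.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal - signal.mean())
    freqs, power = periodogram(filtered, fs=fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    # Heuristic: assume a real pulse concentrates energy at one frequency in the band.
    return power[band].max() > 5 * power[band].mean()

# Usage (hypothetical file name):
# signal, fps = extract_ppg_signal("suspect_clip.mp4")
# print("plausible blood-flow signal" if has_plausible_pulse(signal, fps) else "no clear pulse")
```

A single averaged signal like this is only the crudest form of the technique; per Intel’s description, FakeCatcher instead builds maps of such signals across many facial regions and lets a trained model judge whether their spatial and temporal patterns are consistent with a real person.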