Our machine-learning-based tools not only protect video from malicious tampering, but also support downgrading and declassifying video by automatically removing classified objects or scenes while preserving the video's authenticity. On the sample datasets we tested, our algorithm achieved 94% accuracy.
We developed a method to authenticate visual media that can track valid edits and identify tampering; it can also generate keys for media that our tampering-detection techniques have validated. A second toolset declassifies videos by blurring sensitive information, such as faces and text, and can automatically obscure that information in videos slated for publication. We also created tools for detecting deepfake videos, copy-move forgery, and spliced images.
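As an illustrative sketch of the redaction step, the snippet below obscures a sensitive region of a frame by pixelation (a common alternative to Gaussian blurring). The `pixelate_region` function and the fixed bounding box are hypothetical: in the actual toolset the boxes would come from a face or text detector, and the blurring method may differ.

```python
import numpy as np

def pixelate_region(frame, box, block=8):
    """Obscure a sensitive region (e.g. a detected face or text box)
    by replacing each block x block tile with its mean intensity.

    frame: H x W grayscale uint8 array (modified in place).
    box:   (y0, y1, x0, x1) bounding box; assumed to come from a
           detector in a real pipeline.
    """
    y0, y1, x0, x1 = box
    roi = frame[y0:y1, x0:x1].astype(float)
    out = roi.copy()
    # Average each tile so fine detail inside the box is destroyed.
    for y in range(0, roi.shape[0], block):
        for x in range(0, roi.shape[1], block):
            tile = roi[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile.mean()
    frame[y0:y1, x0:x1] = out.astype(np.uint8)
    return frame

# Usage: redact a hypothetical 8x8 "face" region of a 16x16 frame.
frame = np.zeros((16, 16), dtype=np.uint8)
frame[0:8, 0:8] = np.arange(64, dtype=np.uint8).reshape(8, 8)
pixelate_region(frame, (0, 8, 0, 8), block=8)
```

Applying this per frame over detector output would yield the automatic obscuring behavior described above; pixels outside the box are untouched, so the rest of the frame remains verifiable.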