A college research project that uses deep learning and computer vision to detect manipulated or deepfake images and videos — helping fight misinformation and digital fraud.
Deepfake technology has become increasingly sophisticated, making it difficult to distinguish real media from manipulated content. Our challenge was to build a reliable detection system that could analyze images and video frames for signs of AI manipulation.
The project aimed to create an accessible tool that could be used by anyone — from journalists to everyday users — to verify the authenticity of digital media.
We trained a convolutional neural network (CNN) on thousands of real and deepfake samples, optimizing for accuracy and speed. The model analyzes facial inconsistencies, lighting artifacts, and pixel-level anomalies to make predictions.
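The exact architecture isn't specified above, so as a rough sketch, a binary deepfake classifier of this kind might look like the following (hypothetical layer sizes; assumes 128×128 RGB face crops and a sigmoid output for the probability of manipulation):

```python
# Hypothetical sketch of a binary deepfake-detection CNN, not the
# project's actual architecture. Input: 128x128 RGB face crops.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detector(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(media is fake)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Smoke-test the untrained model on a dummy batch.
model = build_detector()
probe = model.predict(np.zeros((1, 128, 128, 3), "float32"), verbose=0)
```

Training would then call `model.fit` on the labeled real/fake samples; the sigmoid output feeds directly into the confidence score described below.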
Custom CNN trained on real and deepfake datasets, achieving high accuracy in detecting manipulated facial features.
Frame-by-frame analysis of uploaded videos and images, detecting inconsistencies in lighting, shadows, and facial geometry.
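As a toy illustration of one lighting cue (not the project's actual algorithm): comparing the mean luminance of the left and right halves of a face crop can flag strong lighting asymmetry, one of the artifacts detectors look for.

```python
import numpy as np

def lighting_asymmetry(face_gray: np.ndarray) -> float:
    """Absolute difference in mean luminance (0-255 scale) between
    the left and right halves of a grayscale face crop. Large values
    can hint at inconsistent lighting across the face."""
    h, w = face_gray.shape
    left = face_gray[:, : w // 2].mean()
    right = face_gray[:, w // 2 :].mean()
    return abs(float(left) - float(right))

# Evenly lit crop vs. a crop lit only from one side.
even = np.full((64, 64), 120, dtype=np.uint8)
uneven = np.hstack([np.full((64, 32), 40, dtype=np.uint8),
                    np.full((64, 32), 200, dtype=np.uint8)])
```

A real pipeline would combine many such signals (and learned features) rather than rely on any single heuristic.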
Each analysis returns a confidence percentage indicating the likelihood of the media being manipulated.
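Turning the model's raw sigmoid output into a user-facing verdict could look like this (the 0.5 decision threshold is an assumption, not the project's tuned value):

```python
def confidence_report(p_fake: float, threshold: float = 0.5) -> dict:
    """Convert the model's sigmoid output (probability the media is
    manipulated) into a verdict plus a confidence percentage.
    The 0.5 threshold is illustrative, not a tuned value."""
    label = "manipulated" if p_fake >= threshold else "authentic"
    # Confidence is in the chosen label, so flip for "authentic".
    confidence = p_fake if label == "manipulated" else 1.0 - p_fake
    return {"label": label, "confidence_pct": round(confidence * 100, 1)}
```

For example, a raw score of 0.92 reports "manipulated" at 92% confidence, while 0.08 reports "authentic" at the same confidence.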
Optimized the model using TensorFlow and OpenCV for faster inference times, with detailed performance tracking via precision-recall curves.
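The precision-recall tracking can be sketched as below; this plain-NumPy version does the same job as scikit-learn's `precision_recall_curve` by sweeping the decision threshold down through the model's scores.

```python
import numpy as np

def precision_recall_points(y_true, scores):
    """Compute (precision, recall) at each successive score cutoff,
    highest score first. y_true: 1 = fake, 0 = real."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    order = np.argsort(-scores)      # sort predictions by descending score
    y_sorted = y_true[order]
    tp = np.cumsum(y_sorted)         # true positives up to each cutoff
    fp = np.cumsum(1 - y_sorted)     # false positives up to each cutoff
    precision = tp / (tp + fp)
    recall = tp / y_true.sum()
    return precision, recall
```

Plotting precision against recall over these points gives the curve used to pick an operating threshold that balances missed fakes against false alarms.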