TruthScan Comprehensive AI Detection Glossary
A complete reference guide to AI detection and digital forensics terminology. Search or browse all terms to understand the technology behind content authenticity.
Acoustic Fingerprint
The unique, measurable characteristics of an audio signal. Just as a human fingerprint is unique, every voice produces distinctive shapes in its sound waves. TruthScan analyzes these to detect voice cloning, looking for the absence of natural breath sounds or the unnatural spectral flatness often found in synthetic speech.
Artifacts
Digital distortions, pixelation, or visual errors that appear in an image or video file. In the context of AI detection, these are often generative artifacts—illogical details like warped textures, strange blurring, or mismatched resolution blocks that indicate the image was synthesized rather than captured by a camera lens.
Blink Rate Analysis
A forensic technique that monitors how frequently and naturally a subject blinks in a video. Humans have a semi-predictable biological blink rhythm. Deepfakes, particularly older or lower-quality ones, often display subjects that blink too rarely, too quickly, or not at all.
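One common building block for blink counting is the Eye Aspect Ratio (EAR) of six eye landmarks. The sketch below is a minimal illustration, assuming a separate landmark detector (such as dlib or MediaPipe) already supplies those six (x, y) points per frame; the threshold and duration values are illustrative defaults, not TruthScan's actual parameters.

```python
import numpy as np

def eye_aspect_ratio(pts: np.ndarray) -> float:
    """EAR from six eye landmarks p1..p6 (corner, upper lid x2, corner, lower lid x2)."""
    v1 = np.linalg.norm(pts[1] - pts[5])  # first vertical eyelid distance
    v2 = np.linalg.norm(pts[2] - pts[4])  # second vertical eyelid distance
    h = np.linalg.norm(pts[0] - pts[3])   # horizontal eye-corner distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, threshold=0.21, min_frames=2):
    """Count dips of the EAR below threshold lasting at least min_frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks
```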
Confidence Score
A percentage value (0–100%) indicating the system's certainty that a piece of content is AI-generated. A high score (e.g., 99%) represents a definitive detection based on strong evidence, whereas a mid-range score (e.g., 55%) suggests ambiguity or insufficient data.
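As an illustration of how a raw classifier output can be mapped onto this 0–100% scale, here is a minimal sketch using the standard sigmoid squashing function; the logit values are invented for the example.

```python
import math

def confidence_score(logit: float) -> float:
    """Sigmoid-squash a raw logit into a 0-100% 'probability AI-generated'."""
    return 100.0 / (1.0 + math.exp(-logit))

print(round(confidence_score(4.6)))  # ~99: strong evidence of AI generation
print(round(confidence_score(0.2)))  # ~55: ambiguous, needs more data
```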
Deepfake
Media in which a person's likeness (face or voice) has been realistically swapped or manipulated using artificial intelligence. Unlike standard AI art, deepfakes are specifically designed to mimic real individuals, often making them say or do things they never did.
Detection Heatmap
A visual reporting tool that overlays a color gradient onto the analyzed image or video frame. Instead of a simple "Fake" label, the heatmap highlights the specific regions where manipulation was detected (e.g., coloring a face red to indicate a deepfake while leaving the background blue).
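A minimal sketch of how such an overlay can be rendered, assuming an upstream detector already produced a per-pixel manipulation-probability map (values in [0, 1]) matching the frame's height and width; this illustrates the blending step, not TruthScan's actual rendering code.

```python
import numpy as np

def overlay_heatmap(frame: np.ndarray, prob_map: np.ndarray,
                    alpha: float = 0.5) -> np.ndarray:
    """Blend red (likely manipulated) to blue (likely authentic) over an RGB frame."""
    heat = np.zeros_like(frame, dtype=np.float32)
    heat[..., 0] = prob_map * 255          # red channel marks suspect regions
    heat[..., 2] = (1.0 - prob_map) * 255  # blue channel marks clean regions
    blended = (1 - alpha) * frame.astype(np.float32) + alpha * heat
    return blended.astype(np.uint8)
```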
Diffusion Patterns
Subtle, noise-like textures specific to Diffusion Models (generative AI tools like Midjourney, DALL-E, or Stable Diffusion). These models generate images by refining static noise, often leaving behind a microscopic, grid-like pattern that is invisible to the naked eye but detectable by forensic algorithms.
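One simplified way to look for such periodic residue is to inspect the image's 2D frequency spectrum, where grid-like patterns show up as isolated peaks. The sketch below is a toy illustration; production detectors learn these signatures rather than thresholding a percentile.

```python
import numpy as np

def spectral_peaks(gray: np.ndarray, percentile: float = 99.9) -> np.ndarray:
    """Return coordinates of unusually strong peaks in the log-magnitude spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    log_mag = np.log1p(spectrum)
    return np.argwhere(log_mag > np.percentile(log_mag, percentile))
```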
Digital Manipulation
Any alteration of a media file using software tools (like Photoshop) or AI algorithms. While simple edits like brightness adjustments are common, forensically significant manipulation involves substantial changes, such as removing objects, warping facial features, or inserting fake elements, that alter the image's context or truthfulness. The model looks for inconsistencies in lighting, shadows that don't match the light source, or cloned patterns.
Ensemble Model
A detection strategy that uses multiple different AI models to analyze the same content simultaneously. TruthScan uses an ensemble approach to minimize errors; if one model is specialized in detecting Midjourney images and another in Photoshop edits, using them together ensures broader coverage and accuracy.
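A minimal sketch of the ensemble idea: each specialist detector scores the same content, and a weighted average produces the final verdict. The detector names and weights here are hypothetical, not TruthScan's actual model lineup.

```python
def ensemble_score(content, detectors, weights):
    """detectors: name -> callable returning a 0-1 score; weights: name -> float."""
    total = sum(weights.values())
    return sum(weights[name] * detect(content)
               for name, detect in detectors.items()) / total

# Example wiring (the two detector functions are placeholders):
# score = ensemble_score(image_bytes,
#                        detectors={"diffusion": detect_diffusion,
#                                   "edits": detect_edits},
#                        weights={"diffusion": 0.6, "edits": 0.4})
```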
EXIF Information
Exchangeable Image File Format, a standard for storing metadata in image files. It acts as a digital logbook, recording technical details such as the camera make/model, shutter speed, focal length, date, and sometimes GPS coordinates. The model looks for discrepancies, such as an image claiming to be from an iPhone but lacking the specific software tags an iPhone normally adds.
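Reading EXIF tags is straightforward with a library such as Pillow. The sketch below shows one simple consistency check; the file path and the specific heuristic are illustrative, and real pipelines check many such tags.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    """Map human-readable EXIF tag names to their values."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = read_exif("photo.jpg")  # placeholder path
if tags.get("Make") == "Apple" and "Software" not in tags:
    print("Suspicious: iPhone images normally carry an iOS Software tag.")
```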
False Negative
An error where the detection system fails to identify an AI-generated or manipulated image, incorrectly labeling it as "Real" or "Human." This typically happens with highly sophisticated next-generation AI models or with images that have been heavily compressed to hide their digital fingerprints.
False Positive
An instance where the detection system incorrectly identifies authentic, human-made content as AI-generated. While systems are designed to minimize this, false positives can occasionally be triggered by heavy compression, extreme filtering, or low-resolution files.
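Both error types (false negatives and false positives) are measured on labeled test sets. A minimal sketch, assuming lists of 0/1 labels where 1 means AI-generated and 0 means authentic:

```python
def error_rates(y_true: list, y_pred: list) -> dict:
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n_real = y_true.count(0) or 1  # guard against division by zero
    n_fake = y_true.count(1) or 1
    return {"false_positive_rate": fp / n_real,
            "false_negative_rate": fn / n_fake}

print(error_rates([0, 0, 1, 1, 1], [0, 1, 1, 0, 1]))
# {'false_positive_rate': 0.5, 'false_negative_rate': 0.333...}
```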
Ghosting / Ghosted Image
A visual anomaly where an object appears to have a faint, translucent duplicate or trail visible nearby. This is a common failure in AI generation when the model attempts to render moving objects or complex geometries, resulting in a blurry, double-exposure effect.
Hashing
A digital method of creating a unique fingerprint for a file. If even a single pixel in an image is changed, its hash value changes completely. TruthScan uses hashing to check whether a file matches a database of known deepfakes or confirmed authentic originals.
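A minimal sketch using Python's standard library: any single-pixel change produces a completely different SHA-256 digest.

```python
import hashlib

def file_hash(path: str) -> str:
    """SHA-256 digest of a file, read in chunks to handle large media."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# The digest can then be looked up in a database of known deepfakes
# or confirmed authentic originals.
```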
Inpainting
The process of using AI to generate or replace only a specific part of an existing image while preserving the surrounding area. Detection tools look for the seams or statistical inconsistencies where the AI-generated patch meets the original, organic photograph.
JPEG Ghosts
Faint compression outlines, invisible to the naked eye, that appear when an image has been saved multiple times at different quality levels. The model detects these mismatched compression signatures to find inserted objects or manipulated areas.
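The classic JPEG-ghost test re-saves the image across a sweep of quality levels and tracks the reconstruction error at each one; a region previously saved at a different quality yields a local error minimum (the "ghost") at that quality. A simplified sketch, assuming an RGB or grayscale input:

```python
import io
import numpy as np
from PIL import Image

def jpeg_ghost_curve(img: Image.Image, qualities=range(50, 100, 5)) -> dict:
    original = np.asarray(img.convert("L"), dtype=np.float32)
    errors = {}
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        resaved = np.asarray(Image.open(buf).convert("L"), dtype=np.float32)
        errors[q] = float(np.mean((original - resaved) ** 2))
    return errors  # dips in this curve hint at earlier compression levels
```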
Keyframe
A complete, standalone image within a video file. Most video frames only record changes from the previous moment to save space, but keyframes capture the full picture. TruthScan analyzes keyframes for higher-resolution artifacts that might be blurred out in the rest of the video.
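Keyframes (I-frames) can be extracted with standard tooling. A minimal sketch invoking the ffmpeg command line from Python (ffmpeg must be installed; the file names are placeholders):

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", "select='eq(pict_type,I)'",  # keep only intra-coded keyframes
    "-vsync", "vfr",                    # emit frames at their own timestamps
    "keyframe_%04d.png",
], check=True)
```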
Lip-Sync Drift
A temporal artifact in videos where the movement of the mouth does not perfectly align with the audio track. Even a mismatch of a few milliseconds can be a strong indicator of a deepfake, where an audio track is driving a computer-generated face.
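One simple way to estimate such drift is to cross-correlate a mouth-opening signal (derived from facial landmarks) with the audio energy envelope, both sampled at the video frame rate. A sketch, assuming both signals are precomputed upstream:

```python
import numpy as np

def estimate_drift_ms(mouth_open, audio_energy, fps: float = 30.0) -> float:
    a = np.asarray(mouth_open, dtype=np.float64)
    b = np.asarray(audio_energy, dtype=np.float64)
    a -= a.mean()
    b -= b.mean()
    xcorr = np.correlate(a, b, mode="full")
    lag = int(np.argmax(xcorr)) - (len(b) - 1)  # signed offset in frames
    return 1000.0 * lag / fps                   # offset in milliseconds
```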
Liveness Detection
A security process used to verify that the person in front of a camera is a real, breathing human present at that moment. The model looks for micro-movements like heartbeat-induced skin flushing and natural blinking that AI generators often fail to replicate.
Metadata
Data about data: hidden information embedded within a digital file, containing details such as the camera model, capture date, and GPS location. AI generators often strip this data or insert their own tags. Forensic analysis examines the structure of this metadata to verify whether it matches the alleged source device.
Noise
The random, grainy texture found in all digital photographs, similar to static on a TV. In real photos, this noise is caused by the camera sensor's sensitivity to light and follows a specific, uniform pattern. The model looks for clean spots in a noisy image or repetitive grid-like noise patterns typical of AI generation.
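A simplified version of this analysis subtracts a denoised copy from the image to isolate the noise residual, then measures its variance block by block; near-zero blocks ("clean spots") are candidates for synthesis or retouching. A sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.ndimage import median_filter

def residual_block_variance(gray: np.ndarray, block: int = 32) -> np.ndarray:
    g = gray.astype(np.float32)
    residual = g - median_filter(g, size=3)  # keep only the noise layer
    h, w = residual.shape
    return np.array([
        residual[y:y + block, x:x + block].var()
        for y in range(0, h - block + 1, block)
        for x in range(0, w - block + 1, block)
    ])
```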
Over-smoothing
A visual characteristic where textures (like skin or fabric) lack natural detail and appear plastic or waxy. AI models often average out complex details they don't understand, resulting in skin that looks like it has been heavily airbrushed.
Pixel-Level Inconsistencies
Irregularities in the relationships between adjacent pixels that are generally invisible to the naked eye. Cameras create images with a specific, natural noise pattern; AI generators often arrange pixels with a mathematical regularity or statistical fingerprint that differs from natural optical capture.
Quantization Tables
The specific mathematical recipe a camera or software uses to compress an image. If an image claims to be from a Nikon camera but uses the quantization table of Adobe Photoshop, it is a strong indicator that the image was edited or generated post-capture.
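Pillow exposes a JPEG's quantization tables directly, as a dict of table id to 64 coefficients, which makes the comparison described above easy to prototype. The file path is a placeholder:

```python
from PIL import Image

img = Image.open("photo.jpg")  # must be a JPEG for tables to exist
tables = getattr(img, "quantization", None)
if tables:
    for table_id, values in tables.items():
        print(table_id, list(values)[:8], "...")
# The coefficients can then be compared against known camera and
# software signatures.
```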
Recapture Attack
A fraud technique where a user displays a deepfake or static photo on a screen and then uses a second camera to film that screen. The model looks for moiré patterns, screen glare, or the visible pixel grid of the display.
Specular Highlights
The reflections of light sources visible on shiny surfaces, particularly the catchlights in a subject's eyes. In a real photo, the shape and position of these reflections match the environment (e.g., a square window). AI often renders these incorrectly, creating mismatched or geometrically impossible reflections.
Synthetic Persona
A photorealistic digital identity, including a face and profile, created entirely by AI. Unlike a deepfake, which mimics a real person, a synthetic persona does not correspond to any real human being. These are frequently used to create convincing fake profiles for social media bots or fraud.
Temporal Artifacts
Visual inconsistencies that appear over time rather than in a single frame. In AI videos, elements like skin tone, glasses, or background textures may flicker, vibrate, or shift unnaturally from frame to frame as the AI struggles to maintain temporal consistency.
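One coarse way to quantify this is to track how much consecutive frames differ and how erratic that difference is over time: stable footage changes smoothly, while flickering AI output produces a volatile difference signal. A toy sketch over equally sized grayscale frames:

```python
import numpy as np

def flicker_volatility(frames) -> float:
    diffs = [np.mean(np.abs(b.astype(np.float32) - a.astype(np.float32)))
             for a, b in zip(frames, frames[1:])]
    return float(np.std(diffs))  # higher = more erratic frame-to-frame change
```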
Texture
The surface quality and fine details of an object in an image, such as skin pores, fabric weaves, paper grain, or individual strands of hair. The model looks for unnatural smoothness or plastic-looking surfaces, as AI often over-homogenizes these complex details.
Upscaling Artifacts
Distortions introduced when an AI attempts to increase the resolution of a blurry or small image. The model looks for paradoxical details, such as very sharp eyelashes next to blurry skin pores, suggesting the AI hallucinated new details on a low-quality original.
Voice Cloning
The use of AI to generate a synthetic voice that mimics a specific person's tone, accent, and cadence. The model looks for robotic artifacts in the breathing patterns or spectral gaps that natural human speech does not exhibit.
Watermarking (Digital)
Invisible signals embedded into media files by responsible AI generators (like Google's SynthID) to label them as AI-created. TruthScan scans for these specific, hidden vendor signatures that confirm an image is AI-generated.
XMP Metadata
Extensible Metadata Platform. A standard used by editing software (like Adobe) to store history logs inside an image file. The model looks for traces of editing history, such as logs saying 'Created with Stable Diffusion' even if the main EXIF data was wiped.
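Because XMP is stored as an XML packet inside the file, even a raw byte scan can surface clues. A crude sketch; the search strings are illustrative, and real tools parse the packet properly:

```python
def find_xmp_clues(path: str) -> list:
    with open(path, "rb") as f:
        data = f.read()
    clues = []
    if b"<x:xmpmeta" in data:  # standard XMP packet root element
        clues.append("XMP packet present")
    for marker in (b"Stable Diffusion", b"Adobe Photoshop", b"Midjourney"):
        if marker in data:
            clues.append("generator string: " + marker.decode())
    return clues
```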
Y-Channel Analysis
Analyzing the Luminance (brightness) layer of an image separately from its color. AI often gets the color right but messes up the lighting physics. By stripping away color and looking only at the grayscale Y-Channel, these lighting inconsistencies become much more obvious.
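Isolating the Y channel is a simple weighted sum of the RGB channels. A minimal sketch using the ITU-R BT.601 luma weights, for an H x W x 3 array with values in [0, 255]:

```python
import numpy as np

def y_channel(rgb: np.ndarray) -> np.ndarray:
    """Return the grayscale luminance plane of an RGB image."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```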
Zero-Shot Detection
The ability of a detection model to identify a fake from a new AI generator it has never seen before. The model looks for universal errors—like lighting physics violations—that all current AI models tend to make, regardless of who built them.