TruthScan Comprehensive AI Detection Glossary

A complete reference guide to AI detection and digital forensics terminology. Search or browse all terms to understand the technology behind content authenticity.

A

Acoustic Fingerprint

The unique, measurable characteristics of an audio signal. Just as a human fingerprint is unique, a voice has specific shapes in its sound waves. TruthScan analyzes these to detect voice cloning, looking for the lack of natural breath sounds or the flatness often found in synthetic speech.
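
As a rough illustration, one measurable property related to this flatness is spectral flatness. A minimal Python sketch of a single simplified feature (real analysis combines many such measurements per frame):

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.

    Values near 1.0 are noise-like, values near 0.0 are tonal;
    unnaturally stable flatness across frames can hint at synthesis.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # avoid log(0)
    return float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))
```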

Adversarial Attack

Techniques designed to fool AI detection systems by making small, deliberate changes to images or audio. Attackers may add imperceptible perturbations or apply transformations to evade detectors. Robust detection models are trained to resist such evasion attempts.
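
A minimal sketch of one well-known perturbation technique, the fast gradient sign method (FGSM); `loss_gradient` is a hypothetical callable standing in for the gradient of the detector's loss with respect to the input pixels:

```python
import numpy as np

def fgsm_perturb(image: np.ndarray, loss_gradient, epsilon: float = 2 / 255) -> np.ndarray:
    # image: float array in [0, 1]; loss_gradient: hypothetical stand-in
    # for dLoss/dPixels of the detector under attack
    perturbation = epsilon * np.sign(loss_gradient(image))
    return np.clip(image + perturbation, 0.0, 1.0)  # keep pixel values valid
```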

Artifacts

Digital distortions, pixelation, or visual errors that appear in an image or video file. In the context of AI detection, these are often generative artifacts—illogical details like warped textures, strange blurring, or mismatched resolution blocks that indicate the image was synthesized rather than captured by a camera lens.

B

Blink Rate Analysis

A forensic technique that monitors how frequently and naturally a subject blinks in a video. Humans have a semi-predictable biological blink rhythm. Deepfakes, particularly older or lower-quality ones, often display subjects that blink too rarely, too quickly, or not at all.
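
A sketch of how that rhythm could be measured, assuming per-frame eye landmarks in the common six-point ordering (e.g., from a face-landmark library):

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    # eye: six (x, y) landmarks; the ratio drops sharply when the eye closes
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

def blinks_per_minute(ear_series: np.ndarray, fps: float, threshold: float = 0.21) -> float:
    closed = ear_series < threshold
    blinks = np.count_nonzero(~closed[:-1] & closed[1:])  # open-to-closed transitions
    return blinks * 60.0 * fps / len(ear_series)
```

A healthy adult typically blinks around 15–20 times per minute; rates far outside that band flag a video for closer review.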

C

Chain of Custody

A documented process that tracks who has handled digital evidence and when, ensuring its integrity from capture to analysis. In forensic investigations, maintaining chain of custody is critical to prove that media files were not altered between acquisition and detection.

Clone Detection

A forensic technique that identifies regions of an image that have been duplicated from elsewhere in the same image. Detection algorithms compare pixel blocks and look for statistically identical or near-identical areas, which indicates that an attacker cloned a region to hide or alter content.
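
A toy version of the idea, grouping byte-identical blocks in a grayscale array (real detectors compare quantized DCT coefficients or feature descriptors so that near-duplicates also match):

```python
import numpy as np
from collections import defaultdict

def exact_duplicate_blocks(gray: np.ndarray, block: int = 16, stride: int = 8):
    """Group coordinates of byte-identical blocks in a 2-D uint8 image."""
    seen = defaultdict(list)
    h, w = gray.shape
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            seen[gray[y:y + block, x:x + block].tobytes()].append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]
```

Note that naturally uniform regions (a clear sky, a white wall) also produce matches, so practical tools filter out low-texture blocks first.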

Compression Artifacts

Distortions introduced when an image or video is compressed and decompressed (e.g., JPEG, MPEG). These differ from generative artifacts—compression causes blockiness, ringing, or color banding. Forensic analysis can detect re-compression or inconsistent compression across regions.

Confidence Score

A percentage value (0–100%) indicating the system's certainty that a piece of content is AI-generated. A high score (e.g., 99%) reflects strong evidence of AI generation, whereas a mid-range score (e.g., 55%) suggests ambiguity or insufficient data.

Content Provenance

Information that traces the origin and edit history of a piece of media. Standards like C2PA (Coalition for Content Provenance and Authenticity) and CAI (Content Authenticity Initiative) enable cryptographic signing so that viewers can verify how content was created and modified.

Copy-Move Forgery

A type of image forgery where a region is copied from one part of an image and pasted elsewhere in the same image. Attackers use copy-move to hide objects, duplicate elements, or create fake scenes. Forensic tools detect it by finding statistically identical pixel blocks.

D

Deepfake

Media in which a person's likeness (face or voice) has been realistically swapped or manipulated using artificial intelligence. Unlike standard AI art, deepfakes are specifically designed to mimic real individuals, often making them say or do things they never did.

Detection Heatmap

A visual reporting tool that overlays a color gradient onto the analyzed image or video frame. Instead of a simple 'Fake' label, the heatmap highlights the specific regions where manipulation was detected (e.g., coloring a face red to indicate a deepfake while leaving the background blue).
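
A minimal sketch of such an overlay with matplotlib, assuming a per-pixel manipulation-probability map produced by some detector:

```python
import matplotlib.pyplot as plt

def show_heatmap(image, scores, alpha=0.4):
    # image: H x W x 3 array in [0, 1]; scores: H x W probabilities in [0, 1]
    plt.imshow(image)
    plt.imshow(scores, cmap="jet", alpha=alpha, vmin=0.0, vmax=1.0)
    plt.axis("off")
    plt.show()
```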

Diffusion Patterns

Subtle, noise-like textures specific to Diffusion Models (generative AI tools like Midjourney, DALL-E, or Stable Diffusion). These models generate images by refining static noise, often leaving behind a microscopic, grid-like pattern that is invisible to the naked eye but detectable by forensic algorithms.

Digital Manipulation

Any alteration of a media file using software tools (like Photoshop) or AI algorithms. While simple edits like brightness adjustments are common, forensically significant manipulation involves substantial changes—such as removing objects, warping facial features, or inserting fake elements—that alter the image's context or truthfulness. The model looks for inconsistencies in lighting, shadows that don't match the light source, or cloned patterns.

E

ELA (Error Level Analysis)

A forensic technique that re-saves an image at a known JPEG quality level and compares the result with the original. Edited or AI-generated regions often show different error levels than the untouched areas—parts that were added or altered stand out as brighter or darker in the analysis.
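
A compact sketch of the technique with Pillow (the re-save quality and the scaling of the result vary between tools):

```python
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    diff = ImageChops.difference(original, Image.open("_resaved.jpg"))
    # amplify the residual so regions with a different compression
    # history stand out as brighter patches
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda p: min(255, p * 255 // max_diff))
```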

Ensemble Model

A detection strategy that uses multiple different AI models to analyze the same content simultaneously. TruthScan uses an ensemble approach to minimize errors; if one model is specialized in detecting Midjourney images and another in Photoshop edits, using them together ensures broader coverage and accuracy.
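
In its simplest form, an ensemble is a weighted combination of per-model scores. A sketch, assuming each model is a callable returning the probability that content is AI-generated:

```python
def ensemble_score(content, models, weights=None) -> float:
    # models: callables returning a probability in [0, 1] (hypothetical
    # interface); weights default to a simple average
    weights = weights or [1.0 / len(models)] * len(models)
    return sum(w * m(content) for w, m in zip(weights, models))
```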

Evasion Techniques

Methods used to avoid or reduce the likelihood of detection by AI content detectors. These may include image compression, color adjustment, adding noise, or using adversarial perturbations. Understanding evasion helps improve detector robustness.

EXIF Information

Exchangeable Image File Format. A standard for storing metadata in image files. It acts as a digital logbook, recording technical details such as the camera make/model, shutter speed, focal length, date, and sometimes GPS coordinates. The model looks for discrepancies, such as an image claiming to be from an iPhone but lacking the software tags an iPhone typically adds.
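
Reading these tags takes only a few lines with Pillow; a sketch:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    # map numeric EXIF tag ids to their human-readable names
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
```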

F

Face Morphing

A technique that blends two or more faces into a single, smooth transition. Used in identity fraud and synthetic persona creation, morphed faces can evade recognition systems. Detection looks for inconsistent facial landmarks or blending artifacts.

Face Swap

A specific type of deepfake that replaces one person's face with another in video or images. Unlike full face synthesis, face swap transplants an existing face onto a different body or scene. Detection focuses on boundary artifacts, lighting mismatches, and unnatural facial movements.

False Negative

An error where the detection system fails to identify an AI-generated or manipulated image, incorrectly labeling it as 'Real' or 'Human'. This typically happens with highly sophisticated next-gen AI models or images that have been heavily compressed to hide their digital fingerprints.

False Positive

An instance where the detection system incorrectly identifies authentic, human-made content as AI-generated. While systems are designed to minimize this, false positives can occasionally be triggered by heavy compression, extreme filtering, or low-resolution files.

G

GAN (Generative Adversarial Network)

An AI architecture where two neural networks compete: a generator creates fake content and a discriminator tries to detect it. Many early deepfakes and synthetic images used GANs. Detection models are trained to recognize the distinctive artifacts GANs tend to produce.

Ghosting / Ghosted Image

A visual anomaly where an object appears to have a faint, translucent duplicate or trail visible nearby. This is a common failure in AI generation when the model attempts to render moving objects or complex geometries, resulting in a blurry, double-exposure effect.

H

Hallucination

When an AI model generates plausible-looking details that are factually wrong, inconsistent, or nonsensical. In images, this may manifest as extra fingers, impossible objects, or logically incoherent backgrounds. Detection can identify these semantic or structural impossibilities.

Hashing

A digital method of creating a unique fingerprint for a file. If even a single pixel in an image is changed, its hash value changes completely. TruthScan uses hashing to check if a file matches a database of known deepfakes or verified originals.
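
A sketch of such a fingerprint using the standard SHA-256 algorithm:

```python
import hashlib

def file_sha256(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):  # stream the file in 1 MB chunks
            h.update(block)
    return h.hexdigest()  # changing a single pixel yields a different digest
```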

I

Inpainting

The process of using AI to generate or replace only a specific part of an existing image while preserving the surrounding area. Detection tools look for the seams or statistical inconsistencies where the AI-generated patch meets the original, organic photograph.

J

JPEG Ghosts

Faint outlines, invisible to the naked eye, that appear when an image has been saved multiple times at different quality levels. The model detects these mismatched compression signatures to find inserted objects or manipulated areas.

K

Keyframe

A complete, standalone image within a video file. Most video frames only record changes from the previous moment to save space, but keyframes capture the full picture. TruthScan analyzes keyframes for higher-resolution artifacts that might be blurred out in the rest of the video.
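
A sketch of listing a video's keyframes (I-frames) with ffprobe, which must be installed separately:

```python
import subprocess

def keyframe_indices(path: str) -> list[int]:
    """Frame indices of I-frames (keyframes) in the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return [i for i, line in enumerate(out.splitlines())
            if line.strip().strip(",") == "I"]
```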

L

Lip-Sync Drift

A temporal artifact in videos where the movement of the mouth does not perfectly align with the audio track. Even a mismatch of a few milliseconds can be a strong indicator of a deepfake, where an audio track is driving a computer-generated face.

Liveness Detection

A security process used to verify that the person behind a camera is a real, breathing human present at that moment. The model looks for micro-signals like heartbeat-induced changes in skin tone and natural blinking that AI generators often fail to replicate.

M

Metadata

Hidden 'data about data' embedded within a digital file, containing details such as the camera model, capture date, and GPS location. AI generators often strip this data or insert their own tags. Forensic analysis examines the structure of this metadata to verify that it matches the alleged source device.

Model Fingerprinting

The process of identifying which AI model or generator created a piece of content. Different models leave distinct statistical or structural signatures. Forensic tools can sometimes attribute synthetic content to specific vendors like Midjourney, DALL-E, or Stable Diffusion.

N

Neural Rendering

AI-driven techniques that synthesize or manipulate images and video using learned representations. This includes neural radiance fields (NeRF), neural texture synthesis, and similar methods. Detection looks for the characteristic artifacts these approaches produce.

Noise

The random, grainy texture found in all digital photographs, similar to static on a TV. In real photos, this noise is caused by the camera sensor's sensitivity to light and follows a specific, uniform pattern. The model looks for clean spots in a noisy image or repetitive grid-like noise patterns typical of AI generation.
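
One common way to expose these patterns is a noise residual: subtract a denoised copy from the image and inspect what remains. A sketch using SciPy:

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(gray: np.ndarray) -> np.ndarray:
    # residual = image minus a denoised copy; genuine sensor noise looks
    # irregular, while AI output often leaves flat or periodic residuals
    gray = gray.astype(float)
    return gray - median_filter(gray, size=3)
```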

O

Outpainting

The AI process of extending an image beyond its original borders, generating new content to fill the expanded canvas. Unlike inpainting, which fills holes within an image, outpainting adds peripheral content. Detection looks for seams or inconsistencies where generated extensions meet the original frame.

Over-smoothing

A visual characteristic where textures (like skin or fabric) lack natural detail and appear plastic or waxy. AI models often average out complex details they don't understand, resulting in skin that looks like it has been heavily airbrushed.

P

Pixel-Level Inconsistencies

Irregularities in the relationship between adjacent pixels that are generally invisible to the naked eye. Cameras create images with a specific, natural noise pattern; AI generators often arrange pixels with a mathematical precision or statistical fingerprint that differs from natural optical capture.

Q

Quantization Tables

The specific mathematical recipe a camera or software uses to compress an image. If an image claims to be from a Nikon camera but uses the quantization table of Adobe Photoshop, it is a strong indicator that the image was edited or generated post-capture.
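
Pillow exposes the parsed tables on JPEG images, so a comparison against known camera or software tables can start as simply as:

```python
from PIL import Image

def jpeg_quant_tables(path: str):
    # returns a dict of table id -> 64 coefficients for JPEG files,
    # or None for non-JPEG inputs
    return getattr(Image.open(path), "quantization", None)
```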

R

Recapture Attack

A fraud technique where a user displays a deepfake or static photo on a screen and then uses a second camera to film that screen. The model looks for moiré patterns, screen glare, or the visible pixel grid of the display.

S

Source Attribution

Determining which device, software, or AI model produced a piece of content. This may involve analyzing metadata, compression fingerprints, or model-specific artifacts. Source attribution helps verify authenticity and trace manipulation.

Specular Highlights

The reflection of light sources visible on shiny surfaces, particularly the eye shine in a subject's eyes. In a real photo, the shape and position of these reflections match the environment (e.g., a square window). AI often renders these incorrectly, creating mismatched or geometrically impossible reflections.

Splicing

A form of image forgery where regions from different source images are combined into a single composite. Also called compositing or cut-and-paste. Detection looks for lighting, perspective, or texture inconsistencies between the spliced regions.

Style Transfer

An AI technique that applies the visual style of one image (e.g., a painting) to the content of another. While often used for artistic effect, it can also obscure manipulation. Detection may identify unnatural style boundaries or statistical fingerprints of style-transfer models.

Synthetic Persona

A photorealistic digital identity—including a face and profile—created entirely by AI. Unlike a deepfake, which mimics a real person, a synthetic persona does not correspond to any human being. These are frequently used to create convincing fake profiles for social media bots or fraud.

T

Tampering

The intentional alteration of digital media to deceive viewers. Broader than digital manipulation, tampering encompasses any change meant to misrepresent reality—from splicing images to altering metadata. Detection systems are designed to identify signs of tampering.

Temporal Artifacts

Visual inconsistencies that appear over time rather than in a single frame. In AI videos, elements like skin tone, glasses, or background textures may flicker, vibrate, or shift unnaturally from frame to frame as the AI struggles to maintain temporal consistency.

Texture

The surface quality and fine details of an object in an image, such as skin pores, fabric weaves, paper grain, or individual strands of hair. The model looks for unnatural smoothness or plastic-looking surfaces, as AI often over-homogenizes these complex details.

TTS (Text-to-Speech)

Technology that converts written text into synthetic speech. While voice cloning mimics a specific person, general TTS creates new voices from scratch. Both can be used for synthetic audio. Detection looks for robotic cadence, unnatural pauses, or frequency artifacts.

U

Upscaling Artifacts

Distortions introduced when an AI attempts to increase the resolution of a blurry or small image. The model looks for paradoxical details, such as very sharp eyelashes next to blurry skin pores, suggesting AI hallucinated new details on a low-quality original.

V

Voice Cloning

The use of AI to generate a synthetic voice that mimics a specific person's tone, accent, and cadence. The model looks for robotic artifacts in the breathing patterns or frequency gaps that human vocal cords cannot produce.

Voiceprint

A biometric identifier derived from the unique characteristics of a person's voice—similar to an acoustic fingerprint. Voiceprints can be used for authentication or to detect when synthetic speech is impersonating a known voice.

W

Watermarking (Digital)

Invisible signals embedded into media files by responsible AI providers (for example, via Google's SynthID) to label them as AI-created. TruthScan scans for these specific, hidden vendor signatures that confirm an image is AI-generated.

X

XMP Metadata

Extensible Metadata Platform. A standard used by editing software (like Adobe) to store history logs inside an image file. The model looks for traces of editing history, such as logs saying 'Created with Stable Diffusion' even if the main EXIF data was wiped.
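
Because the XMP packet is plain XML embedded in the file, a simple byte scan can recover it even when other metadata has been wiped; a sketch:

```python
def extract_xmp(path: str) -> str | None:
    data = open(path, "rb").read()
    start = data.find(b"<x:xmpmeta")   # opening of the XMP packet
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "replace")
```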

Y

Y-Channel Analysis

Analyzing the luminance (brightness) channel of an image separately from its color. AI often gets the color right but messes up the lighting physics. By stripping away color and looking only at the grayscale Y-channel, these lighting inconsistencies become much more obvious.
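
Isolating the Y-channel is a one-liner with Pillow:

```python
from PIL import Image

def luminance_channel(path: str) -> Image.Image:
    # convert to YCbCr and keep only the Y (luminance) plane
    return Image.open(path).convert("YCbCr").split()[0]
```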

Z

Zero-Shot Detection

The ability of a detection model to identify a fake from a new AI generator it has never seen before. The model looks for universal errors—like lighting physics violations—that all current AI models tend to make, regardless of who built them.