
AI Deception Detection:
How Facial Analysis Replaces the Polygraph

Polygraph misses nearly half of genuine deception events. AI facial Action Unit analysis detects signals that cannot be voluntarily suppressed — measuring the 44 involuntary muscle movements that accompany stress and concealment in real time, without contact sensors.

The polygraph's fundamental flaw

The polygraph — the standard tool for deception detection in security vetting for over 60 years — measures physiological arousal: cardiovascular activity, respiratory patterns, galvanic skin response. These signals correlate with stress. They do not distinguish between stress caused by deception and stress caused by the examination environment itself.

More critically, these signals are accessible to conscious modulation. Countermeasure techniques — controlled breathing, targeted muscle tension, mental imagery — are documented, widely known, and effective at suppressing the physiological patterns the polygraph is designed to detect. Independent research consistently cites field false negative rates of 40–47%. The most motivated subjects — trained intelligence operatives, ideologically committed insiders — are precisely those who practise countermeasures.

Why facial Action Units cannot be counterfeited

Involuntary facial muscle movements — specifically the 44 Action Units mapped by Paul Ekman's Facial Action Coding System — operate on a different timescale to physiological arousal. They occur within 200–400 milliseconds of an emotional stimulus, faster than conscious awareness and far faster than voluntary suppression can intervene.

A subject experiencing stress or concealment will produce specific AU patterns — combinations such as AU4 (brow lowerer), AU7 (lid tightener), and AU14 (dimpler), which accompany concealed negative affect — regardless of whether they are simultaneously suppressing galvanic skin response or maintaining controlled breathing. The neural pathways are different. The countermeasure window does not exist.
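The co-occurrence logic can be illustrated with a minimal sketch. The AU codes follow FACS, but the intensity scale, threshold, and function names here are assumptions for illustration, not EchoDepth's published method:

```python
# Hypothetical sketch: flag frames in which a concealment-associated
# AU combination (AU4 + AU7 + AU14) co-occurs above an intensity threshold.
# Intensities are assumed to be normalised to a 0-1 scale per frame.

CONCEALMENT_PATTERN = {"AU4", "AU7", "AU14"}

def pattern_active(frame_aus: dict, threshold: float = 0.3) -> bool:
    """Return True only if every AU in the pattern exceeds the threshold."""
    return all(frame_aus.get(au, 0.0) >= threshold for au in CONCEALMENT_PATTERN)

frames = [
    {"AU4": 0.1, "AU7": 0.2, "AU14": 0.0},  # relaxed baseline
    {"AU4": 0.6, "AU7": 0.5, "AU14": 0.4},  # concealment-consistent
]
flags = [pattern_active(f) for f in frames]
```

Requiring all three AUs jointly, rather than any one alone, is what distinguishes a patterned concealment signal from an isolated muscle movement.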

EchoDepth measures all 44 FACS-compliant AUs per frame in real time, mapping each frame's AU pattern to a Valence, Arousal, and Dominance state. The output is Confidence (the degree to which positive signals predominate), Instability (frame-to-frame variance indicating suppression effort), and Net Confidence — a composite signal that captures the gap between performed composure and genuine emotional state.
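To make the three signals concrete, here is an illustrative sketch over a per-frame valence series. The exact formulas are not published; these definitions — positive-frame share for Confidence, variance of frame-to-frame deltas for Instability, and their difference for Net Confidence — are assumptions chosen to mirror the descriptions above:

```python
# Illustrative definitions only; not EchoDepth's published formulas.
from statistics import pvariance

def confidence(valence):
    """Confidence: share of frames in which valence is positive."""
    return sum(v > 0 for v in valence) / len(valence)

def instability(valence):
    """Instability: variance of frame-to-frame valence deltas
    (a proxy for suppression effort)."""
    deltas = [b - a for a, b in zip(valence, valence[1:])]
    return pvariance(deltas)

def net_confidence(valence):
    """Net Confidence: composure penalised by suppression effort."""
    return confidence(valence) - instability(valence)

steady = [0.5, 0.5, 0.6, 0.5]       # genuine composure
erratic = [0.6, -0.4, 0.7, -0.5]    # performed composure, high variance
```

A steady series scores higher than an erratic one even when both show positive frames — which is the gap between performed and genuine state the composite is meant to expose.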

Deployment in security environments

EchoDepth operates on standard IP cameras and webcams without contact sensors, wearables, or hardware modifications. In a SCIF-compatible configuration, all processing occurs on-premise with no external network dependency. Video is processed in real time; no biometric data is retained post-session unless explicitly configured for evidential logging.
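The retention behaviour described above can be sketched as a configuration check. The keys, camera URL, and helper function are hypothetical — a plausible shape, not a documented schema:

```python
# Hypothetical deployment configuration; key names are illustrative.
config = {
    "processing_mode": "on_premise",   # SCIF-compatible: no external egress
    "video_sources": ["rtsp://camera.local/stream"],  # hypothetical source
    "retain_biometric_data": False,    # default: nothing persists post-session
    "evidential_logging": False,       # retention requires explicit opt-in
}

def retention_allowed(cfg: dict) -> bool:
    """Biometric data is retained only when evidential logging is
    explicitly configured AND retention is enabled."""
    return bool(cfg["retain_biometric_data"] and cfg["evidential_logging"])
```

With the defaults above, nothing is retained; both flags must be deliberately set before any biometric data survives the session.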

Integration with existing SIEM and SOAR platforms is via structured evidential output — timestamped AU readings, Net Confidence timeline, and flagged interval annotations. The system does not produce a verdict; it produces a calibrated signal that informs human review.
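The structured output described above might look like the following. The field names and values are hypothetical, intended only to show the shape of an evidential record — timestamped AU readings, a Net Confidence timeline, flagged intervals, and the deliberate absence of a verdict:

```python
# Hypothetical evidential record for a SIEM/SOAR pipeline; not a
# published schema. Serialised as JSON for transport.
import json

event = {
    "session_id": "example-session",                # hypothetical identifier
    "timestamp": "2024-01-01T00:00:00Z",
    "au_readings": [
        {"frame": 0, "AU4": 0.6, "AU7": 0.5, "AU14": 0.4},
    ],
    "net_confidence_timeline": [0.42],
    "flagged_intervals": [
        {"start_s": 12.0, "end_s": 15.5, "reason": "instability spike"},
    ],
    "verdict": None,  # the system emits a signal, never a determination
}

payload = json.dumps(event)
restored = json.loads(payload)
```

Keeping `verdict` explicitly null in the record reinforces the design stance: the downstream human reviewer, not the system, makes the consequential call.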

Legal and evidential framework

AI deception detection output is appropriately characterised as a risk signal, not a determination of guilt. It is designed to inform the allocation of human investigative resource — flagging individuals for enhanced interview, extended vetting, or access restriction review — not to replace the legal and procedural safeguards that apply to security vetting decisions.

This framing is consistent with UK GDPR requirements for automated decision-making: EchoDepth provides decision support, with a human decision-maker retaining authority over all consequential determinations.

Related capability

Deception detection for defence, intelligence, and security vetting

FACS-based AU analysis. No contact. No countermeasure vulnerability. Structured evidential output for human review.

Deception Detection Solution · Request a Technical Briefing