EchoDepth vs Existing Approaches
Polygraph, UEBA, and wearable sensors each address a different part of the human risk problem. None of them reads the 44 involuntary facial Action Units that FACS research has correlated with stress, deception, and cognitive overload. EchoDepth does.
EchoDepth by Cavefish Ltd · Cardiff, Wales · Camera-based emotion recognition AI
Full Capability Matrix
How EchoDepth stacks up across the capabilities that matter most to defence, intelligence, and security procurement teams.
| Capability | Polygraph | UEBA / SIEM | Wearable Sensors | EchoDepth |
|---|---|---|---|---|
| Reads emotional / cognitive state | ⚠ Indirect (anxiety proxy) | ✗ Digital events only | ⚠ Physiological only | ✓ 44 FACS Action Units, VAD output |
| No contact / no wearables required | ✗ Physical sensors attached | ✓ | ✗ Worn device required | ✓ Camera only |
| Works on existing cameras / infrastructure | ✗ | ✗ | ✗ New hardware required | ✓ Existing CCTV / webcam |
| Real-time continuous monitoring | ✗ Session / examiner only | ⚠ Digital events only | ✓ Physiological stream | ✓ ~700ms latency, continuous AU stream |
| Air-gap / SCIF deployable | ⚠ Examiner-held device | ⚠ Partial | ✗ | ✓ Full on-premise Docker, zero outbound |
| UK data residency standard | ✗ | ⚠ Varies by vendor | ⚠ Varies by vendor | ✓ Default; all processing in-country |
| SIEM / platform integration | ✗ | ✓ Native | ⚠ Partial | ✓ REST API, WebSocket, Splunk, Sentinel, QRadar |
| Scientific validity (peer-reviewed basis) | ✗ NAS 2003: no consensus | ✓ Behavioural analytics | ⚠ Varies by metric | ✓ FACS — 40+ years peer-reviewed research |
| Insider threat continuous vetting | ✗ Point-in-time only | ⚠ Digital behaviour only | ⚠ Physiological only | ✓ Emotional baseline + anomaly scoring |
| Operator readiness / fatigue monitoring | ✗ | ✗ | ⚠ Heart rate / SpO2 only | ✓ Cognitive load, arousal, fatigue AU patterns |
| Structured audit-ready output | ⚠ Examiner report only | ✓ Log events | ⚠ Raw sensor data | ✓ Timestamped JSON, DSAT-compatible records |
| UK GDPR / biometric data compliant | ⚠ Jurisdiction-dependent | ✓ | ⚠ Device-dependent | ✓ Pseudonymisation, RBAC, full audit log |
The US National Academy of Sciences concluded in 2003 that polygraph lacks sufficient scientific validity for security screening. Courts and procurement bodies are aware. You need something that withstands scrutiny.

- Up to 47% — polygraph false negative rate in controlled studies
- 40+ years — peer-reviewed research the FACS technology was built on
- 44 — facial Action Units EchoDepth reads per frame
- ~700ms — latency, no examiner needed
Polygraph — Fundamental Limits
- ✗ Measures anxiety, not deception — highly susceptible to countermeasures
- ✗ Requires physical sensors attached to the body — invasive and impractical in many settings
- ✗ Examiner-dependent — subjective, not reproducible, not auditable
- ✗ Point-in-time only — no continuous vetting capability
- ✗ Output is a subjective report, not a structured data record
EchoDepth — FACS-Grounded Alternative
- ✓ 44 peer-reviewed FACS Action Units per frame — involuntary facial signals, not peripheral anxiety proxies
- ✓ Camera only — no physical contact, works in interview rooms, training environments, SCIF
- ✓ Timestamped per-question stress and suppression output — auditable, reproducible, legally defensible
- ✓ No specialist examiner required — AI-driven, consistent across sessions and operators
- ✓ DSAT-compatible structured output — integrates directly into intelligence and procurement workflows
EchoDepth position: EchoDepth is positioned as a scientifically grounded alternative to polygraph, not as a claimed replacement. It uses 44 peer-reviewed FACS Action Units to produce timestamped, structured, legally reviewable output — addressing the core weaknesses of polygraph without requiring physical contact, an examiner, or specialist hardware. It does not claim to detect lies; it surfaces involuntary physiological correlates of deception, stress, and suppression.
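As a sketch of what "timestamped, structured, legally reviewable output" could look like in practice, the snippet below builds a per-question interview record. All field names and the record shape are illustrative assumptions, not EchoDepth's documented schema:

```python
# Illustrative sketch of a per-question audit record.
# Field names are assumptions for illustration, not a documented EchoDepth schema.
from datetime import datetime, timezone

def make_question_record(session_id, question_id, au_intensities, stress, suppression):
    """Build one audit-ready record for a single interview question.

    au_intensities: dict mapping FACS AU codes (e.g. "AU04") to 0-1 intensities.
    """
    return {
        "session_id": session_id,
        "question_id": question_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601, UTC
        "action_units": au_intensities,
        "stress_score": stress,            # 0-1, derived from the AU pattern
        "suppression_score": suppression,  # 0-1, masked-expression indicator
    }

record = make_question_record("S-042", "Q7", {"AU04": 0.61, "AU07": 0.44}, 0.72, 0.35)
```

Because each record is a plain timestamped structure rather than an examiner's narrative, it can be stored, replayed, and reviewed per question, which is the property the comparison above turns on.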
UEBA monitors digital behaviour. It cannot see the human behind it.
User and Entity Behaviour Analytics platforms are excellent at detecting anomalous digital activity — unusual file access, off-hours logins, lateral movement. They have zero visibility into the emotional and cognitive state that precedes those events. EchoDepth fills that gap and feeds directly into your existing SIEM.
UEBA / SIEM — What It Misses
- ⚠ Monitors digital behaviour only — no signal until an act has already occurred
- ⚠ No visibility into emotional or cognitive state preceding an insider incident
- ⚠ High false positive rates when used alone — alert fatigue in SOC teams
- ⚠ Cannot distinguish between stressed and compromised operator behaviour
EchoDepth — The Human Layer
- ✓ Continuous emotional baseline per individual — surfaces anomalies before digital events occur
- ✓ REST API and WebSocket integration with Splunk, Microsoft Sentinel, IBM QRadar
- ✓ Enriches SIEM alerts with human-layer context — reduces false positive triage time
- ✓ Structured JSON with ISO 8601 timestamps — compatible with standard SIEM alert schemas
EchoDepth position: EchoDepth is designed to complement UEBA and SIEM, not replace it. Digital behaviour monitoring and human emotional monitoring are two different layers of the same problem. EchoDepth provides the pre-digital signal that tells you why an anomaly may be occurring — and integrates directly into your existing security stack.
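To make the "feeds into your existing SIEM" claim concrete, here is a minimal sketch of wrapping a human-layer anomaly in Splunk's HTTP Event Collector envelope. The outer keys follow Splunk's HEC event format; the inner payload fields are assumptions, not EchoDepth's documented schema:

```python
import json
import time

def to_splunk_hec_event(subject_id, anomaly_score, deviation_label):
    """Wrap a hypothetical EchoDepth anomaly in a Splunk HEC envelope.

    The outer keys (time / sourcetype / event) follow Splunk's HTTP Event
    Collector format; the inner payload fields are illustrative only.
    """
    return json.dumps({
        "time": time.time(),                # epoch seconds, per the HEC spec
        "sourcetype": "echodepth:anomaly",  # assumed custom sourcetype name
        "event": {
            "subject_id": subject_id,            # pseudonymised identifier
            "anomaly_score": anomaly_score,      # 0-1 deviation from baseline
            "deviation_label": deviation_label,  # which baseline dimension moved
        },
    })

payload = to_splunk_hec_event("subj-7f3a", 0.83, "arousal_spike")
```

An equivalent mapping to Microsoft Sentinel or IBM QRadar would change only the envelope, which is why the positioning above describes EchoDepth as a layer on top of the existing stack rather than a parallel console.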
Wearable sensors read the body. EchoDepth reads the face — without touching anyone.
ECG patches, GSR bands, and smart watches provide useful physiological data. They require the subject to wear a device, which creates friction in operational environments and introduces movement artefact. EchoDepth needs only a camera already in the room.
Wearable Sensors — Operational Limits
- ✗ Physical attachment required — not viable in many interview, training, or operational contexts
- ✗ Movement artefact and environmental noise degrade signal quality
- ✗ Device management, charging, and hygiene overhead in operational deployments
- ✗ Peripheral physiological signals (heart rate, GSR) — not facial Action Unit analysis
- ✗ Not SCIF or air-gap deployable in most configurations
EchoDepth — Non-Contact Alternative
- ✓ Standard RGB camera at 720p minimum — existing CCTV, interview room cameras, or laptops
- ✓ No physical contact with the subject at any stage — zero friction, zero device overhead
- ✓ 44 FACS facial Action Units — richer and more nuanced signal than peripheral physiological data alone
- ✓ Fully air-gap and SCIF deployable — no wireless, no cloud, no external transmission
- ✓ ~700ms latency — real-time operator readiness and fatigue monitoring without wearables
EchoDepth position: EchoDepth is not positioned against wearable sensors as a category — physiological data and facial AU data are complementary. For environments where wearing a sensor is impractical, intrusive, or operationally unacceptable, EchoDepth provides a non-contact alternative that reads a richer facial signal from existing camera infrastructure.
Every tool covers one layer.
EchoDepth covers the one none of them can.
Scientifically grounded. No contact. Auditable.
FACS-based AU analysis with peer-reviewed methodology. Structured evidential output. No examiner. Deployable anywhere a camera exists.
Pre-digital signal. Human layer. Integrated.
Surfaces emotional anomalies before they become digital events. REST API feeds directly into Splunk, Sentinel, QRadar. Complementary, not competitive.
Non-contact. Camera only. SCIF deployable.
44 FACS Action Units from existing camera infrastructure. No device management. No physical attachment. Air-gap ready.
Structured briefings available for defence procurement, CISO, and intelligence teams. NDA available. Air-gapped demo environment on request.
What procurement teams ask when comparing tools
How is EchoDepth different from a polygraph?
Polygraph uses physical contact sensors to measure respiratory rate, skin conductance, and cardiovascular response — proxies for anxiety, not deception. The US National Academy of Sciences concluded in 2003 that polygraph lacks scientific validity for security screening, with false negative rates as high as 47%. EchoDepth analyses 44 FACS-compliant facial Action Units per frame, producing timestamped structured output mapped to VAD space. No physical contact. No specialist examiner. Audit-ready digital record.
How does EchoDepth relate to UEBA and SIEM platforms?
UEBA and SIEM monitor digital behaviour — file access, login anomalies, network activity. They have zero visibility into the human emotional state that typically precedes an insider incident. EchoDepth provides continuous emotional baseline profiling per individual and surfaces anomalies before they become digital events. The two systems are complementary: EchoDepth integrates directly into SIEM platforms via REST API, feeding the human-layer signal into existing security infrastructure.
Why use camera-based sensing instead of wearable sensors?
Wearable sensors require physical attachment, introducing friction in operational environments, consent complexity, and device management overhead. EchoDepth uses only a standard RGB camera, requires no contact, and works on existing CCTV or interview infrastructure. It extracts 44 FACS-compliant AU signals in real time without any sensor on the subject's body — and is fully air-gap and SCIF deployable.
Does EchoDepth replace the polygraph in vetting workflows?
EchoDepth is designed as a scientifically grounded complement or alternative to polygraph. It does not claim to detect lies — it surfaces involuntary physiological markers correlated with deception attempts, stress, and suppression. Whether it replaces or augments polygraph depends on the specific procurement framework. EchoDepth produces DSAT-compatible audit records and can be deployed alongside existing vetting processes or as a standalone capability under NDA.
Can EchoDepth integrate with our existing security stack?
Yes. EchoDepth exposes a REST API and WebSocket interface for real-time integration with SIEM, SOAR, GRC, and C2 platforms. Native integration support includes Splunk, Microsoft Sentinel, and IBM QRadar. Python and Node.js SDKs are available. All data is structured JSON with ISO 8601 timestamps, confidence weightings, and per-AU breakdown — compatible with standard SIEM alert schemas.
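A minimal sketch of consuming such a real-time stream: the function below parses one JSON message into an alert-ready summary, assuming each message carries an ISO 8601 timestamp, a per-AU intensity map, and a confidence weighting. The message shape and field names are assumptions for illustration, not the documented API:

```python
import json

def summarise_frame(message: str) -> dict:
    """Parse one hypothetical EchoDepth WebSocket message into a summary.

    Assumes each message carries an ISO 8601 timestamp, a per-AU intensity
    map, and a confidence weighting (field names are illustrative only).
    """
    frame = json.loads(message)
    aus = frame["action_units"]
    peak_au = max(aus, key=aus.get)  # most active Action Unit in this frame
    return {
        "timestamp": frame["timestamp"],
        "peak_au": peak_au,
        "peak_intensity": aus[peak_au],
        "confidence": frame["confidence"],
    }

# Example message, shaped per the assumptions above:
sample = ('{"timestamp": "2025-01-01T12:00:00Z", "confidence": 0.91, '
          '"action_units": {"AU01": 0.2, "AU04": 0.7}}')
summary = summarise_frame(sample)
```

In a live deployment the same parsing step would sit inside a WebSocket receive loop, with the resulting summary forwarded to the SIEM connector of choice.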