The EchoDepth Engine

Facial Action Units.
Decoded at Scale.

FACS-compliant. Camera-only. API-first. On-premise deployable. EchoDepth extracts 44 facial Action Units per frame in near real-time — producing structured VAD output suitable for any defence, intelligence, or security integration.

Processing Pipeline

How EchoDepth Works: Five-Stage Processing Architecture

01

Camera Ingestion

Standard RGB camera feed — existing CCTV, laptop webcams, or dedicated hardware. Minimum 720p. No infrared, no specialist equipment required.

02

Face Detection & Landmark Mapping

68-point facial landmark detection at 30fps+. Robust to partial occlusion, varied lighting, and head rotation up to ±30 degrees.
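
For orientation, the open-source dlib library implements the same 68-point landmark technique this stage describes. The sketch below is illustrative only, not EchoDepth's internal code; the predictor model file is dlib's standard 68-point model, downloaded separately.

```python
# Illustrative 68-point landmark detection with dlib -- the technique
# stage 02 describes, not EchoDepth's internals. Assumes the standard
# dlib predictor file is present alongside the script.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    # 68 (x, y) landmark coordinates for this face
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```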

03

Action Unit Extraction

FACS-compliant analysis of 44 facial Action Units — muscle group activations mapped to the Facial Action Coding System. Intensity scored 0–5 per AU per frame.

04

VAD Mapping & Temporal Smoothing

AU combinations mapped to Valence–Arousal–Dominance space. Temporal smoothing filters out transient noise and surfaces sustained state changes.
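
As a minimal sketch of what this stage computes: assuming a linear AU-to-VAD projection followed by exponential-moving-average smoothing, the mapping fits in a few lines. The weights and smoothing factor below are illustrative placeholders, not EchoDepth's calibrated values.

```python
# Minimal sketch of stage 04, assuming a linear AU-to-VAD projection and
# exponential-moving-average smoothing. Weights and alpha are placeholders.
import numpy as np

AU_TO_VAD = np.random.default_rng(0).normal(size=(3, 44)) * 0.1  # placeholder weights
ALPHA = 0.2  # smoothing factor: lower = heavier smoothing

def smooth_vad(au_frames, alpha=ALPHA):
    """au_frames: iterable of length-44 AU intensity vectors (0-5 per AU)."""
    state = None
    for aus in au_frames:
        vad = AU_TO_VAD @ np.asarray(aus)  # raw (valence, arousal, dominance)
        state = vad if state is None else alpha * vad + (1 - alpha) * state
        yield state  # smoothed (V, A, D) for this frame
```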

05

API & Integration Layer

REST API, WebSocket stream, or direct SDK. Structured JSON per frame, per second, or aggregated per session. Webhook alerting for threshold events.
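
From the consumer side, reading per-frame JSON off the WebSocket stream might look like the following. The endpoint URL and message fields are assumptions for illustration; the API documentation defines the real contract.

```python
# Hedged sketch of consuming stage 05 output over WebSocket.
# Endpoint URL and field names are hypothetical.
import asyncio
import json
import websockets  # pip install websockets

async def stream_frames(uri="ws://localhost:8080/v1/stream"):  # hypothetical endpoint
    async with websockets.connect(uri) as ws:
        async for message in ws:
            frame = json.loads(message)
            print(frame.get("timestamp"), frame.get("vad"))

asyncio.run(stream_frames())
```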

Deployment Specifications

How is EchoDepth deployed in classified environments?

🔒 Air-Gap Ready

Full on-premise Docker deployment. SCIF-compatible. Zero external data transmission at any stage of processing.

⚡ ~700ms Latency

Sub-second emotional state output suitable for real-time alerting and live interview support applications.

🌐 API-First Design

REST and WebSocket. SDK in Python and Node.js. Integrates with SIEM, LMS, access control, and C2 platforms.

⚖ UK Data Residency

All data processed within UK borders as standard. Pseudonymised by default. Full audit logging. UK GDPR compliant.

Technical Specifications

Action Units: 44 FACS-compliant
Latency: ~700ms
Min. camera: 720p RGB
Deployment: Docker / On-premise
Output format: JSON / REST / WS
SDK languages: Python · Node.js

Technical Questions

Frequently Asked

What camera hardware does EchoDepth require?

EchoDepth requires a standard RGB camera at minimum 720p resolution. Existing CCTV infrastructure, interview room cameras, laptop webcams, and IP cameras all qualify. No infrared, no thermal imaging, and no specialist hardware is required. The system operates on standard server hardware with no proprietary components.
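
A quick pre-flight check, assuming OpenCV and a camera at index 0, confirms an existing feed meets the 720p minimum:

```python
# Illustrative pre-flight check: does the local feed meet the 720p minimum?
# Camera index 0 is an assumption; substitute your CCTV/IP stream source.
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("no camera at index 0")

width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
cap.release()

print(f"{width:.0f}x{height:.0f}:", "OK" if height >= 720 else "below 720p minimum")
```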

What is the EchoDepth API output format?

EchoDepth outputs structured JSON per frame, per second, or aggregated per session — via REST API or WebSocket stream. Each output object contains per-AU intensity scores (0–5), VAD composite scores (Valence, Arousal, Dominance), derived state classifications (stress, deception, fatigue, engagement), and a full timestamp. SDK libraries are available in Python and Node.js.
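
A hypothetical per-frame payload, shaped after the fields listed above. Key names are assumptions; the API documentation defines the real schema.

```python
# Hypothetical per-frame payload modelled on the fields described above.
# Key names are assumptions, not the published schema.
frame = {
    "timestamp": "2024-05-01T10:15:02.417Z",
    "action_units": {"AU01": 1.2, "AU04": 3.8, "AU12": 0.4},  # intensity 0-5, 44 AUs in full output
    "vad": {"valence": -0.31, "arousal": 0.72, "dominance": 0.18},
    "states": {"stress": 0.81, "deception": 0.44, "fatigue": 0.12, "engagement": 0.57},
}
```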

Can EchoDepth be deployed in an air-gapped or SCIF environment?

Yes. EchoDepth is fully containerised via Docker and deployable on-premise with zero external data transmission at any stage. No cloud dependency, no telemetry, no outbound network calls. All inference runs locally on standard server hardware. UK data residency is the default, and an air-gapped demo environment is available on request for vetted procurement teams.

What SIEM and integration platforms does EchoDepth support?

EchoDepth integrates with Splunk, Microsoft Sentinel, IBM QRadar, and other SIEM platforms via REST API and WebSocket. SOAR playbook webhook triggers, GRC risk feed integration, LMS connectors (Moodle, SAP SuccessFactors, Cornerstone), and C2 platform interfaces are all supported. Full API documentation and integration support are available under NDA.
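
As a sketch of the webhook side, a minimal threshold-event receiver needs only the Python standard library. The payload shape is an assumption; in practice the handler would forward events into SIEM/SOAR ingestion rather than print them.

```python
# Minimal threshold-event webhook receiver using only the standard library.
# Payload shape is an assumption; route events into your SIEM/SOAR pipeline.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body)
        print("threshold event:", event.get("session_id"), event.get("states"))
        self.send_response(204)  # acknowledge with no body
        self.end_headers()

HTTPServer(("0.0.0.0", 9000), AlertHandler).serve_forever()
```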

Briefings Available

See What Your Security Stack Is Missing.

Structured technical briefings for defence procurement, security leadership, and intelligence teams. NDA available. Air-gapped demo environment on request.

DEFENCE@CAVEFISH.CO.UK  ·  CARDIFF, WALES  ·  UK DATA RESIDENCY STANDARD