The question is not whether emotions can be measured — it is whether the measurement framework is precise enough to be operationally useful. Discrete emotion labels — happy, angry, fearful — were developed for cross-cultural psychology experiments in which subjects posed expressions on request. Defence environments present a different problem: blended, suppressed, and contextually modified emotional states in high-stakes interactions where the consequences of classification error are significant. Dimensional emotion models are the scientific response to that problem.
From categories to dimensions: the scientific shift
Paul Ekman's influential work in the 1960s and 1970s established that certain facial expressions — his six basic emotions — are recognised cross-culturally. This finding was important: it demonstrated a biological component to emotional expression. The error was in assuming that six discrete categories could adequately represent the space of human emotional experience, and that posed laboratory expressions would generalise to naturalistic behaviour in operational settings.
The scientific literature moved on from pure categoricalism through the 1980s and 1990s. James Russell's 1980 circumplex model demonstrated that emotions organise themselves in a two-dimensional space — not in discrete clusters — with Valence and Arousal as the primary axes. Adjacent emotions in the circumplex share similar dimensions: anxiety and excitement are both high-arousal, but differ in valence. This was a fundamental reconceptualisation: emotions are not discrete natural kinds but regions in continuous dimensional space.
Mehrabian extended this to three dimensions, adding Dominance as a third axis. Lisa Feldman Barrett's later work on core affect and the theory of constructed emotion further challenged categoricalism, proposing that discrete emotion categories are culturally constructed labels imposed on continuous affective experience. The scientific consensus has moved decisively toward dimensional frameworks — a movement that has profound implications for how emotion recognition systems should be designed.
The three principal dimensional frameworks
Russell's circumplex model. Two-dimensional circular arrangement of emotions. Well-validated, widely used in affective computing. Limitation: no Dominance axis, so fear and anger share similar coordinates despite being operationally distinct.
Mehrabian's PAD model. Three-dimensional framework. The Dominance axis is the critical addition for security applications — it distinguishes fear from anger, confidence from agitation, compliance from contempt. EchoDepth's primary output framework.
Barrett's theory of constructed emotion. Proposes that discrete emotion labels are constructed interpretations of core affective states. Reinforces the dimensional position. Influential in clinical and cognitive neuroscience contexts.
Why Dominance is the operationally critical dimension
The circumplex model — Valence and Arousal only — is insufficient for defence and security applications because it cannot distinguish states that share similar valence and arousal coordinates but are operationally distinct. The most important example is the fear-anger pairing.
Fear and anger both involve negative valence and high arousal. On a two-dimensional circumplex, they occupy similar space. In operational reality, they are fundamentally different: fear involves low perceived control (low Dominance); anger involves high perceived control (high Dominance). The implications for credibility assessment, threat assessment, and personnel monitoring are significant. A subject who is genuinely frightened responds differently — in their behaviour, their decision-making, their physiological state — from a subject who is angry. A system that cannot distinguish these states is operationally compromised.
The same distinction applies to: sadness vs contempt (similar Valence, opposite Dominance); confidence vs excitement (similar Arousal, different Valence and Dominance); and anxiety vs determination (similar Arousal, opposite Dominance). In each case, the third dimension — Dominance — is what makes the distinction. EchoDepth's VAD output provides all three dimensions as continuous values per frame.
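The operational value of the third axis can be shown with a toy calculation. The coordinates below are invented for illustration; they are not EchoDepth's calibrated values.

```python
# Toy demonstration of why the Dominance axis matters operationally.
# Coordinates are invented for illustration, not calibrated values.

def vad_distance(a, b):
    """Euclidean distance between two emotion coordinates."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

fear  = (-0.6, 0.7, -0.5)   # negative, activated, low perceived control
anger = (-0.6, 0.7, +0.6)   # negative, activated, high perceived control

# On Valence and Arousal alone, the two states coincide exactly...
print(vad_distance(fear[:2], anger[:2]))    # 0.0
# ...but the Dominance axis cleanly separates them.
print(round(vad_distance(fear, anger), 2))  # 1.1
```

A two-dimensional system sees these states as identical; only the third coordinate tells them apart.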
Blending, suppression, and the limits of categorical detection
Two further failure modes of categorical classifiers are particularly acute in defence intelligence contexts.
Blended states
Emotional states in operational environments are routinely blended. An individual being assessed for security clearance may simultaneously experience anxiety about the process, concentration on their responses, and residual hostility about the situation. These are not sequential — they co-occur, producing a facial configuration that activates multiple muscle groups in patterns that do not map to any single discrete emotion. A categorical classifier forces an assignment from its fixed label set. A dimensional model represents the blend as a coordinate: V: −0.38, A: +0.62, D: +0.19 — negative, activated, slightly in control — without falsely resolving the ambiguity.
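One way to picture how a blend resolves to a single coordinate is as a weighted combination of component states. The component coordinates and weights below are hypothetical, chosen only to illustrate the representation.

```python
# Hypothetical illustration: a blended state resolved to one VAD
# coordinate. Component coordinates and weights are invented.

components = {
    # state: ((valence, arousal, dominance), weight)
    "anxiety":       ((-0.5, 0.7, -0.3), 0.5),
    "concentration": (( 0.1, 0.5,  0.4), 0.3),
    "hostility":     ((-0.6, 0.6,  0.5), 0.2),
}

def blend(states):
    """Weight-average component VAD triples into a single coordinate."""
    total = sum(w for _, w in states.values())
    return tuple(
        round(sum(vad[i] * w for vad, w in states.values()) / total, 2)
        for i in range(3)
    )

print(blend(components))  # (-0.34, 0.62, 0.07)
```

The output is a single defensible coordinate; no component of the blend is discarded to force a categorical label.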
Suppressed states
Suppression is the active inhibition of an emotional expression after its initiation. It produces a characteristic temporal pattern: partial Action Unit activation (the beginning of an expression) followed by rapid inhibition — Ekman's micro-expression, typically lasting 40–200ms. This pattern is a VAD trajectory event: a transient excursion into the emotional state's VAD coordinates followed by rapid return to neutral. It is detectable in VAD time-series analysis across frames. It is invisible to a discrete frame-by-frame classifier because the expression does not persist long enough to be classified, and its temporal structure — the signal — is discarded.
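As a sketch of how such a trajectory event can be detected, the following flags transient excursions from neutral that resolve within the micro-expression window. The frame rate, threshold, and neutral-origin assumption are illustrative, not EchoDepth's implementation.

```python
# Sketch of trajectory-based suppression detection: flag transient
# excursions from neutral that resolve within the micro-expression
# window. Frame rate, threshold, and the neutral-origin assumption
# are illustrative, not EchoDepth's implementation.

FPS = 30
FRAME_MS = 1000 / FPS

def excursions(series, threshold=0.3):
    """Yield (start_frame, duration_ms) for each run of frames whose
    VAD point departs from neutral (0, 0, 0) by more than threshold."""
    start = None
    for i, (v, a, d) in enumerate(series + [(0.0, 0.0, 0.0)]):  # sentinel closes a trailing run
        active = (v * v + a * a + d * d) ** 0.5 > threshold
        if active and start is None:
            start = i
        elif not active and start is not None:
            yield start, round((i - start) * FRAME_MS)
            start = None

def is_micro_expression(duration_ms):
    """Ekman's micro-expression window: roughly 40-200 ms."""
    return 40 <= duration_ms <= 200

# Two near-neutral frames, a three-frame (100 ms) fear-like excursion, neutral again.
track = [(0.0, 0.05, 0.0), (0.02, 0.0, 0.0),
         (-0.5, 0.6, -0.4), (-0.55, 0.65, -0.45), (-0.5, 0.6, -0.4),
         (0.0, 0.05, 0.0)]
for start, dur in excursions(track):
    print(start, dur, is_micro_expression(dur))  # 2 100 True
```

A frame-by-frame classifier sees six frames, none classifiable with confidence; the trajectory view sees one 100 ms event.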
"Discrete categories are a convenient fiction. They help us communicate about emotional experience, but they do not correspond to discrete natural kinds in the brain, the body, or the face. The underlying reality is dimensional."
— Lisa Feldman Barrett, How Emotions Are Made (2017)

Dimensional models and auditability in UK defence contexts
UK defence and intelligence procurement increasingly requires AI systems to be explainable, auditable, and challengeable. A system that produces a label — "deceptive", "hostile", "stressed" — cannot satisfy these requirements. The label is an assertion; it cannot be decomposed, reviewed, or legally defended.
A system that produces dimensional output — V: −0.52, A: +0.81, D: +0.14 at timestamp 14:32:01.083Z, derived from AU4, AU5, AU7, AU17, AU20 at confidence 0.91, representing a −0.68 Valence deviation and +0.54 Arousal deviation from individual baseline — produces a documentable, reviewable, challengeable finding. The evidence chain is complete. This is not a minor procedural preference: in personnel security contexts, the difference between an assertion and documented evidence is the difference between a defensible finding and a challenge that cannot be answered.
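A finding of this shape can be serialised as a self-describing record. The field names below are invented to illustrate the structure; they are not EchoDepth's published schema.

```python
import json

# Hypothetical per-frame record showing the shape of an auditable
# evidence chain. Field names are invented; they are not EchoDepth's
# published schema.

frame = {
    "timestamp": "14:32:01.083Z",
    "vad": {"valence": -0.52, "arousal": 0.81, "dominance": 0.14},
    "evidence": {
        "action_units": ["AU4", "AU5", "AU7", "AU17", "AU20"],
        "confidence": 0.91,
    },
    "baseline_deviation": {"valence": -0.68, "arousal": 0.54},
}

record = json.dumps(frame, indent=2)

# The record round-trips losslessly, so a finding can be re-derived
# and challenged at review time.
assert json.loads(record) == frame
print(record)
```

Every assertion in the record is decomposable: the scores, the Action Unit evidence behind them, and the deviation from baseline are all separately reviewable.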
EchoDepth's structured JSON output per frame is designed with this requirement as a primary constraint. Every dimensional score is timestamped, confidence-weighted, and traceable to the specific Action Unit evidence that produced it. See the full VAD methodology reference for output format detail.
EchoDepth's implementation of dimensional emotion modelling
EchoDepth implements the full three-dimensional VAD model as its primary representational output, connected to a 44-AU FACS measurement layer through a learned AU-to-VAD mapping. The mapping accounts for AU combinations, relative intensities, temporal sequencing, and cultural calibration across 14 cohorts — producing VAD triples per frame at up to 30fps with ~700ms end-to-end latency.
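As a rough intuition for what an AU-to-VAD mapping does, a toy linear projection can stand in for the learned model. The weights below are invented; the real mapping is learned, non-linear, and culturally calibrated.

```python
# Toy stand-in for an AU-to-VAD mapping: a linear projection of AU
# intensities onto the three dimensions. The weights are invented;
# the real mapping is learned, non-linear, and culturally calibrated.

# Hypothetical weights for a handful of the 44 AUs (columns: V, A, D).
WEIGHTS = {
    "AU4":  (-0.4, 0.3,  0.2),   # brow lowerer
    "AU5":  (-0.1, 0.5, -0.2),   # upper lid raiser
    "AU12": ( 0.6, 0.2,  0.3),   # lip corner puller
    "AU20": (-0.3, 0.4, -0.4),   # lip stretcher
}

def au_to_vad(intensities):
    """Project AU intensities (0-1) onto a VAD triple."""
    vad = [0.0, 0.0, 0.0]
    for au, x in intensities.items():
        for i, w in enumerate(WEIGHTS.get(au, (0.0, 0.0, 0.0))):
            vad[i] += w * x
    return tuple(round(c, 2) for c in vad)

# A fear-like configuration: lowered brow, raised upper lid, stretched lips.
print(au_to_vad({"AU4": 0.8, "AU5": 0.6, "AU20": 0.5}))  # (-0.53, 0.74, -0.16)
```

The point of the sketch is the direction of inference: measured muscle activity in, continuous dimensional coordinates out, with no categorical label forced at any stage.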
Downstream of the VAD output layer, three analytical functions operate: anomaly detection against individual baselines, trajectory analysis for suppression detection, and readiness scoring for operator state monitoring. In each case, the dimensional representation is what enables the analytical function — discrete labels would not support baseline deviation scoring, trajectory analysis, or continuous readiness monitoring in any operationally useful form.
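Baseline-deviation scoring of the kind described can be sketched as per-dimension z-scores against an individual's baseline history. The statistics and data below are assumptions, not EchoDepth's anomaly model.

```python
from statistics import mean, stdev

# Minimal sketch of baseline-deviation scoring: per-dimension z-scores
# against an individual's baseline history. The statistics and data
# are assumptions, not EchoDepth's anomaly model.

def deviation_scores(baseline, frame):
    """z-score of the current VAD triple against per-dimension history."""
    scores = []
    for dim in range(3):
        history = [triple[dim] for triple in baseline]
        mu, sigma = mean(history), stdev(history)
        scores.append((frame[dim] - mu) / sigma if sigma else 0.0)
    return scores

# Hypothetical calm baseline for one individual, then an anomalous frame.
baseline = [(-0.10, 0.20, 0.00), (0.00, 0.30, 0.10),
            (-0.05, 0.25, 0.05), (0.05, 0.15, -0.05)]
current = (-0.52, 0.81, 0.14)

v_z, a_z, d_z = deviation_scores(baseline, current)
print(f"valence z={v_z:.1f}, arousal z={a_z:.1f}, dominance z={d_z:.1f}")
```

Note that the score is relative to the individual, not to a population norm: the same frame could be unremarkable against a different person's baseline.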
For procurement and technical evaluation detail, see the EchoDepth technical architecture page. For the specific VAD framework reference, see VAD emotion model: valence, arousal and dominance explained.
Dimensional emotion output built for defence procurement
FACS-grounded. VAD-dimensioned. Auditable per-frame JSON. SCIF-deployable. UK data residency by default.