AI Governance

When Digital Twins Become Digital Liabilities

A digital twin is a continuous model of a person. Built without psychometric scaffolding, it accumulates errors for years before anyone notices. The audit-defensible alternative is named, and it is not subtle.

by Prof. Llewellyn E. van Zyl (Ph.D) · 2 May 2026 · 3 min read


Key Takeaways

  • A digital twin without psychometric validation is not a digital twin. It is surveillance with a friendly UI.
  • Digital twins live for years. The drift and contestability layers carry more weight than they do for one-shot assessments.
  • The legal exposure compounds with the model: a wrong inference that no one corrected for two years is a much bigger problem than a wrong score on one test.

The seductive idea, and the catch

A digital twin for a person is the most powerful idea in modern wellbeing science: a continuously updated computational representation of an individual, spanning their psychological state, behavioural patterns, contextual environment, and history, used to personalise interventions, predict trajectories, and inform decisions. The vision is precise care at population scale, finally.

The catch is that the object being modelled is a human being. The model can be wrong about a person in ways it can never be wrong about a turbine. And the wrongness compounds over years.

Industrial digital-twin thinking does not transfer

Digital twins were born in industrial settings. Model a turbine, predict bearing wear, schedule maintenance. The constructs are well-defined (RPM, temperature, vibration). The system has no rights. The errors are recoverable.

None of that is true for a wellbeing digital twin. Flourishing is a contested theoretical construct. The person being modelled has GDPR rights, the right to be wrong about themselves, and the right to disagree with the model. Errors are not recoverable in the way a worn bearing is.

The full reframing is at the wellbeing digital twins hub.

Three failure modes that turn a twin into a liability

Three patterns recur across audits this site has run.

  1. The twin shapes the person it claims to model. Interventions delivered by the twin change the person's behaviour. The new behaviour feeds back into the twin. The construct migrates. A twin that started out modelling flourishing ends up modelling engagement with the twin itself. This is detectable, yet almost no deployed twins are monitored for it.
  2. The wrong inference is invisible. The twin scores the person privately. The person never sees the inference, cannot question it, cannot correct it. The wrongness compounds for two years. Then it surfaces in a decision the person cannot trace back to the model.
  3. Consent does not match the model's memory. The person consented to one datastream. The twin retained the inferences derived from it. The person revokes consent. The vendor deletes the raw data and keeps the inferences. Under GDPR, that is non-compliance. Under any meaningful ethics framework, it is a breach.

The audit-defensible alternative

Five layers, on a continuous cadence, with a named owner per layer. Construct, Calibration, Cohort, Drift, Contestability. The framework is AI-IARA, and the discipline behind it is AI psychology. The Drift and Contestability layers do most of the heavy lifting for digital twins because the twin is longitudinal, not one-shot.

An audit-defensible twin documents what it claims to model, proves it scores equivalently across people, validates in the deployment cohort, names the drift signals and rollback owner, and gives the modelled person a procedural path to question and correct the model. Twins that have one or two of those are research prototypes. Twins that have all five are the only kind that should be near a real wellbeing programme.
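The "one or two layers versus all five" rule can be stated as a thumbnail checklist. This is a deliberate simplification (the real AI-IARA audit is a procedural evidence review per layer, not a boolean flag), but it makes the classification explicit:

```python
# The five Validity Stack layers named in the framework.
LAYERS = ("construct", "calibration", "cohort", "drift", "contestability")

def classify_twin(evidence):
    """Classify a twin by how many layers have documented evidence.

    `evidence` maps layer name -> bool. Simplified sketch: in practice each
    layer's evidence is reviewed by a named owner, not reduced to True/False.
    """
    covered = sum(bool(evidence.get(layer, False)) for layer in LAYERS)
    if covered == len(LAYERS):
        return "audit-defensible"
    if covered >= 1:
        return "research prototype"
    return "surveillance with a friendly UI"
```

Note that the middle band is wide on purpose: four out of five is still a research prototype, because the missing layer is usually the one that surfaces the liability.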

What to do today

If you are evaluating a wellbeing digital twin vendor, ask for evidence at the five Validity Stack layers. If the vendor does not produce all five, the deployment is not defensible. If you have a twin already in production and you cannot answer the contestability question (how does the modelled person disagree with the model), pause feature work and fix that first. Everything else compounds the problem.
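What "fix the contestability question first" means mechanically: a contested inference must be quarantined from every downstream decision until a named reviewer resolves it. A minimal sketch of that path (hypothetical structures, not a vendor API):

```python
from dataclasses import dataclass, field

@dataclass
class Inference:
    """One inference the twin has derived about the modelled person."""
    claim: str
    contested: bool = False

@dataclass
class TwinRecord:
    """Sketch of a contestability path: the modelled person can contest an
    inference, and contested inferences are excluded from downstream
    decisions until reviewed."""
    inferences: list = field(default_factory=list)

    def contest(self, index):
        # The modelled person disputes inference `index`.
        self.inferences[index].contested = True

    def usable_inferences(self):
        # Only uncontested inferences may feed decisions.
        return [i for i in self.inferences if not i.contested]
```

The essential property is that contesting is a state change in the record itself, not a support ticket sitting outside the model.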

If you are building, run the AI-IARA self-assessment against your design before you ship more datastreams. It takes about 15 minutes and produces a risk dashboard.

Prof. Llewellyn E. van Zyl (Ph.D)

Chief Solutions Architect

Psynalytics

Prof. Llewellyn E. van Zyl (Ph.D) is the leading voice in AI psychology. He designs, measures, and assures AI systems that make decisions about human beings.
