Digital Twins for People

Precision Wellbeing at Scale, with the Validity to Hold Up

A digital twin for people is a continuously updated computational model of an individual's psychological state, behavioural patterns, and context. It is the most powerful, and the most fragile, idea in modern wellbeing science. The AI-IARA framework is what keeps it from collapsing under its own weight.

Prof. Llewellyn E. van Zyl (Ph.D). Digital twins for precision wellbeing.
Digital Twin (Wellbeing)

A digital twin for wellbeing is a continuously updated computational representation of an individual's psychological state, behavioural patterns, and contextual environment, used to personalise interventions, predict trajectories, and inform decisions about that person's flourishing. Unlike a digital twin of a turbine or a factory floor, the object being modelled is a human being. The construct can drift, the data can be misread, the inferences can be wrong, and the person being modelled has rights the turbine does not. Industrial digital-twin thinking does not survive the move to people. People-grade digital twins need a different scaffolding: psychometric validity, consent architecture, contestability, and continuous drift monitoring.

If you are pitching a digital twin product without psychometric validation, measurement invariance, and a consent path, you are not pitching a digital twin. You are pitching surveillance with a friendly UI.
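To make the definition above concrete, here is a minimal sketch of per-person twin state in Python. It assumes a design where every inference carries its construct, the theoretical model it is scoped to, and its evidence, and where recording is gated by consent scope. All class and field names are illustrative, not a reference implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Inference:
    construct: str          # e.g. "work_engagement", scoped to a named model
    model: str              # e.g. "JD-R" -- the theory the construct comes from
    value: float
    evidence: list[str]     # data sources the inference was drawn from
    timestamp: datetime
    contested: bool = False  # set when the person disputes this inference

@dataclass
class WellbeingTwin:
    person_id: str
    consent_scopes: set[str] = field(default_factory=set)
    inferences: list[Inference] = field(default_factory=list)

    def record(self, inf: Inference) -> None:
        # Refuse to store inferences outside the consented scope:
        # a twin without a consent path is surveillance with a friendly UI.
        if inf.construct not in self.consent_scopes:
            raise PermissionError(f"no consent for construct {inf.construct!r}")
        self.inferences.append(inf)
```

The consent check lives in the write path on purpose: an unconsented inference is never stored and then filtered out later, it is never stored at all.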

The Method

AI-IARA. The framework that keeps a digital twin honest about people.

The same six capacities that govern AI assessments govern digital twins, with one change in emphasis. Digital twins live in the wild for years; the Drift and Contestability layers carry more weight than they do for one-shot assessments. The AI-IARA framework names what to watch and how to roll back.

The AI-IARA Framework

  • Awareness
  • Interpretation
  • Intention
  • Action
  • Relational Agency
  • Autonomy
The Validity Stack

Five layers a wellbeing digital twin must defend

Every digital-twin audit produces evidence at five layers, with extra weight on layers four and five because the twin is a longitudinal object. A twin that passes one layer but fails another is not deployable.

Step 01

Construct

Define the constructs the twin models in language an independent psychometrician can review. Wellbeing, engagement, resilience, burnout risk, and stress are not interchangeable. The twin must name the constructs, scope them to specific theoretical models (PERMA, SDT, JD-R, person-environment fit), and document the operationalisation choices.

Step 02

Calibration

Test measurement invariance across the demographic, cultural, and longitudinal contexts the twin will operate in. A twin that signals equivalent flourishing levels for two people who are not equivalently flourishing is producing systematic harm. Calibration evidence is the difference between a research prototype and a deployed system.

Step 03

Cohort

Validate the twin in samples that match the deployment population, not just the convenience cohort it was trained on. Test for differential validity by tenure, role, geography, language, and life stage. Without cohort-specific evidence, the deployment is generalising from the training set into the unknown.

Step 04

Drift

Digital twins live continuously. They are exposed to feedback contamination, proxy collapse, and population drift in ways one-shot assessments are not. Specify the signals you will watch (inter-rater divergence, KPI decoupling, response-pattern shifts), the thresholds that trigger pause or retraining, and the named owner with rollback authority. Annual re-validation is not enough.

Step 05

Contestability

Specify how the human represented by the twin can see the model's inferences, question them, and request correction. Contestability for digital twins is materially different from contestability for one-shot assessments because the model has memory; an uncorrected wrong inference compounds over years. The contestability layer is the audit defence and the ethics defence at the same time.
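The Drift step above can be sketched as a simple monitor: named signals checked against thresholds, with any breach surfacing a pause decision for the rollback owner. The signal names follow the text; the threshold values and function names are illustrative assumptions, not prescribed figures.

```python
# Pause thresholds per drift signal -- values are illustrative only.
DRIFT_THRESHOLDS = {
    "inter_rater_divergence": 0.15,  # model vs. human-rater disagreement rate
    "kpi_decoupling": 0.20,          # loss of correlation with outcome KPIs
    "response_pattern_shift": 0.10,  # population-level distribution shift
}

def check_drift(signals: dict[str, float],
                thresholds: dict[str, float] = DRIFT_THRESHOLDS) -> list[str]:
    """Return the names of signals that have crossed their pause threshold."""
    return [name for name, value in signals.items()
            if value > thresholds.get(name, float("inf"))]

def drift_decision(signals: dict[str, float]) -> str:
    breached = check_drift(signals)
    if not breached:
        return "continue"
    # Any breach pauses the twin pending review by the named owner with
    # rollback authority -- continuous monitoring, not annual re-validation.
    return f"pause: {', '.join(sorted(breached))}"
```

The point of the sketch is the shape, not the numbers: signals, thresholds, and the pause path must all be named in advance, so that "pause" is a pre-committed decision rather than a debate held after the breach.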
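The Contestability step has a natural data-structure consequence: because the twin has memory, a correction must never overwrite history, and it must identify every downstream inference derived from the wrong one. A minimal sketch, assuming an append-only log with explicit provenance links (all names hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class LoggedInference:
    inference_id: str
    construct: str
    value: float
    derived_from: list[str] = field(default_factory=list)  # provenance links
    superseded_by: str | None = None  # set by a correction, never deleted

class ContestableLog:
    """Append-only inference log: corrections supersede, never erase."""

    def __init__(self) -> None:
        self.entries: dict[str, LoggedInference] = {}

    def add(self, inf: LoggedInference) -> None:
        self.entries[inf.inference_id] = inf

    def correct(self, old_id: str, corrected: LoggedInference) -> list[str]:
        """Record a correction and return every downstream inference that
        must be re-derived -- a wrong inference with memory compounds."""
        self.entries[old_id].superseded_by = corrected.inference_id
        self.add(corrected)
        return [i.inference_id for i in self.entries.values()
                if old_id in i.derived_from]
</imports_placeholder>
```

Keeping the superseded entry in place is what makes the log both the audit defence and the ethics defence: the auditor can see what the twin believed and when, and the person can see that their contest actually changed the record.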

Proof Stack

The Authority Behind This Page

Every claim on this page is anchored in two or more independent proof types: peer-reviewed publications, third-party speaking engagements, formal standards, and named institutional roles.

Standards Cited

  • AERA, APA, and NCME Standards for Educational and Psychological Testing
  • ITC Guidelines on Psychological Testing
  • GDPR (EU) and UK GDPR special-category personal data provisions
  • EU AI Act, Annex III high-risk people-impact provisions
  • ISO/IEC 42001 AI Management Systems
  • PERMA, SDT, JD-R, and person-environment-fit theoretical models

Institutions

  • Optentia Research Unit, North-West University
  • Centre for Behavioural Engineering and Insight, University of Twente
  • Frontiers in Psychology, Editorial Board
  • Psynalytics (Chief Solutions Architect)
  • Springer Nature, Editorial Affiliations

Subscribe to the AI Psychology newsletter

Lessons from peer-reviewed publications, deployed audits, and live case studies on why most people-impact AI fails, including what wellbeing digital twins miss most often.