Ethics and Assurance Training

Ethics and assurance training is a practical capability-building programme that equips psychologists, designers, and engineers to build AI systems that affect people with defensible measurement, non-manipulative interaction design, and operational governance. This is not ethics as abstract principle. It is ethics as buildable requirements, evidence, and controls.

Prof. Llewellyn E. van Zyl (Ph.D) — Ethics and Assurance Training

As featured on

702
American Psychological Association
BBC
Beeld
Forbes
Frontiers in Psychology
HR Square
Inspiring
IPPA
Medium
Mindful
NWU Optentia
Psynalytics
Psychology Today
SABC 3
SIOPSA
Welcome to the Jungle
Zorgvisie

About This Service

Many teams can optimise model performance but cannot defend the human claims their systems imply, the trade-offs their interfaces create, or the governance needed after launch. The result is avoidable harm, reputational exposure, and compliance risk. The programme delivers a shared language across disciplines for what "safe" means, repeatable workflows to translate values into measurable requirements, templates that make assurance operational, and hands-on application to your real system.

The programme is structured in two tiers. Core modules cover foundations for people-impact AI, assurance essentials, psychometrics for product teams, fairness and subgroup evaluation, human-centred design for high-stakes decisions, and governance basics. Advanced modules address AI-IARA capability design, behavioural model selection, generative AI safety, and monitoring and early-warning governance.

Get in Touch
Model

The TRAIN Model

Six capability areas we build across your team.

01

Truthful claims and scope discipline

Define what the system is allowed to claim, what it must not claim, and where human oversight becomes mandatory.

02

Responsible human model design

Build ethically constrained user models using data minimisation, purpose limitation, and uncertainty-aware reporting.

03

Measurement integrity for human attributes

Establish construct clarity, validity logic, reliability limits, and no-false-precision reporting for any system that measures psychological outcomes.
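To make "reliability limits" and "no-false-precision reporting" concrete: a minimal sketch, assuming Likert-style item scores, that estimates internal consistency (Cronbach's alpha) and reports a score as a band of plus or minus one standard error of measurement rather than a falsely precise point. Function names and the band rule are illustrative, not part of the programme materials.

```python
import math


def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns of equal length."""
    k = len(items)               # number of items in the scale
    n = len(items[0])            # number of respondents

    def var(xs):                 # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))


def report_band(score, sd, alpha):
    """Report a score as a +/- 1 SEM band instead of a single point value."""
    sem = sd * math.sqrt(1 - alpha)
    return (round(score - sem, 1), round(score + sem, 1))
```

The point of the band is behavioural: downstream interfaces consume an interval, so they cannot display (or act on) more precision than the measurement supports.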

04

Fairness that survives heterogeneity

Learn subgroup-aware evaluation, bias mechanisms, and practical remediation strategies rather than single-number fairness theatre.
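The contrast with "single-number fairness theatre" can be sketched in a few lines: compute the metric per subgroup and look at the worst gap, rather than one aggregate figure that can average a disparity away. This is an illustrative sketch with hypothetical function names, using true-positive rate as the example metric.

```python
def subgroup_rates(y_true, y_pred, groups):
    """Per-group true-positive rate; an aggregate alone can hide disparities."""
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        pos = [i for i in idx if y_true[i] == 1]   # actual positives in group
        out[g] = sum(y_pred[i] for i in pos) / len(pos) if pos else float("nan")
    return out


def max_tpr_gap(rates):
    """Largest between-group gap: the number a remediation plan must shrink."""
    vals = [v for v in rates.values() if v == v]   # drop NaN groups
    return max(vals) - min(vals)
```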

05

Psychologically safe interaction design

Design for agency and comprehension, avoid coercive choice architectures, and implement contestability, override, and opt-out as first-class features.

06

Governance, monitoring, and incident readiness

Operationalise risk registers, change control, documentation packs, post-deployment monitoring, and clear stop-or-fix escalation rules.
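One way to make a risk register and "stop-or-fix" escalation operational rather than documentary is to encode each entry with an explicit decision rule. A minimal sketch; the field names, scoring scheme, and thresholds below are illustrative assumptions, not the programme's templates.

```python
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    risk_id: str
    description: str
    severity: int        # 1 (minor) .. 5 (critical)
    likelihood: int      # 1 (rare)  .. 5 (frequent)
    owner: str
    mitigations: list = field(default_factory=list)

    def score(self):
        return self.severity * self.likelihood

    def escalation(self, stop_threshold=16, fix_threshold=9):
        """Map the score to a stop-or-fix decision (thresholds illustrative)."""
        s = self.score()
        if s >= stop_threshold:
            return "stop"
        if s >= fix_threshold:
            return "fix"
        return "monitor"
```

Because the rule is code, not prose, change control can version it and post-deployment monitoring can trigger it automatically.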


Deliverables

What You Receive

Practical tools and templates your team can use immediately after the programme.

Separate playbooks for psychology, design, and engineering roles, plus a shared cross-functional workflow.

Process

Our Approach

A structured, step-by-step methodology tailored to every engagement.

Step 01

Skills Diagnostic

Map your system’s human impact, claims, and risk profile. Identify skill gaps by role across psychology, design, engineering, and governance.

Step 02

Curriculum Tailoring

Select modules and exercises that match your actual product architecture and deployment constraints.

Step 03

Hands-On Workshops

Teams apply methods to their own system artefacts, with facilitated design reviews and evidence checks.

Step 04

Assurance Simulation

Run a mock internal audit: evidence map, risk register, documentation pack, and remediation priorities.

Step 05

Embed and Hand Over

Deliver templates, checklists, and governance routines so capability persists after the workshops.

Audience

Who This Is For

This service is designed for organisations and teams navigating the intersection of AI, people, and accountability.

Psychologists and Assessment Leads

Professionals supporting AI-mediated decisions who need practical evaluation skills.

Product Designers

Designers building high-trust and high-stakes user experiences with AI systems.

ML and Engineering Teams

Engineers deploying models into production who need psychometric literacy and governance skills.

Governance and Compliance Teams

Teams building internal AI controls and preparing for regulatory requirements.

Executive Leadership

Leaders who need a board-ready assurance narrative and decision discipline for AI systems.

Standards

Frameworks & Standards

Every engagement is anchored to recognised standards and frameworks for accountability and rigour.

Engagement model: 1 to 2 days covers foundations for cross-functional teams; 3 to 5 days delivers the full programme with system-specific labs and an assurance simulation. A multi-cohort option is available for large organisations, with follow-up clinics for implementation support.

Ready to Get Started?

Let’s discuss how this service can support your needs.