
The Quiet Erosion of Competence: How AI is Making Experts Forget How to Think

In a world where smart machines assist every decision, are we quietly losing the ability to think for ourselves?

by Prof. Llewellyn E. van Zyl (Ph.D) · 18 May 2025 · 5 min read


Key Takeaways

  • AI-induced skill decay is happening in cognitively demanding professions where AI handles the parts of the task people previously mastered.
  • Experts may remain unaware of their declining skills, mistaking AI-augmented performance for personal competence.
  • Learners who train with AI may perform well initially but fail to build deep, transferable knowledge.
  • AI tools can create illusions of understanding—feelings of confidence without capability.
  • The research community must urgently study how, when, and why AI weakens human cognition and design safeguards that preserve expertise.

Why You Should Care

In the age of "smart everything," knowing how we think is just as important as what we do. This article is a warning not against AI, but against the quiet erosion of our most human capability: the ability to understand and decide. It calls for a radical rethink of what counts as expertise in an AI-augmented world. Not just what people can do, but what they still understand.

Introduction

I once heard about a piano player who became famous for his brilliant ability to improvise music on the spot. He would sit down at the piano, close his eyes, and just start playing. He would literally lose himself in the music. But one night after a performance, an audience member asked him how he had learned to play purely by intuition. "Easy," he said. "I used to practice for hours. Now, I just play what the piano tells me."

This may sound absurd, until you realize that the same trend is happening all around us, and it's quite concerning.

Radiologists are being trained with image classifiers that flag potential tumours before they even look at the scan themselves. Surgeons receive mid-operation suggestions from the machine-learning systems that guide their tools, almost like a GPS for the human body. As an outsider, I marvel at the speed, the precision, the convenience of these tools. And we should. But there's something else happening here, something quieter, something darker.

Our practical skills are fading.

Not because we're getting dumber, but because the machines are getting too good. They're so good, in fact, that we no longer notice the gap between what we think we know and what we've handed over.

The Automation Paradox

This story starts not with AI, but with autopilot. In the early days of aviation, flying a plane required pilots to pay relentless attention to every small detail of the flight. Then came automation systems that kept the wings level, managed the altitude, and, more recently, even landed the plane for them. Pilots got a break. Fatigue dropped. Safety improved. But something strange started to happen: in emergencies, pilots began making more mistakes. Their manual flying skills had quietly rusted away, and by the time they were needed, they were no longer there.

Now take that logic and apply it to AI. But this time, it's not about keeping wings level. It's about decision-making. Pattern recognition. Judgment. What's at risk isn't dexterity or cognitive flexibility—it's expertise.

And expertise, as it turns out, is quite a fragile commodity.

The Disappearing Act

Skill decay isn't new, and it has been studied for decades: when we stop practicing something, we forget it. But AI introduces a new twist to this tale. We don't stop doing the task. We just stop thinking through it.

  • You can still diagnose the illness, but the AI has already highlighted the abnormality for you.
  • You can still perform the surgery, but the AI has pre-planned the entire path you have to take.
  • You can still make the final call, but the recommended next action is already blinking on your screen.

In other words, you're performing. But you're not learning. You're not refining your internal map of the problem. You're following a breadcrumb trail left by someone or something else.

And here's the kicker: you feel smarter—but you're actually getting dumber.

And that's the danger here. Because when skill decay happens ever so subtly, when it hides behind high performance, it becomes almost impossible to catch. Well—until the system goes down. Until you're asked to make a decision on your own. Until there's a problem the AI has never seen before.

The Illusion of Mastery

But it's not just experts whose skills are declining. Students are in trouble too. A medical student trained with an AI diagnostic tool might outperform their peers on assessments. But what happens when the tool is gone? How deep was their understanding of the diagnostic process in the first place? Did they learn to see, or just to trust what the machine says?

This is what psychologists call the illusion of explanatory depth: the belief that we understand something better than we actually do. AI doesn't just assist us; it amplifies that illusion. It hands us the right answer, wrapped in confidence and neatly dressed in data, but it never asks us to explain why it's right.

We used to struggle with problems to understand them. Now we consult the oracle.

A New Kind of Intelligence

So what's the solution? Ban AI from classrooms? Pull it from operating rooms? Of course not. These systems are astonishing, and they contribute to our lives daily. They save time. They save lives. But they also demand a new kind of thinking: not just about what they do, but about what we stop doing when they enter the room.

What if AI tools were designed to provoke questions instead of just giving answers? What if they helped learners construct mental models instead of bypassing them? What if they were built to challenge judgment rather than replace it?

The answer isn't less AI. It's better human-machine collaboration.
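To make this concrete, here is a minimal sketch of what a "judgment-first" interface could look like: the system withholds its suggestion until the user has committed to an answer of their own, so the AI challenges judgment instead of pre-empting it. The class and method names are invented for this illustration, not an existing product or API.

```python
# Illustrative sketch of a "judgment-first" assistant: the AI's
# suggestion is only revealed after the user commits to their own
# answer, so disagreements become teaching moments rather than
# silently overridden judgments. All names here are hypothetical.

class JudgmentFirstAssistant:
    def __init__(self, model):
        self._model = model          # any callable: case -> suggestion
        self._committed = {}         # case_id -> user's own answer

    def commit(self, case_id, user_answer):
        """Record the user's independent answer before any AI input."""
        self._committed[case_id] = user_answer

    def suggest(self, case_id, case):
        """Reveal the AI's suggestion only after the user has committed.

        Returns (suggestion, agrees_with_user).
        """
        if case_id not in self._committed:
            raise RuntimeError("Commit your own answer first.")
        suggestion = self._model(case)
        return suggestion, suggestion == self._committed[case_id]
```

In this design, a disagreement between the user's committed answer and the AI's suggestion is surfaced explicitly, which is exactly the moment where reflection, and learning, can happen.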

The Real Test

A good test of expertise isn't how someone performs when everything's working. It's how they respond when it's not. In the future, our best performers won't be those who've memorized the most AI outputs or mastered the best AI tools, but those who still know how to think when the lights go out.

Because someday—maybe in an ER, maybe in a war room, maybe in a courtroom—something will happen that no AI has seen before. And someone will need to make a decision.

Let's hope they remember how.

Conclusion

If you're designing AI systems, rethink the interface. If you're training professionals, test what happens when the AI is turned off. And if you're leading teams, start asking not just how fast people are getting things done, but whether they're still doing the thinking themselves.
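One way to run that "AI turned off" test is a simple audit: have each professional work the same case set twice, once with assistance and once without, and flag anyone whose unassisted accuracy falls far below their assisted accuracy. The sketch below is an illustration of that idea; the function names, threshold, and toy data are assumptions, not a standard protocol.

```python
# Minimal sketch of an "AI-off" audit: compare each practitioner's
# accuracy with and without AI assistance on the same case set.
# The threshold and all names below are illustrative assumptions.

def accuracy(answers, truth):
    """Fraction of answers matching the ground truth."""
    return sum(a == t for a, t in zip(answers, truth)) / len(truth)

def audit_skill_decay(assisted, unassisted, truth, max_gap=0.15):
    """Flag practitioners whose unassisted accuracy falls more than
    `max_gap` below their AI-assisted accuracy."""
    flagged = {}
    for person in assisted:
        gap = (accuracy(assisted[person], truth)
               - accuracy(unassisted[person], truth))
        if gap > max_gap:
            flagged[person] = round(gap, 2)
    return flagged

# Toy example: 10 diagnostic cases, two clinicians.
truth = ["benign", "malignant"] * 5
assisted = {
    "dr_a": truth[:],                    # 10/10 with AI help
    "dr_b": truth[:],                    # 10/10 with AI help
}
unassisted = {
    "dr_a": truth[:9] + ["benign"],      # 9/10 alone: skills intact
    "dr_b": truth[:6] + ["benign"] * 4,  # 8/10 alone: worth a closer look
}
```

Here `audit_skill_decay(assisted, unassisted, truth)` would flag only `dr_b`, whose performance depends heavily on the tool. The point isn't the arithmetic; it's that the gap between assisted and unassisted performance is measurable, and organizations could measure it routinely.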

Because the smartest tool in the room should still be the human.


Prof. Llewellyn E. van Zyl (Ph.D)

Chief Solutions Architect

Psynalytics

Prof. Llewellyn E. van Zyl (Ph.D) is a multi-award-winning psychologist and data scientist, and one of the leading voices on building psychologically safe and ethically governed artificial intelligence systems.
