Cognitive Impairment in the AI Era

The concept of cognitive impairment has traditionally described medical conditions such as dementia and mild cognitive impairment. Recent research points to a different form of cognitive decline that does not stem from neurological disease but from our dependence on digital tools and AI systems.

Unlike medical dementia, this decline does not arise from cell loss in the brain but from use-dependent biological processes such as synaptic pruning and dendritic atrophy. This phenomenon, called deskilling, challenges our cognitive integrity in ways we are only beginning to understand.

What is cognitive impairment?

Cognitive impairment is an umbrella term describing deteriorations in the brain's ability to function effectively. According to the National Institute on Aging, this encompasses difficulties with memory, attention, processing speed, and executive functions such as planning and decision-making.

In clinical practice, cognitive impairment exists on a spectrum. Mild cognitive impairment (MCI) involves a noticeable but non-disabling decline in cognitive abilities. People with MCI can still manage their daily activities but experience more memory difficulties and concentration problems than expected for their age. In more severe cases, cognitive impairment can develop into dementia, where the loss of cognitive function affects independence in daily life.

Traditionally, research has focused on neurological, metabolic, and medical causes of cognitive impairment: brain injuries, Alzheimer's disease, stroke, nutritional deficiencies, or hormonal imbalances. Over the past decade, a new branch of research has emerged examining how our technological environment can induce cognitive deteriorations of a different kind.

The biological foundation: synaptic pruning and dendritic atrophy

Unlike medical dementia, which often involves cell loss in the brain, technology-induced cognitive impairment concerns something else: underutilization of cognitive abilities that gradually atrophy. This is more than metaphorical. Real biological changes occur.

Research in neuroscience shows how the brain's structure adapts according to our usage. Two processes are particularly relevant:

Synaptic pruning: Synapses that are not used regularly weaken and disappear, while those activated frequently are strengthened. This is a natural process that normally optimizes brain efficiency, but in monotonous digital environments it can lead to the pruning away of capacity for deep thinking.

Dendritic atrophy: The nerve cells' branches, the dendrites, decrease when they are not stimulated. This affects our ability to process and connect information. Fewer dendritic branches mean fewer possible connections between ideas and concepts.

Studies show that the brain can restructure itself within weeks of new stimulation. Scholz et al. (2009) documented changes in white matter after six weeks of training a complex visuomotor skill (juggling) in adults. Similar research shows that both working memory and motor training produce measurable structural effects within four to eight weeks. The logic works both ways: if training can strengthen neural networks in weeks, absence of cognitive stimulation can weaken them on the same timescale.

Synaptic pruning occurs naturally on a large scale during childhood and adolescence as part of brain development. In adults, monotonous digital stimulation risks reducing the brain's flexibility and capacity for deep processing over time. This is not damage in the traditional sense. It is an adaptation to an environment that no longer requires certain abilities.

Digital dementia and technology-induced cognitive decline

The term "digital dementia" was coined by German neuroscientist Manfred Spitzer in 2012 to describe the cognitive decline that can arise from excessive use of digital technology. The term is controversial, but it points to a real phenomenon: when we outsource cognitive functions to external systems, we reduce the internal training of those functions.

Research shows that excessive technology use can change the brain's structure and function. Studies have linked frequent smartphone use to shorter attention spans and impaired memory consolidation. GPS use has been associated with impaired spatial memory through reduced activation of the hippocampus.

The research is mixed, however. A large meta-analysis published in Nature Human Behaviour (Benge & Scullin, 2025), encompassing over 400,000 adults, found that digital technology use was associated with a 58 percent lower risk of cognitive impairment in older adults. The researchers suggest that digital tools can function as "cognitive scaffolding" that maintains functional independence even when underlying cognitive capacity deteriorates.

This paradox suggests that the relationship between technology and cognition is complex. Technology can both support and undermine cognitive function depending on how it is used. The key question is not whether we use technology, but whether technology replaces or enhances our own cognitive processes.

Cognitive offloading and critical thinking

A central mechanism in the discussion of technology's cognitive effects is "cognitive offloading." The concept describes how we delegate cognitive tasks to external tools to reduce the load on working memory. When we save phone numbers on our phones instead of memorizing them, or use GPS instead of orienting ourselves, we offload cognitively.

In itself, cognitive offloading is not problematic. Humans have always done this: writing, notes, and libraries are all forms of external cognitive storage. AI tools, however, differ qualitatively from previous technologies. They take over more than memory; they also take on analysis, synthesis, and reasoning.

A study by Gerlich (2025) with 666 participants in the United Kingdom found a strong negative correlation between frequent AI use and critical thinking. The correlation between cognitive offloading and AI use was r = 0.72, while the correlation between cognitive offloading and critical thinking was r = -0.75. Younger participants (17-25 years) showed higher dependence on AI tools and lower scores on critical thinking compared to older participants.
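To make an effect size like r = -0.75 concrete, the sketch below generates synthetic data with roughly that strength of association and computes Pearson's r. The rating scales, noise level, and values are invented for illustration; only the sample size and the approximate correlation are taken from the figures quoted above, so this is not the study's data or analysis.

```python
# Illustrative only: synthetic data showing what a correlation of about
# r = -0.75 between cognitive offloading and critical thinking looks like.
# Scales and noise level are assumptions, not values from Gerlich (2025).
import numpy as np

rng = np.random.default_rng(0)
n = 666  # matches the study's sample size; everything else is invented

offloading = rng.uniform(1, 7, n)  # hypothetical offloading score on a 1-7 scale
critical_thinking = 8 - offloading + rng.normal(0, 1.5, n)  # falls as offloading rises

r = np.corrcoef(offloading, critical_thinking)[0, 1]
print(f"Pearson r = {r:.2f}")  # a strong negative correlation, around -0.75
```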

The study also showed that education level functioned as a protective factor. Participants with higher education were more likely to critically examine AI-generated content rather than accept it uncritically. This indicates that conscious training in critical thinking can counteract the negative effects of AI dependence.

Deskilling: when help becomes a weakness

Deskilling is a concept originally used in occupational research to describe how technological changes can make workers' skills obsolete. In the cognitive context, deskilling describes the gradual loss of skills through lack of practice. It is not a dramatic loss but a slow erosion of abilities we no longer exercise.

A telling example comes from a study published in The Lancet Gastroenterology & Hepatology (Budzyń et al., 2025) on colonoscopy. Endoscopists who regularly used AI support for adenoma detection had achieved a detection rate of 28.4 percent before AI was introduced. After becoming accustomed to AI assistance, their detection rate fell to 22.4 percent when working without AI support. These were experienced specialists with over 2,000 colonoscopies each, yet their unassisted performance had measurably declined.
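To put that drop in perspective, here is a minimal calculation using only the two rates quoted above; it adds nothing beyond basic arithmetic but makes the relative size of the decline explicit.

```python
# Relative size of the decline reported in the colonoscopy study.
before, after = 0.284, 0.224          # detection rate before AI, and without AI after habituation
absolute_drop = before - after        # 0.06 -> six percentage points
relative_drop = absolute_drop / before
print(f"{absolute_drop:.1%} absolute, {relative_drop:.0%} relative decline")  # 6.0% absolute, 21% relative
```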

Every time you let a system make a decision for you without reflecting, a cognitive outsourcing occurs. Over time, this changes not just how you think, but what you think is possible. Deskilling manifests across several domains:

Linguistic deskilling

When AI systems generate text for us, we gradually lose the ability to search for the right word, weigh formulations, and structure thoughts. Lexical precision weakens, stylistic awareness thins out, and the statistically probable replaces our own voice. We train ourselves out of writing with it.

Analytical deskilling

AI summaries let us skip the steps of reading entire texts, holding perspectives in mind, comparing nuances, and synthesizing conclusions. We get the overview faster, but we train away the ability to hold complexity in mind. Working memory, the handling of multiple perspectives, and the capacity for synthesis all atrophy.

Logical deskilling

Just as the calculator's arrival weakened mental arithmetic and GPS weakened spatial navigation, AI risks taking over our chains of reasoning. Hypothesis testing becomes rare, and the ability to test ideas independently withers. What was competence becomes dependence.

Three mechanisms behind the shift

Why does deskilling happen so quickly when AI tools are introduced? Research points to three central mechanisms:

Convenience: the brain's energy-saving mode

The brain is evolutionarily wired to conserve energy. It makes up about two percent of body weight but consumes around twenty percent of the body's energy. When AI offers immediate answers and well-formulated reasoning, it becomes harder to motivate our own effort. Just as we choose the elevator over the stairs for physical exertion, we choose the AI answer over our own reasoning for cognitive exertion. Why think when AI can reason?

Habituation: from aid to default

What begins as an aid quickly becomes the norm. The first time it is "wow, this is helpful," the tenth time "this is practical," the hundredth time "this is standard," and the thousandth time "why would I do it myself?" Generations are now growing up without ever training complex reasoning from scratch. What began as a shortcut becomes the only path.

Reliability: the comfort of always getting an answer

AI never hesitates and always delivers something. Psychological research shows that people prefer closure over ambiguity. Incorrect but confident AI answers are often experienced as more useful than "I don't know." This reinforces automation bias, the tendency to trust the system's output even when it is wrong. We get a synthetic comfort that replaces the uncertainty necessary for genuine thinking.

Cognitive integrity as resilience

Cognitive integrity is a concept describing the ability to maintain a coherent and autonomous interpretive structure despite systemic fragmentation, algorithmic influence, and dopamine-driven design. It involves retaining the capacity to process information with awareness of origin, interpretation, and emphasis.

Unlike traditional cognitive impairment, which is often irreversible, technology-induced deskilling can potentially be counteracted. This requires conscious strategies and an understanding of what actually happens biologically when we stop using certain cognitive abilities.

Four dimensions of cognitive integrity are central:

Structural dimension: The ability to hold complex reasoning, maintain contextual understanding over time, and integrate new information with existing knowledge. This requires active dendritic networks and strong synaptic connections between different brain regions.

Metacognitive dimension: Awareness of one's own thought processes and the ability to reflect on how information is processed. Being able to pause and ask: where does this conclusion come from? Have I thought myself or just accepted?

System resistance: The capacity to resist external manipulation of attention and interpretive frames. Being able to identify when a system is trying to steer your thinking rather than support it.

Temporal dimension: The ability to maintain cognitive abilities over time, despite exposure to fragmenting environments. Building structures for recovery and deep work into daily life.

Strategies for preserving cognitive capacity

If the brain can change in weeks through new stimulation, it can also change through conscious training. Research points to several strategies for counteracting technology-induced cognitive decline:

Conscious friction design

Instead of always seeking the fastest solution, we can consciously introduce cognitive friction. Write a first draft yourself before asking AI for help. Navigate without GPS sometimes. Calculate in your head before pulling out the calculator. These small resistances keep the cognitive muscles active and counteract synaptic pruning of unused abilities.

Critical examination of AI output

Studies show that education functions as a protective factor. Actively questioning AI-generated content, cross-checking with other sources, and reflecting on what assumptions underlie algorithmic recommendations strengthens critical thinking. Ask counter-questions, demand transparency, reconstruct the reasoning behind the result.

Training in deep work

Regular exposure to tasks requiring concentration and endurance counteracts the fragmentation that digital environments create. Reading longer texts, solving complex problems without aids, and engaging in deep conversations strengthens cognitive endurance. Allow ambiguity to remain unresolved, if only for a little while.

Balanced technology use

The meta-analysis in Nature Human Behaviour suggests that technology can be both protective and harmful. The key seems to lie in how we use it. Technology that supports our cognitive processes without replacing them entirely can be beneficial, while over-reliance undermines our abilities. Let the brain take part in the interplay, and give it time to think for itself.

Implications for society

The cognitive effects of AI use extend beyond the individual. In organizations where AI is used for decision-making, there is a risk that collective competence erodes. When new employees are never trained in independent analysis but directly rely on AI tools, no internal expertise is built.

At the societal level, democratic questions arise. If large groups become accustomed to receiving rather than reasoning, it becomes harder to maintain a living democratic culture. Public conversations risk becoming exchanges of ready-made slogans rather than genuine exchange of meaning. Dialogue is replaced by system-generated messages.

Innovation capacity is also affected. When ideas are built on reused patterns instead of original thinking, interdisciplinary breakthroughs become harder. Minority perspectives are marginalized because they have low frequency in training data. Innovation risks becoming incremental rather than radical.

Conclusion: a new form of cognitive challenge

Cognitive impairment has traditionally been viewed as a medical condition with neurological causes. Research increasingly shows that the technological environment can also induce cognitive deteriorations through entirely different mechanisms: synaptic pruning of unused abilities, dendritic atrophy, cognitive offloading, deskilling, and erosion of critical thinking.

The good news is that this form of cognitive impairment is likely reversible. Unlike dementia diseases, it does not involve irreversible cell loss but underutilized abilities. The same neuroplasticity that makes us vulnerable to deskilling also makes us capable of training back lost abilities. Cognitive integrity can be rebuilt through conscious strategies.

The question is not whether we should use AI tools, but how. The balance between benefiting from technology's possibilities and preserving our cognitive abilities becomes one of our era's central challenges. Understanding the mechanisms behind technology-induced deskilling is the first step toward meeting that challenge.

Your brain is not a receiver. It is a generative system. Give it time to think for itself.


References

  • Benge, J. F., & Scullin, M. K. (2025). A meta-analysis of technology use and cognitive aging. Nature Human Behaviour, 9(7), 1405-1419.
  • Budzyń, K., et al. (2025). Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study. The Lancet Gastroenterology & Hepatology.
  • Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6.
  • Scholz, J., Klein, M. C., Behrens, T. E. J., & Johansen-Berg, H. (2009). Training induces changes in white-matter architecture. Nature Neuroscience, 12(11), 1370-1371.
  • Risko, E. F., & Gilbert, S. J. (2016). Cognitive Offloading. Trends in Cognitive Sciences, 20(9), 676-688.
  • Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776-778.
  • Firth, J., et al. (2019). The "online brain": How the Internet may be changing our cognition. World Psychiatry, 18(2), 119-129.

About the author

Katri Lindgren is the founder of Erigo, a Swedish edtech company specializing in competency development. She has worked systematically with behavior in digital environments for nearly 20 years. Since the launch of large language models, she has explored how the brain is affected by AI tools and, from that work, developed the concept of cognitive integrity as a framework for preserving our internal capacity for complex thinking while we make use of these tools.