
When support becomes steering

And what it does to our ability to see

We build systems to help us think, but in the process, those systems change how we think. AI is no exception. What begins as support for analysis and decision-making can, over time, become a steering factor, not because the technology takes over, but because our own frames of reference gradually shift.

When AI shifts the frame of reference

Studies show that AI assistance can increase accuracy in the moment while reducing our unaided performance once the support is removed. In a multinational study published in The Lancet Gastroenterology & Hepatology, adenoma detection among experienced colonoscopists dropped from 28% to 22% when AI support was removed. Research from MIT points to a similar trend in scientific communication: language models tend to generalise in ways that subtly change how we interpret and prioritise information.

This is not a sudden loss of competence. The change is quiet and cumulative—a gradual adaptation where our own work adjusts to the system’s logic.

From actor to observer

Automation bias is one of the most well-documented mechanisms behind this shift. When the system suggests an answer, the tendency to question it decreases, even when the suggestion is wrong. This applies not only in medicine or science but in any work environment where AI acts as a filter, guide, or decision-making tool.

The result is deskilling over time. The case of the colonoscopists is not an exception but an early indicator. If skills degrade in a domain that demands such high precision, the same pattern will likely appear in other sectors, and as more reports and studies confirm it, the need for cognitive risk management will only grow. The earlier we take that path, the greater our chance of preserving and developing our skills rather than gradually losing them.

"When we let systems make our decisions, we train away our own ability.
The question is: what happens next?"
Katri Lindgren

Cognitive Integrity in practice

Cognitive integrity is the ability to preserve and use one’s interpretive capacity even when external systems deliver ready-made conclusions. It is about maintaining an internal structure for how information is evaluated, verified, and contextualised.

In practice, this means actively creating friction against passive confirmation logic. Sometimes reading more slowly. Deliberately choosing sources that are not pre-processed. Asking follow-up questions the system does not suggest. These are small actions that together form resistance to a development where support risks becoming steering.
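To make the idea concrete, here is a minimal, purely illustrative Python sketch of such friction, assuming a generic `ask_model` function as a stand-in for any AI assistant (the names are ours, not an existing tool): the reviewer must commit an answer before the system's suggestion is revealed, so agreement becomes an active choice rather than a default.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Review:
    question: str
    own_answer: str  # recorded before the AI suggestion is shown
    ai_answer: str
    agrees: bool

def frictioned_review(question: str, ask_model: Callable[[str], str]) -> Review:
    """Require an independent judgement before revealing the model's answer."""
    # Step 1: the reviewer commits their own answer first.
    own = input(f"{question}\nYour answer (before seeing the AI): ").strip()

    # Step 2: only then is the model's suggestion fetched and shown.
    ai = ask_model(question)
    print(f"AI suggestion: {ai}")

    # Step 3: disagreement is made visible rather than silently resolved.
    agrees = own.lower() == ai.lower()
    if not agrees:
        print("Your answer differs from the AI's; review before deciding.")
    return Review(question, own, ai, agrees)
```

The design choice is the ordering: the model's output only ever arrives after an independent judgement exists to compare it against.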

The missing perspective: the system level

The issue of deskilling and automation bias is often reduced to individual ability or responsibility. But the greatest consequences occur at the system level, when organisational competence, decision pathways, and strategies are shaped by tools whose logic is optimised for speed of delivery, not necessarily for depth of understanding, a state we might call synthetic safety.

Addressing this requires more than individual strategies. It requires designing AI integration with redundancy, manual checkpoints, and the ability for independent verification—and treating these as long-term investments, not as obstacles.
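As one hypothetical illustration of what such design could look like in code (a sketch under our own assumptions, not an existing framework), the Python example below routes a fixed share of AI outputs through an independent human checkpoint, treating the audit rate as a tunable investment rather than overhead.

```python
import random
from typing import Callable

def run_with_checkpoints(
    items: list[str],
    ai_process: Callable[[str], str],
    human_review: Callable[[str, str], str],
    audit_rate: float = 0.2,
) -> list[str]:
    """Process items with AI, routing a random share through human review.

    The audit_rate keeps redundancy in the loop: even when the AI output
    is trusted, a fixed fraction of decisions is independently verified
    so that human judgement stays exercised.
    """
    results = []
    for item in items:
        ai_output = ai_process(item)
        if random.random() < audit_rate:
            # Manual checkpoint: a person verifies (and may override) the output.
            ai_output = human_review(item, ai_output)
        results.append(ai_output)
    return results
```

The point of the pattern is that the checkpoint is structural: it fires regardless of how confident the system appears, which is precisely what keeps independent verification alive.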

The next step in the discussion

AI has accelerated our work. The question is how we can use the technology as a catalyst for human skill, not as a replacement for it. The silent shift from actor to observer is hard to detect in real time, but easier to prevent if we build systems and cultures that protect cognitive resilience from the start.

That is where the next discussion needs to begin: not with whether we can use AI, but with how we can use it without losing the ability to think for ourselves.


Further reading and sources for figures and statements

Explore more perspectives in Metacognitive Reflections, an article series by Katri Lindgren.
