When we let machines think for us, we risk forgetting what it feels like to think.
It's one thing to get help. Another to hand over the processing itself. Right now, we're moving rapidly toward a state where mental friction is being smoothed away. Where answers arrive before we've had time to formulate the questions. Where external intelligence grows while internal intelligence stalls.
We're at a stage that requires reflection before we gallop too far ahead.
We've long talked about AI as a tool. But when the tool begins to formulate our thoughts, determine our priorities, and put words to what we haven't yet understood, we need to ask a different question: What happens to our ability to think, feel, formulate, remember?
Research already shows traces in the brain
A recent MIT study, "Your Brain on ChatGPT," shows that heavy use of AI tools like ChatGPT reduces neural activity, not only compared to thinking unaided but also compared to using search engines.
Participants who wrote with the help of ChatGPT exhibited:
- Fewer connections between brain regions (measured via EEG)
- Poorer memory recall
- Lower creative variation in text
- Weaker prefrontal and visuospatial activation
The group that used AI without reflecting on their own didn't just become more passive; their language became more cryptic, more uniform, and more dependent on templates.
The researchers' conclusion: cognitive activity is offloaded so heavily that the brain stops engaging its own resources. This is where efficiency can begin to shade into dulling.
Cognitive offloading or gradual dismantling?
Relying on AI means cognitive offloading, but somewhere along the line it risks becoming cognitive loss. Synaptic pruning teaches us that what we don't use withers. Dopamine-driven logic teaches us that the fast option wins, regardless of its value.
When we get used to receiving ready-made answers without our own friction, our attention, our endurance, and our cognitive map change.
When we don't practice interpretation, our interpretive capacity begins to weaken. It's in interpretation that consciousness deepens.
The invisible normalization
There's a notion that AI is neutral. But as the article "Bias in AI: Weightings That Shape Our Worldview" shows, AI's output is shaped by how the model was trained, which data were weighted most heavily, and which signals were prioritized.
It's not just about which answer we get, but which structure of understanding becomes normalized through repeated responses.
What we see as efficient output is often a product of:
- Selection of training data
- Weighting of probabilities
- Optimization for answers, not insight
When we consume this without reflection, we don't just lose the interpretive muscle; we also retrain what feels natural, true, and legitimate. AI's output becomes part of our conceptual structure. And if we don't know how it's shaped, we begin thinking with others' priorities.
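To make that mechanism concrete, here is a minimal Python sketch of how probability weighting shapes which answer surfaces. The candidate words and scores are invented for illustration only; they stand in for the logits a trained model assigns after its data selection and weighting.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations and the raw scores a trained
# model might assign them; the words and numbers are illustrative.
candidates = ["efficient", "neutral", "contested", "uncertain"]
logits = [3.2, 2.8, 0.5, 0.1]  # shaped by training data and weighting

probs = softmax(logits)
for word, p in zip(candidates, probs):
    print(f"{word:>10}: {p:.1%}")

# Repeated sampling overwhelmingly surfaces the highest-weighted
# options, which is how a weighting becomes a "normalized" answer.
print(random.choices(candidates, weights=probs, k=10))
```

Run repeatedly, the sample almost never surfaces the low-weighted options. At scale, that is how a weighting quietly becomes a norm: not by forbidding other answers, but by making them statistically invisible.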
We need to secure the brain's movement
This isn't a reckoning with technology. It's an attempt to secure our cognitive presence in a time when the mechanical is offered as the human.
But we do need to talk about the structural effects:
- How AI affects our mental workflows
- How dopamine and shortcuts change how we learn
- How our cognitive integrity is threatened when our internal connections are dimmed
A crossroads for thought and technology
We find ourselves at a point in technology's expansion where society can be explored and challenged at exponential speed, but where we also risk capping our cognition's capacity. It places us at a crossroads: between letting the structure be shaped for us, or actively working to ensure our cognitive foundation retains its strength.
It's entirely possible to weave together human reflection and mechanical support. But it requires that we stay active in our own movement, and that we don't let technology carry us away from the ability to think, understand, and reshape.
What is trained is shaped. And right now, our brains are being shaped by what we choose not to do ourselves.
Further Reading:
- MIT: Your Brain on ChatGPT
- Erigo: Bias in AI – Weightings That Shape Our Worldview
- Erigo: Cognitive Integrity – A System Condition in the Information Age
- Erigo: Systems for Engagement Require Structures for Interpretation
- Erigo: Cognitive Integrity and the Silent Reshaping of Our Thinking
Questions that arise when AI does the work for us
We find ourselves in a time when AI often does the mental work for us: formulating, prioritizing, summarizing. But what does that do to our brain's activity, our endurance, and our interpretive capacity? Here we gather some of the most common questions about AI and cognition, supported by current research and reflection.
What is cognitive offloading and why does it matter?
Cognitive offloading means relocating thought processes to external systems, such as AI. It can be efficient, but if it happens too often, we risk losing abilities like interpretation, analysis, and critical thinking: abilities that must be practiced to be preserved.
What does it mean that AI affects our "internal structure"?
AI-generated output is built on weightings and interpretations we don't always see. When we use AI without making these structures visible, our concept formation is affected, as is how we perceive what is reasonable, true, or neutral.
How are dopamine and AI use connected?
Fast, satisfying answers from AI trigger dopamine release. This can lead us to prefer shortcuts over deeper thinking, which over time reduces our cognitive endurance, especially if we avoid friction in learning.
Can AI and human thinking coexist in a healthy way?
Yes, but it requires conscious use. AI should be seen as support, not a replacement. We need to actively participate in our own cognitive movement to preserve interpretation, reflection, and understanding.
What does research show about how AI affects our brain?
Studies from MIT show that heavy use of AI tools like ChatGPT can reduce neural activity, especially in brain regions linked to memory, creativity, and language. Users who didn't reflect on their own exhibited fewer connections between brain regions and poorer results in writing tasks.