Our brains are currently being reformatted. I have previously written about cognitive integrity as a systemic requirement in the information age, and about synthetic safety and AI, where confirmation replaces inquiry. This article continues that series and explores how dopamine-driven digital tools, language models, and the uneven distribution of metacognitive ability interact to reshape our capacity to think. The purpose is to make these processes visible so that we can work toward sustainable brains in a time of rapid systemic shifts.
The pressure of dopamine logic
Digital tools and social media reward rapid responses. Notifications, likes, and constantly updated feeds create a dopamine logic that reshapes our attention patterns. Research shows that high screen time and multitasking can impair working memory, reduce attentional control, and lower cognitive endurance. When the brain is repeatedly trained through fast reward loops, deep and deliberate thought becomes harder to sustain, not just in individuals, but in how entire teams and societies process information.
Language models as a new cognitive layer
On top of this environment, language models add a new layer. LLMs provide answers faster than we can fully articulate our own questions. An MIT study (2025) using EEG data found that brain activity decreases, particularly in frontal regions tied to executive function, when users rely on AI to assist with writing tasks. This reduction correlated with lower memory retention and decreased originality. In practice, this is cognitive offloading: outsourcing not only information, but the durability of reasoning itself. The effect, however, varies depending on the nature of the task and how the user engages with the system.
Synthetic safety and empathy
More and more people choose AI as a conversational partner. This can be described as synthetic safety – a sense of emotional affirmation that arises when algorithms mirror and validate our thinking. Research in human-computer interaction shows that people often attribute empathy and intention to rule-based systems (Nass & Moon, 2000; Bickmore & Picard, 2005). When safety in system dialogue outweighs the safety we feel in human relationships, our empathy may begin to shift. Relationships with other people, full of nuance, friction, and unpredictability, may seem less attractive than the clean, instant responses of a model.
Metacognition as a bottleneck
Metacognitive ability, the capacity to observe, regulate, and adjust one’s own thinking, remains underdeveloped in many people. Only a small fraction consistently applies it at an advanced level. As dopamine logic and language models simultaneously accelerate and compress cognition, metacognition becomes a critical bottleneck. The question is how we can strengthen this ability so that slow and complex thinking can still thrive. This includes building feedback loops, resisting frictionless automation, and designing environments that invite reflection and inquiry.
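To make the design idea concrete, here is a minimal sketch of one possible "reflection-first" interaction pattern: the user is asked to write down their own reasoning before a model's answer is revealed. The wrapper and the abstracted model call are hypothetical illustrations, not a reference to any particular tool or API.

```python
# A minimal sketch of a "reflection-first" wrapper around a language model.
# The model call is abstracted as a plain function so the example stays
# independent of any particular API; ask_model below is a hypothetical stand-in.

from typing import Callable


def reflection_first(question: str, ask_model: Callable[[str], str]) -> str:
    """Ask the user to articulate their own reasoning before showing the model's answer."""
    own_attempt = input(f"Question: {question}\nYour own reasoning first: ").strip()
    if not own_attempt:
        # Friction by design: no answer is shown until the user has engaged with the question.
        return "No answer shown - write down your own reasoning first."
    model_answer = ask_model(question)
    # Returning both keeps the user's thinking visible next to the model's output,
    # so the comparison itself becomes a metacognitive prompt.
    return f"Your reasoning:\n{own_attempt}\n\nModel's answer:\n{model_answer}"


if __name__ == "__main__":
    # Hypothetical stand-in for a real model call.
    print(reflection_first("Why does multitasking impair working memory?", lambda q: "(model output here)"))
```

The point of this friction is not to slow people down for its own sake, but to keep the user's own reasoning in the loop before the model's answer anchors it.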
A reformatting in progress
This reformatting of our thinking is already underway. It is not a future risk but a present transformation. We must learn to detect when our thinking is being shortened, create space for slow reasoning, and design systems that foster metacognitive endurance. Language models can serve as sparring partners, but not substitutes, for the reasoning process itself. The challenge ahead is to preserve our capacity for complexity in a time when speed and confirmation are rewarded, and to defend what cannot be automated.
Further reading: articles on Erigo
- When support becomes steering – and what it does to our ability to see
- When the brain is shaped by the system – and what changes faster than we think
- Metacognitive reflections
Primary sources
- Rosen, L.D., Carrier, L.M., & Cheever, N.A. (2013). Facebook and texting made me do it: Media-induced task-switching while studying. Computers in Human Behavior, 29(3), 948–958.
- Uncapher, M.R., & Wagner, A.D. (2018). Media multitasking and memory: Differences in working memory and sustained attention. Psychological Review, 125(2), 196–210.
- Draganski, B. et al. (2004). Neuroplasticity: Changes in grey matter induced by training. Nature, 427, 311–312.
- Schraw, G., & Dennison, R.S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460–475.
- Efklides, A. (2008). Metacognition: Defining its facets and levels of functioning in learning contexts. Metacognition and Learning, 3, 3–14.
- MIT (2025). AI use and reduced brain activity in writing tasks. arXiv:2506.12605.
- Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103.
- Bickmore, T.W., & Picard, R.W. (2005). Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction, 12(2), 293–327.