We track every new model release. We benchmark, compare, implement. An entire industry is built around measuring what AI can do. The question I have followed is a different one: what happens to our own capacity in the process?

I have worked with behavioral data in digital environments for nearly two decades. Since language models entered the market, I have tracked how this new layer affects what we already knew about algorithmic flows, concentration, and interpretive ability. What I write about cognitive integrity and synthetic safety builds on that foundation.

Research is now beginning to confirm what we suspected. The findings rarely receive the attention they deserve, precisely because the aggregate effect on human capacity will take time to become visible in the data we measure at a societal level. That is the kind of change that requires us to look actively, before it becomes obvious.
We talk a lot about what AI can do. We talk very little about what AI does to us.
During 2025 and early 2026, a number of reports have been published about what happens to human capacity as AI becomes integrated into everyday life. The findings are subtle, and for that reason harder to take seriously. But they all point in the same direction.
The Diagnosis: We Are Outsourcing More Than We Realize
The NBER working paper How People Use ChatGPT (2025) tracked 700 million users. The most striking finding was not the scale itself but the shift in usage patterns: within a single year, non-work-related conversations rose from 53 to 73 percent of all interactions. People use LLMs to think, reason, process, and make decisions in their private lives.
That is a qualitative change, more than a quantitative one.
Gerlich (2025) studied 666 participants in the United Kingdom and measured the relationship between AI use and critical thinking. The correlation between frequent AI use and cognitive offloading was r = 0.72. The correlation between cognitive offloading and critical thinking was r = -0.75. The relationship was particularly pronounced among younger participants (ages 17–25), who showed both higher AI dependency and lower capacity for critical analysis than older participants. The only protective factor identified was educational level, and more specifically, training in evaluating information rather than merely consuming it.
What determines the outcome is whether you actively practice the ability to reason.
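To make the reported coefficients concrete, here is a minimal sketch of what correlations of that magnitude look like when computed. The data below are synthetic, generated to mirror the figures Gerlich reports; this is an illustration of effect size, not the study's data or method.

```python
# Illustrative sketch only: synthetic data built to mirror the reported
# correlations (r = 0.72 and r = -0.75), not Gerlich's survey responses.
import numpy as np

rng = np.random.default_rng(0)
n = 666  # same sample size as the study; the values themselves are invented

# Hypothetical standardized scores. Each variable is constructed so that its
# population correlation with the previous one matches the reported figure.
ai_use = rng.normal(size=n)
offloading = 0.72 * ai_use + np.sqrt(1 - 0.72**2) * rng.normal(size=n)
critical = -0.75 * offloading + np.sqrt(1 - 0.75**2) * rng.normal(size=n)

print(np.corrcoef(ai_use, offloading)[0, 1])    # lands near  0.72
print(np.corrcoef(offloading, critical)[0, 1])  # lands near -0.75
```

For orientation: r = 0.72 corresponds to r² ≈ 0.52, meaning AI-use scores account for roughly half the variance in offloading scores, which is unusually strong for survey data.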
The Consequence: Deskilling Is Already Underway
The term deskilling, the gradual weakening of skills through lack of practice, was traditionally applied to manual labor. What is now being documented is the same erosion, this time in cognitive abilities.
A study published in The Lancet Gastroenterology & Hepatology (Budzyń et al., 2025) followed experienced physicians using AI-assisted adenoma detection in colonoscopies. Specialists with over 2,000 colonoscopies each achieved a detection rate of 28.4 percent before AI was introduced. After becoming accustomed to AI assistance, their detection rate dropped to 22.4 percent when working without the system. These were experienced clinicians. The ability had eroded during a period of support.
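For scale, a quick back-of-the-envelope reading of those two figures: the drop is six percentage points in absolute terms, and (28.4 − 22.4) / 28.4 ≈ 0.21 against the pre-AI baseline. Roughly a fifth of the measured detection capacity, not a marginal adjustment.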
The mechanism is straightforward. The brain is evolutionarily designed to conserve energy. When a system offers answers, the incentive to reason them through independently is reduced. Each time we skip that step, we reduce the training of that capacity. It is the fundamental logic of neuroplasticity playing out in slow motion: use it or lose it.
Deskilling manifests on at least three levels:
Linguistic. When AI generates text, we gradually lose the ability to search for the right word, weigh formulations, and hold our own voice. The statistically probable replaces the personally precise.
Analytical. AI summaries tempt us to skip the steps of reading full texts, holding perspectives in mind, and synthesizing conclusions. The training of working memory and the capacity for synthesis both atrophy.
Logical. Just as GPS weakened spatial navigation and the calculator weakened mental arithmetic, AI reasoning risks weakening our ability to build our own chains of argument. What was competence becomes dependence.
The Systemic Level: Who Is Talking About This?
In 2024, bots generated more internet traffic than humans for the first time in over a decade. Europol has projected that 90 percent of internet content may be synthetically generated by 2026. We are moving toward an information environment where the majority of what we encounter online was produced by a system.
Synthetic origin is not in itself the harm. What determines the consequence is what we do with it.
The problem arises when we stop distinguishing, when we lack the capacity to evaluate origin, interpret context, and hold perspective alive under the pressure of information flow. That capacity, cognitive integrity, is precisely what risks thinning out without conscious maintenance.
Andrew Yang describes 2026 as "the Fuckening" for white-collar work, a massive structural shift whose consequences are still rolling out beneath the surface of the statistics. His framing is hyperbolic, but the core observation holds: the infrastructure of office and knowledge work is being rewritten, and it remains unclear what will replace the cognitive training loops that infrastructure unintentionally created.
What Is Required of Us: Deliberately Built Architecture
Cognitive integrity is an argument for understanding what AI tools do, and for building structures that compensate for what they take.
That requires action at three levels:
The individual needs deliberate practice in the capacities being outsourced most rapidly: critical reading, independent writing, reasoning without support, tolerance for open questions. Maintenance of this cognitive infrastructure should be as self-evident as physical training.
The organization needs to distinguish between AI as a productivity tool and AI as a basis for decisions. Synthetic safety, the feeling of security provided by an AI system without the foundation for that security having been examined, is an operational risk. Decisions made on probabilistic logic are more predictable and less innovative than decisions grounded in genuine reasoning.
Society needs competence infrastructure that measures and maintains cognitive capacity, alongside productivity metrics. What we achieve with AI support is a different question from what we carry as ability. That is the distinction we still lack the language and measurement tools for.
The reports show that we are adapting to AI in ways we have not yet reflected on sufficiently. And that this adaptation carries a cost that is difficult to see in real time, precisely because it happens gradually, quietly, and without a clear breaking point.
That is exactly the kind of change that requires us to look actively.
Sources
The empirical claims in this article draw on the following reports and studies:
Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. A survey study with 666 participants in the United Kingdom measuring the relationship between frequent AI use, cognitive offloading, and critical thinking. The study identifies educational level as the only protective factor against deteriorating analytical capacity.
Budzyń, K. et al. (2025). Endoscopist Deskilling Risk After Exposure to Artificial Intelligence in Colonoscopy: A Multicentre, Observational Study. The Lancet Gastroenterology & Hepatology. A clinical study following experienced colonoscopy specialists before and after the introduction of AI-assisted adenoma detection. It shows that detection capacity deteriorated in experienced physicians after habituation to AI support, providing empirical evidence of deskilling in professional medical practice.
Chatterji, A. et al. (2025). How People Use ChatGPT. NBER Working Paper 34255. A large-scale mapping of 700 million users documenting the qualitative shift from work-related to private use of LLMs during 2024–2025.
Europol (2023). Tech Watch Flash: ChatGPT – The Impact of Large Language Models on Law Enforcement. Europol's report on AI-generated content and its consequences for the information environment, including the projection that 90 percent of internet content may be synthetically generated by 2026.
Further Reading
This is a topic I have followed continuously since language models entered the market, drawing on nearly two decades of work with behavioral data in digital environments. The articles below deepen the themes introduced here. Each description tells you what you will actually find if you follow the link.
When our thinking moves into language models: An analysis of the NBER report and what the shift toward private use means for our cognitive autonomy. Read this if you want to understand what it means that three quarters of all LLM conversations now happen outside of work, and why that is a cognitive question as much as a technical one.
Language models as external memory: what happens to our internal capacity?: A review of the research on cognitive offloading, from Sparrow et al.'s Google effect to MIT Media Lab's EEG data on brain activity during LLM-assisted writing. Read this if you want the empirical basis for what happens neurologically when we let systems remember and reason for us.
When the brain is shaped by the system, and what changes faster than we think: An article on neuroplasticity and how system design, algorithmic rhythm, and LLM interaction shape brain structures in ways we rarely notice in the moment. Read this if you want to understand the mechanism behind deskilling at a biological level.
When support becomes steering: cognitive risk management, deskilling and automation bias: An examination of how AI assistance shifts from aid to direction, and what automation bias does to professional judgment over time. Read this if you want to understand the organizational and decision-making consequences of the deskilling dynamic described in this article.
Cognitive integrity: a systemic requirement in the information age: An introduction to cognitive integrity as a framework for understanding and counteracting capacity erosion in AI-dense environments. Read this if you want the conceptual language for what this article points toward, and why cognitive capacity needs to be treated as a systemic question rather than an individual responsibility.