Generative AI gives us answers quickly, politely, and often with a high degree of familiarity. But what happens when our brain interprets this as confirmation, and when confirmation begins to replace inquiry? As more people turn to AI instead of human dialogue, the conditions for how we train empathy, develop judgment, and build knowledge are changing. The sense of safety that emerges is not necessarily wrong, but it is synthetic.
On the concept of Synthetic Safety
What is meant by synthetic safety?
I use the term synthetic safety to describe the feeling of cognitive stability that arises when we receive recognition or confirmation from AI-generated language, without our thinking having been put to any actual test.
The safety feels real because it activates the brain’s reward system, but it is not the result of processing, dialogue, or critical feedback.
It is a form of safety shaped in relation to probability, politeness, and familiarity, rather than truth, resistance, and human interpretation.
(The term is introduced here as an analytical framework for understanding a new type of cognitive experience in AI environments.)
The human brain is biologically wired to seek confirmation. The dopamine system is activated by recognition, appreciation, and responses that reinforce one’s own perspective. It is not the result itself that triggers the system, but the signal that a reward is near.
“Dopamine does not signal pleasure directly. Instead, it amplifies the motivational significance of cues that predict reward.”
(Berridge & Robinson, 1998)
This mechanism explains why a linguistic confirmation from an AI model can feel satisfying, even when no human is involved. When language models express themselves in a way that signals agreement, the reward system is triggered and the feeling of being right is reinforced, regardless of whether the content is accurate.
This creates a synthetic safety. It does not arise from knowledge or inquiry, but from a signal of recognition that the system interprets as a reward. Over time, this affects our cognitive orientation: we begin to seek answers that feel right, rather than those that require effort.
Empathy as a skill is trained in human interaction
Empathy develops in real-time situations where we must read uncertainty, nuance, and social feedback. Human interaction offers variation, resistance, and negotiation, creating a dynamic training space for both emotional and cognitive empathy.
As more people choose to interact with AI models in situations that previously required human contact, the conditions for this training change. AI provides quick, clear answers without emotional friction. The model demands no feedback, no emotional labor, and no shared space for interpretation.
“Empathy is not fixed. It develops through repeated engagement with complex emotional and social information.”
(Zaki & Ochsner, 2012)
Empathy erodes when we are exposed to fewer situations in which it is trained. As interaction is simplified, so is our way of perceiving others.
Friction is a prerequisite for knowledge development
Knowledge evolves when ideas are tested, rejected, and reformulated. This kind of friction is central to both scientific method and human development. In human conversations, friction is built in – through objections, requests for clarification, or a shifting context. It forces metacognition.
Language models are not trained to create friction. They are optimized to produce likely, user-adapted responses. The result is an environment where statements are rarely challenged, and where the user, instead of being confronted, often sees their own thought reflected back with new phrasing.
“We expect more from technology and less from each other.”
(Turkle, 2011)
As expectations of humans decrease and the availability of technology increases, our understanding of knowledge shifts. What we need in terms of cognitive testing is replaced by what we prefer in terms of confirmation.
When safety replaces inquiry, our relationship to thinking changes
If language models become our primary conversational partners, a shift occurs in how we build knowledge, train empathy, and evaluate reasoning. The logic of confirmation creates a feeling of being on the right track, even when resistance, complexity, or human feedback is absent.
This changes not only how we think, but also how we perceive our own thinking ability. When every idea we put in generates a well-phrased response, we begin to believe that our contribution is complete at the moment of formulation. We stop revising, questioning, and building.
A society that wants to strengthen cognitive resilience needs structures that not only provide answers but also help us reshape our questions.
"If synthetic safety becomes a new constant, we must simultaneously protect what trains our empathetic ability. Responsibility is about maintaining both."
(Katri Lindgren, 2025)
Related perspective
For an analysis of how language models affect their own training and what that means for knowledge production:
When AI writes the world: The risks of the next generation of models being trained on their own mirror image
References
1. Berridge, K. C., & Robinson, T. E. (1998).
What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience?
Explains how dopamine is activated by reward expectation, which supports the understanding of why confirmation from AI affects our cognition.
2. Zaki, J., & Ochsner, K. N. (2012).
The neuroscience of empathy: progress, pitfalls and promise.
A review of how empathy develops and why interaction is central to the process.
3. Turkle, S. (2011).
Alone Together: Why We Expect More from Technology and Less from Each Other.
An analysis of how technology replaces human relationships and what psychological effects this has.
The image is AI-generated, based on a photograph of myself. It visualizes the threshold between human and system explored in the text.