Cognitive Integrity
The missing layer of developer ethics
A developer builds an AI assistant for financial compliance.
At first, the system works smoothly. It cites regulations, answers questions, and feels reliable.
Read more
Who owns the truth in Large Language Models?
Large language models are rapidly becoming the interfaces through which knowledge is accessed, shaped, and distributed.
If only a handful of companies own these models, they also hold the power to define what appears as truth in our digital discourse.
Read more
Synthetic safety and AI
When confirmation replaces inquiry
Generative AI gives us answers quickly, politely, and often with a high degree of familiarity. But what happens when our brain interprets this as confirmation, and when confirmation begins to replace inquiry? As more people turn to AI instead of human dialogue, the conditions for how we train empathy, develop judgment, and build knowledge are changing. The sense of safety that emerges is not necessarily wrong, but it is synthetic.
Read more
When AI writes the world
The risks of next‑generation models trained on their own mirror image
Generative AI has reached a point where large language models not only consume information, but produce a large and growing share of the new text on the internet. Yet these same systems depend on internet text as training data to develop their language capabilities. This creates a structural tension:
Read more