AI Philosophy

How we think about artificial intelligence, why we build it the way we do, and what we believe responsible AI in learning should look like.

AI is reshaping how we learn, work, and think. At Erigo, we see this transformation as an opportunity to build systems that genuinely support human development. We also see risks that deserve serious attention.

Our philosophy starts with a simple premise: AI should strengthen human capability, never replace it. Every feature we build, every algorithm we deploy, is evaluated against this principle.

Cognitive integrity matters

We believe in protecting the human capacity for independent thought, critical reasoning, and authentic learning. AI should amplify these abilities, not erode them.

Transparency first
When AI is involved, we say so. Users always know when they're interacting with automated systems and what data influences the output.
Human in the loop
AI assists, humans decide. Critical assessments, credentials, and learning paths always involve human judgment. Automation handles the routine so people can focus on what matters.
No manipulation
We reject dark patterns, addictive mechanics, and attention-harvesting techniques. Our AI is designed to serve the learner's goals, not to maximize engagement metrics.
Data minimalism
We collect only what's necessary. Personal data stays in the EU, encrypted at rest and in transit. We never sell user data or use it for purposes beyond the platform.
Explainable outputs
When our AI makes recommendations or assessments, users can understand why. No black boxes where clarity matters.
Continuous review
AI systems drift. We monitor ours for bias, degraded accuracy, and unintended effects. When something isn't working as intended, we fix it or remove it.

How we think about AI risks

Three concepts guide our assessment of AI's impact on learning and cognition.

Cognitive integrity
The preservation of human capacity for independent thought, critical analysis, and authentic decision-making in the presence of AI systems.
Dopamine logic
Design patterns that exploit neurological reward systems for engagement rather than genuine value. We actively avoid these in our products.
Synthetic safety
The assurance that AI-generated content and interactions are clearly identified and bounded by ethical constraints.

What we refuse to build

Some applications of AI have no place in learning environments. We draw clear lines.

Surveillance-based assessment
We don't use AI to monitor keystrokes, eye movements, or behavior patterns to infer engagement or cheating. Trust and privacy come first.
Emotion recognition
Inferring emotional states from facial expressions or voice is unreliable and invasive. We don't do it.
Predictive scoring of potential
AI should assess demonstrated competency, not predict future capability based on demographics or behavior.
Engagement maximization
Our success metric is learning outcomes, not time on platform. We won't optimize for addiction.

Meet ELSA

Our AI learning assistant, built to embody these principles.

ELSA helps learners navigate content, answers questions, and provides feedback on assignments.

Clearly identified
ELSA always introduces herself as an AI assistant.
Explains reasoning
She provides explanations for her suggestions and admits when she's uncertain.
Encourages independence
ELSA prompts learners to think critically and explore topics on their own.
Hands off to humans
When complex judgment is needed, ELSA refers learners to human instructors.
No data retention
ELSA does not retain personal data beyond the session.

Our commitment

We comply with EU AI Act requirements and go further. These aren't just regulatory checkboxes. They reflect what we believe responsible AI development looks like.

EU AI Act compliant
We adhere to all relevant regulations, ensuring our AI systems meet stringent safety and transparency standards.
Transparent AI labeling
Users are always told when they're interacting with AI and given clear information about its capabilities and limitations.
Regular bias audits
We conduct ongoing assessments to identify and mitigate biases in our AI systems, ensuring fair and equitable treatment for all users.
Data stays in EU
All user data is stored and processed within the European Union, complying with GDPR and prioritizing user privacy.

Declaration of Conformity

We have issued a voluntary Declaration of Conformity for our AI modules in accordance with the EU Artificial Intelligence Act.

The declaration describes our approach to technical documentation, risk classification, and transparency.

Complete technical documentation is available on request for public authorities, supervisory bodies, and customers.
