
What is Truth in the LLM Era

And how to use Language Models without losing competence

Large Language Models (LLMs) have rapidly become part of the daily work of many decision-makers, analysts, and knowledge professionals. Their ability to generate coherent, contextually adapted responses makes them powerful tools. At the same time, they are reshaping the way we seek, interpret, and apply knowledge.

The question is no longer simply whether a response is correct, but how we ensure that the answers are reliable, relevant, and anchored in subject-matter expertise, and how we avoid eroding competence as the model performs more of the work.

Probability and truth are different concepts

When a language model responds to a question, it is not determining what is true. It calculates which sequence of words is most likely to follow, based on the data it has been trained on and the context of the dialogue.

The model reasons statistically; it does not verify. It produces text that resembles human answers to similar questions in its training data, without understanding or validating the factual accuracy of the content.

This means:

  • Two identical questions may produce different answers depending on the model, its version, and its settings.
  • The content of the answer depends on patterns in the training data, including bias, cultural norms, and gaps in representation.
  • The context in the dialogue can guide the model toward certain formulations or perspectives, even when these are not empirically correct.

Example: If a model has been trained on extensive data from a specific geographic region, it may highlight solutions, perspectives, or values common in that region, even if the question relates to a different context.
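To make this concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the small public "gpt2" checkpoint (chosen purely as an illustration, not as one of the models discussed in this article). It shows that the model only ranks possible next tokens by probability, and that sampling from that distribution is one reason identical prompts can produce different answers depending on settings.

```python
# Minimal sketch: a language model assigns probabilities to possible next
# tokens; it does not check facts. Assumes `pip install torch transformers`
# and the public "gpt2" checkpoint, used here only as an illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)        # convert scores to probabilities

# The "answer" is simply the most probable continuation given the training
# data, which may or may not coincide with the factually correct answer.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r:>12}  p = {p.item():.3f}")

# With sampling enabled (temperature > 0), the same prompt can produce a
# different continuation on every run, one reason why identical questions
# yield different answers depending on model settings.
out = model.generate(**inputs, do_sample=True, temperature=0.9,
                     max_new_tokens=8, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The point of the sketch is not the specific model but the mechanism: every answer is a draw from a probability distribution shaped by the training data, not the result of a fact-checking step.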

Further reading, in Swedish at the moment: What is an LLM and What It Is Not: A Primer for Understanding AI Beyond the Buzzwords


Bias and cultural influence

All language models are shaped by the data they are trained on. This data contains cultural values, norms, and historical imbalances. These patterns are reproduced in the answers and can be amplified depending on how the model is constructed and fine-tuned.

Stanford's 2024 AI Index Report shows that the same question can generate systematically different answers depending on the model, even when the prompt is identical. This is particularly evident in politically sensitive or culturally loaded questions.

Bias can arise from:

  1. Data collection where certain sources, languages, or perspectives dominate
  2. Model fine-tuning that reflects the worldview of its developers
  3. Prompt interpretation where the wording steers the model toward certain types of answers

Further reading, in Swedish at the moment: Bias in AI: Weightings that shape our view of the world


Affirmative language is not proof of truth

Language models are trained to produce responses that appear relevant, coherent, and linguistically correct. The affirmative tone is part of the model’s design, as it creates the impression that the question has been well answered. This, however, is not proof that the content is true or accurate.

"What you feel is true, and what your language model convinces you is true, does not mean you are engaged in a dialogue that refers to an overarching truth."

To manage this, the user should:

  • Ask the model to provide and link to sources.
  • Request alternative perspectives or counterarguments.
  • Verify the information against independent and trusted sources.

A critical approach means treating the model’s output as a starting point for further analysis, not as a final answer.
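As a practical illustration of the checklist above, the sketch below shows one way to build those requests into a prompt via an API call. It assumes the official openai Python client with an API key in the OPENAI_API_KEY environment variable; the model name "gpt-4o-mini" and the wording of the instructions are placeholders to adapt to your own setup, and any sources the model lists must still be verified independently.

```python
# Sketch of a prompt that asks for sources and counterarguments up front.
# Assumes the official `openai` Python package and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

question = "What are the main drivers of urban air pollution in Europe?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "For every claim, name the source you are relying on and, "
                "where possible, a link. Then list at least two alternative "
                "perspectives or counterarguments. Flag anything you are "
                "uncertain about instead of guessing."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
# The cited sources are themselves generated text: follow the links and
# check them against independent, trusted sources before acting on them.
```

Prompting for sources and counterarguments improves the starting point, but it does not replace the human verification step described above.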


How to test your own competence and understanding

To avoid becoming a passive recipient of the model’s output, you need to actively test and develop your own understanding:

  • Recreate the reasoning yourself: Rewrite the answer in your own words and see if it still holds logically.
  • Formulate control questions: Ask follow-up questions that force the model to specify, exemplify, or problematize.
  • Search for contradictory information: Actively look for data or arguments that challenge the model’s answer.
  • Discuss in a team: Have multiple people analyse the same output and compare interpretations.

This strengthens both individual source criticism and the organisation’s collective decision-making capacity.


Risks of incorrect use

When language models are used without the necessary competence, the following risks can arise:

  • Decision bias: Decisions are based on probability-generated text presented as fact.
  • Competence drift: Organisations gradually lose the ability to analyse and reason independently.
  • Epistemic erosion: The line between verified knowledge and generated text becomes blurred.
  • Overconfidence in output: Well-formulated answers are perceived as more accurate than they are.

The OECD AI Principles (2019) and the EU AI Act (2024) emphasise the need for human control and accountability when AI is used in matters that affect people and society.


Why decision-makers need to understand model structure

A language model is always shaped by bias, reinforcement, and cultural context from its training data and fine-tuning. These factors influence which perspectives are highlighted and which are omitted.

In decision-making, it is essential to understand this structure. Treating the model’s output as objective risks basing decisions on material that is statistically probable but not aligned with the actual context.

An LLM can provide valuable support by analysing, structuring, and inspiring, but support is not the same as a decision. Decisions require human judgement, source criticism, and awareness that the model reflects the context it was shaped by.


Competence required to interpret answers

To use language models as decision support, users need to develop competence in:

  • Model understanding: Knowledge of the model’s training, data sources, and fine-tuning.
  • Source criticism: Cross-checking with verified, independent sources.
  • Cultural reading: The ability to identify which perspectives are present and which are absent in the answer.
  • Cognitive endurance: The ability to analyse reasoning without handing over the entire process to the model.

Further reading, in Swedish at the moment: When AI Paves the Way: Reflections on the shift we are in


Recommendations for responsible use

Individual users

  1. Validate critical information against external, independent sources.
  2. Identify potential bias in the answers.
  3. Understand which model and version is being used, including its training and fine-tuning.
  4. Develop your own ability to analyse and interpret.

Organisations

  1. Establish clear internal guidelines for how and when LLMs are used.
  2. Conduct training on both the technical aspects and the critical interpretation of AI output.
  3. Document how models are used in decision-making processes.
  4. Combine AI analyses with expert review.
  5. Monitor how AI use affects decision-making capacity and competence levels over time.

Navigating truth in the LLM era

From probability to sustainable decision-making

To use language models as an asset in decision-making, we need to understand their fundamentals, acknowledge their limitations, and actively work to interpret and validate their output. Allowing probability to replace the examination of factual conditions risks weakening the competence that makes decisions sustainable in the long term.

In the LLM era, the real competitive advantage does not lie in access to AI but in the ability to combine the model’s computational power with human cognitive integrity, cultural understanding, and expert judgement. This is the foundation for decision-making that remains steady even as technology continues to evolve.


References and further reading

  • Stanford University (2024). AI Index Report.
    Comprehensive annual report showing, among other things, how answers from different LLMs vary depending on the model, version, and settings.
    https://aiindex.stanford.edu

  • OECD (2019). OECD Principles on Artificial Intelligence.
    International framework for responsible AI, focusing on transparency, human control, and robustness.
    https://oecd.ai/en/ai-principles

  • European Union (2024). AI Act.
    EU legislation regulating the use of AI, with requirements for risk management, transparency, and human oversight.
    https://artificialintelligenceact.eu

  • UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence.
    Global recommendation emphasising cultural diversity, fairness, and source criticism in AI use.
    https://unesdoc.unesco.org/ark:/48223/pf0000381137

  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots.
    Academic paper describing how language models can produce convincing yet incorrect or biased information.
    https://dl.acm.org/doi/10.1145/3442188.3445922
