
AI and quality

from micro level to systemic risk

Now that AI has been part of working life for some time, more reports are emerging. The first aggregated data show that the technology can increase efficiency, but quality does not always follow. In some cases, we even see declines.

On a micro level, AI can deliver good results in individual tasks. On a macro level, risks appear that extend far beyond single processes, from an eroded skills supply to signs of systemic instability.

To understand the development, we can look at current studies and reports from different sectors. They give a picture of how quality is affected in practice and what patterns recur regardless of industry.

examples of quality challenges

Erroneous and oversimplified conclusions
A study published in Royal Society Open Science shows that large language models often oversimplify and misrepresent scientific findings. They are almost five times more likely than humans to oversimplify conclusions from scientific studies, especially in medicine where this can lead to dangerous interpretations. The risk increases when models are prompted to be precise – they then become twice as likely to overgeneralise.
Source: Peters U., Chin-Yee B. “Generalization bias in large language model summarization of scientific research.” Royal Society Open Science. 2025. DOI: 10.1098/rsos.241776.

Loss of human skills
A study in The Lancet Gastroenterology & Hepatology found that AI-assisted colonoscopy reduced physicians’ ability to detect adenomas when AI was not in use, with adenoma detection rates dropping from 28.4% to 22.4%. The results suggest that dependence on the technology reduces human precision over time.
Source: Budzyń M. et al. “Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study.” The Lancet Gastroenterology & Hepatology. 2025. DOI: 10.1016/S2468-1253(25)00133-5.
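For scale, the reported decline can also be expressed as a relative reduction. A quick check with plain arithmetic on the figures above:

```python
# Adenoma detection rates reported in the study (percentages)
before, after = 28.4, 22.4

absolute_drop = before - after            # 6.0 percentage points
relative_drop = absolute_drop / before    # fraction of the original rate

print(f"{relative_drop:.1%}")  # roughly a 21% relative decline
```

In other words, the six-percentage-point absolute drop corresponds to losing about a fifth of the original detection performance.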

Negative impact on customer service
A 2025 benchmark report from Quantum Metric found that 42% of UK consumers admitted being ruder to AI chatbots than to human agents, 57% reported greater frustration when interacting with AI, and 40% felt the service quality was worse.
Source: Quantum Metric. “2025 Contact Center Benchmark Report.” Retrieved 2025-08-13.

Poor data quality
According to an analysis by Shelf.io, as many as 85% of AI projects fail, with poor data quality a leading cause. Issues such as inaccuracy, incompleteness, and inconsistency directly affect the models’ ability to deliver reliable results.
Source: Shelf.io. “Why Bad Data Quality Kills AI Performance: The Hidden Truth You Need to Know.” 2025-04-09.
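The three issue types named above (incompleteness, inconsistency, and duplication feeding inaccuracy) can be made concrete with minimal checks. A sketch, with field names and record structure as illustrative assumptions rather than anything from the source:

```python
def data_quality_report(records, required_fields):
    """Minimal data-quality checks over a list of dict records (illustrative sketch)."""
    # Completeness: records with any required field missing or empty
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )

    # Duplication: exact duplicate records
    keys = [tuple(sorted(r.items())) for r in records]
    duplicates = len(keys) - len(set(keys))

    # Consistency: fields whose values mix types across records
    seen_types, inconsistent = {}, set()
    for r in records:
        for field, value in r.items():
            if value is None:
                continue
            expected = seen_types.setdefault(field, type(value))
            if type(value) is not expected:
                inconsistent.add(field)

    return {
        "missing": missing,
        "duplicates": duplicates,
        "inconsistent_fields": sorted(inconsistent),
    }
```

Running such checks before training or deployment makes the abstract categories measurable, which is a precondition for acting on them.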

Declining performance over time
AI models lose precision if they are not updated regularly, a phenomenon known as model aging. The risk increases if models are trained on synthetic content from earlier versions, which can lead to model collapse.

Sources:

  • Nagar S. et al. “The problem of AI model aging.” Scientific Reports. 2022;12:11677.
  • Shumailov I. et al. “The Curse of Recursion: Training on Generated Data Makes Models Forget.” Nature. 2024;627:59–65.
  • IBM. “What is model collapse?” IBM Think. Retrieved 2025-08-13.
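As a toy illustration of the feedback loop behind model collapse (not the method of any cited study), one can repeatedly refit a simple Gaussian “model” to samples drawn from its own previous fit. Each generation trains only on synthetic output of the last, so estimation noise compounds and the fitted distribution drifts away from the original data:

```python
import random
import statistics

def collapse_demo(generations=100, n_samples=20, seed=0):
    """Toy sketch of recursive training on synthetic data.

    Start from a 'true' standard normal, then in each generation draw
    n_samples from the current fitted Gaussian and refit mean/stddev to
    those synthetic samples alone. Returns the stddev per generation.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    history = [sigma]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)      # refit to synthetic data only
        sigma = statistics.stdev(samples)   # no fresh real data enters
        history.append(sigma)
    return history
```

With small sample sizes the fitted variance tends to wander and degenerate over many generations, which is the qualitative mechanism the model-collapse literature describes at much larger scale.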

Over-reliance on systems (automation bias)
Automation bias means that people tend to trust automated systems even when they are wrong. Research shows that this occurs in many environments, from healthcare to legal decisions, and can lead to two types of errors: omission errors when people fail to act because the system does not alert them, and commission errors when people follow incorrect system suggestions.
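The two error types have a precise structure, which a minimal sketch can capture. The predicate names and conditions below are illustrative assumptions, not definitions from the cited reviews:

```python
def classify_automation_error(problem_present, system_flagged, user_acted):
    """Classify an automation-bias error for one decision (illustrative sketch).

    omission   - a real problem, the system stays silent, and the user
                 therefore fails to act.
    commission - no real problem, but the user follows the system's
                 incorrect suggestion to act.
    """
    if problem_present and not system_flagged and not user_acted:
        return "omission"
    if not problem_present and system_flagged and user_acted:
        return "commission"
    return None  # no automation-bias error in this decision
```

Framing the errors this way makes clear that both stem from the same cause: the user substitutes the system's output for their own verification.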

A systematic review in Journal of the American Medical Informatics Association shows how automation bias affects clinical decisions, often without users being aware of it. According to Forbes, the bias is particularly problematic when AI is used in high-risk environments, as trust in the systems can be stronger than in colleagues’ assessments. Studies also show that people often accept AI suggestions without critical review, even when they know the systems can make mistakes.

Beyond these effects on decision-making, there is now evidence that automation bias can also have direct cognitive consequences.

A preprint study from MIT Media Lab, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (Kosmyna et al., 2025), highlights immediate and measurable effects on the brain when AI is used as an intermediary in writing tasks.

EEG data show that participants who used AI (LLM) displayed weaker neural connectivity and less cognitive presence than those who wrote without technological assistance, or even those who used a search engine.

When shifting from AI back to their own abilities (LLM → Brain-only), the reduced activity persisted, while the opposite group (Brain-only → AI) demonstrated improved brain activity and memory capacity.

The LLM group also showed the lowest sense of ownership of their own texts and the weakest memory of their own formulations.

Sources:

  • Lyell D., Coiera E. “Automation bias and verification complexity: a systematic review.” Journal of the American Medical Informatics Association. 2017;24(2):423–431. doi:10.1093/jamia/ocw105
  • Rosbach N., Lüthi T., Jongen J., Bless J.J. “Automation bias in AI-assisted medical decision-making under time pressure in computational pathology.” arXiv. 2024. doi:10.48550/arXiv.2411.00998
  • Goddard K., Roudsari A., Wyatt J.C. “Automation bias: a systematic review of frequency, effect mediators, and mitigators.” Journal of the American Medical Informatics Association. 2012;19(1):121–127. doi:10.1136/amiajnl-2011-000089
  • Kosmyna N., et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv. 2025. doi:10.48550/arXiv.2506.08872

what is required for sustainable quality

AI can create value when implemented with the right data, clear quality metrics, and complemented by human expertise. Without these conditions, there is a risk of quality loss, both at a detailed level and across entire systems.
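One common way to operationalise “complemented by human expertise”, sketched here as an assumption rather than a method from the sources, is confidence-gated routing: predictions that clear a quality threshold are accepted automatically, while the rest are escalated to a human expert for review.

```python
def route_prediction(label, confidence, threshold=0.9):
    """Confidence-gated human-in-the-loop routing (illustrative sketch).

    The threshold is a hypothetical quality metric; in practice it should
    be calibrated against measured error rates for the task at hand.
    """
    if confidence >= threshold:
        return ("auto", label)          # accepted without human review
    return ("human_review", label)      # escalated to a human expert
```

The design choice matters: the threshold turns an implicit trust decision into an explicit, auditable quality metric.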

Today there is a large AI bubble in which new products and solutions are being developed at high speed. Much of the focus is on what is technically possible: how models can be refined and functions added. At the same time, AI tools are being implemented on a massive scale in societal processes, often with the aim of optimising and increasing efficiency.

One does not exclude the other – innovation and risk management can go hand in hand. But the race needs governance if we are to ensure that development contributes to a society with sustainable quality and long-term skills supply.


References

  1. Peters, U., & Chin-Yee, B. “Generalization bias in large language model summarization of scientific research.” Royal Society Open Science. 2025. doi:10.1098/rsos.241776

  2. Budzyń, M., et al. “Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study.” The Lancet Gastroenterology & Hepatology. 2025. doi:10.1016/S2468-1253(25)00133-5

  3. Quantum Metric. “2025 Contact Center Benchmark Report.” Retrieved 2025-08-13.

  4. Shelf.io. “Why Bad Data Quality Kills AI Performance: The Hidden Truth You Need to Know.” 2025-04-09.

  5. Nagar, S., et al. “The problem of AI model aging.” Scientific Reports. 2022;12:11677.

  6. Shumailov, I., et al. “The Curse of Recursion: Training on Generated Data Makes Models Forget.” Nature. 2024;627:59–65.

  7. IBM. “What is model collapse?” IBM Think. Retrieved 2025-08-13.

  8. Lyell, D., & Coiera, E. “Automation bias and verification complexity: a systematic review.” Journal of the American Medical Informatics Association. 2017;24(2):423–431. doi:10.1093/jamia/ocw105

  9. Rosbach, N., Lüthi, T., Jongen, J., & Bless, J.J. “Automation bias in AI-assisted medical decision-making under time pressure in computational pathology.” arXiv. 2024. doi:10.48550/arXiv.2411.00998

  10. Goddard, K., Roudsari, A., & Wyatt, J.C. “Automation bias: a systematic review of frequency, effect mediators, and mitigators.” Journal of the American Medical Informatics Association. 2012;19(1):121–127. doi:10.1136/amiajnl-2011-000089

  11. Kosmyna, N., et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv. 2025. doi:10.48550/arXiv.2506.08872

Follow Erigo on LinkedIn

A part of Sweden's infrastructure for skills development.