| TL;DR | The tendency of LLMs to reproduce dominant, statistically probable narratives — flattening the interpretive diversity that qualitative research depends on. |
|---|---|
What it means
LLMs are trained to generate the most probable continuation of a text sequence. This structural feature means they are systematically biased toward dominant patterns in their training data. When used for qualitative analysis, this creates a specific risk: the AI will identify and foreground themes that are common, well-represented, and culturally dominant — while suppressing marginal, context-specific, or counter-hegemonic meanings.
(brailas-ai-qualitative-research-2025) calls this “epistemic flattening” — the AI doesn’t just fail to find certain meanings; it actively makes them less visible by organizing the data around what is statistically most likely.
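A toy illustration of the mechanism, with made-up theme frequencies standing in for model probabilities (this is not how any particular model is implemented): if the analysis always returns the most probable pattern, marginal patterns are not misclassified, they are simply never selected.

```python
# Toy illustration of probability-driven selection (frequencies are invented).
# Themes stand in for "continuations"; probabilities stand in for model scores.
theme_probs = {
    "work-life balance": 0.62,               # culturally dominant framing
    "career ambition": 0.30,
    "undocumented-worker precarity": 0.05,   # marginal, context-specific
    "union organizing": 0.03,
}

# Greedy selection (argmax) returns the dominant theme every single time.
most_probable = max(theme_probs, key=theme_probs.get)
print(most_probable)  # 'work-life balance', on every run

# Even a top-k cutoff (k=2) drops the marginal themes from consideration entirely.
top_k = sorted(theme_probs, key=theme_probs.get, reverse=True)[:2]
print(top_k)  # ['work-life balance', 'career ambition']
```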
Why this matters more than reliability
The usual metrics for evaluating AI-assisted qualitative research — intercoder agreement, precision, the Jaccard index — measure consistency and accuracy against a reference standard. They cannot detect epistemic flattening because:
- The reference standard itself may reflect dominant perspectives
- High agreement between AI and human coders can still systematically miss minority voices
- The problem is not that the AI is wrong — it’s that it’s limited to what’s already known
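A minimal sketch of why overlap metrics miss this, using hypothetical code labels and the standard Jaccard index (size of the intersection over size of the union): two coders can agree almost perfectly while both omit a minority-voice theme, and the score has no way to register what neither set contains.

```python
# Hypothetical code sets from a human coder and an AI assistant (labels invented).
human_codes = {"burnout", "flexibility", "managerial support", "remote isolation"}
ai_codes = {"burnout", "flexibility", "managerial support", "remote isolation", "productivity"}

def jaccard(a: set, b: set) -> float:
    """Standard Jaccard index: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

print(round(jaccard(human_codes, ai_codes), 2))  # 0.8, which reads as strong agreement

# A theme voiced only by a few marginalized participants (say, "caregiving without papers")
# appears in neither set. The score cannot drop for something both coders missed.
```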
Qualitative research’s distinctive value is discovering unexpected patterns, surfacing contradictions, and representing perspectives that aren’t dominant. Epistemic flattening directly threatens this.
The alternative
(brailas-ai-qualitative-research-2025) proposes using AI abductively — asking it to surface contradictions, silences, and departures from expected patterns rather than confirming what is probable. (anis-french-ai-qualitative-research-2023) makes a related point via the “explicatory” argument: AI failures (algorithmic cases that don’t fit the coding scheme) are analytically valuable precisely because they flag the unusual.
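To make the contrast concrete, here is one hypothetical way to phrase a confirmatory versus an abductive instruction to a model. The wording is illustrative only, not a protocol prescribed by either paper.

```python
# Hypothetical prompt wordings; the {excerpts} placeholder would hold the actual data.
confirmatory_prompt = (
    "Identify the main themes in the following interview excerpts "
    "and summarize each one.\n\n{excerpts}"
)

abductive_prompt = (
    "Read the following interview excerpts. Instead of listing the main themes, "
    "identify: (1) passages that contradict or complicate the obvious reading, "
    "(2) topics that are conspicuously absent or only hinted at, and "
    "(3) anything that departs from what a reader would expect.\n\n{excerpts}"
)
```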
(carlsen-ralund-computational-grounded-theory-2022)'s CALM framework addresses this at the methodological level: human immersion in the data — not model-selected paradigmatic cases — is what qualifies a researcher to interpret meaning within a specific community.
See also
- brailas-ai-qualitative-research-2025 — source of the concept
- llm-qualitative-research — the broader context
- computational-grounded-theory — a methodological framework that partially addresses this risk
- ai-research-ethics — the political and ethical stakes of whose voices get flattened
- contested-claims — whether AI systematically misrepresents marginalized voices (Claim 5)
- validity-trustworthiness — epistemic flattening as a validity failure invisible to reliability metrics
- qualitative-ai-methods — abductive AI use as a response to flattening
- epistemology — critical epistemological tradition and its AI implications
- dellafiore-et-al-2025-expert-interviews — practitioner-level articulation: the “illusion of meaning” is what epistemic flattening looks like from inside a qualitative research practice; outputs appear interpretively valid but are algorithmically derived
What links here
- AI Research Ethics
- Anis & French (2023) — Efficient, Explicatory, and Equitable: Why Qualitative Researchers Should Embrace AI, but Cautiously
- Bennis & Mouwafaq (2025) — Advancing AI-Driven Thematic Analysis: A Comparative Study of Nine Generative Models
- Brailas (2025) — AI in Qualitative Research: Beyond Outsourcing Data Analysis to the Machine
- Carlsen & Ralund (2022) — Computational Grounded Theory Revisited: From Computer-Led to Computer-Assisted Text Analysis
- Chatzichristos (2025) — Qualitative Research in the Era of AI: A Return to Positivism or a New Paradigm?
- Computational Grounded Theory
- Contested Claims
- Costa et al. (2025) — AI as a Co-researcher in the Qualitative Research Workflow: Transforming Human-AI Collaboration
- Dahal (2024) — How Can Generative AI Enhance or Hinder Qualitative Studies? A Critical Appraisal from South Asia, Nepal
- Dellafiore et al. (2025) — Artificial Intelligence in Qualitative Research: Insights From Experts via Reflexive Thematic Analysis
- Empirical Findings
- Epistemology — Stances Across the Literature
- Friese (2026) — From Coding to Conversation: A New Methodological Framework for AI-Assisted Qualitative Analysis
- AI in Qualitative Research
- Human-AI Collaboration — Frameworks and Models
- Index
- Jowsey et al. (2025) — We Reject the Use of Generative AI for Reflexive Qualitative Research
- LLMs for Qualitative Research
- Montrosse-Moorhead (2023) — Evaluation Criteria for Artificial Intelligence
- Nelson (2020) — Computational Grounded Theory: A Methodological Framework
- Nguyen-Trung & Nguyen (2026) — Narrative-Integrated Thematic Analysis (NITA)
- Nicmanis & Spurrier (2025) — Getting Started with AI-Assisted Qualitative Analysis: An Introductory Guide
- Paulus & Marone (2024) — "In Minutes Instead of Weeks": Discursive Constructions of Generative AI and Qualitative Data Analysis
- Prompt Engineering
- Qualitative AI Methods — A Living Taxonomy
- Sakaguchi et al. (2025) — Evaluating ChatGPT in Qualitative Thematic Analysis in the Japanese Clinical Context
- Übellacker (2024) — AcademiaOS: Automating Grounded Theory Development with Large Language Models
- Validity and Trustworthiness
- Wheeler (2026) — Technological Reflexivity in Practice: How MAXQDA, NVivo, and ChatGPT Shape Qualitative Survey Analysis
- Williams (2024) — Paradigm Shifts: Exploring AI's Influence on Qualitative Inquiry and Analysis
- Wise et al. (2026) — Why AI is Not the Enemy: Trustworthy AI-in-the-Loop Analysis
- Xu (2026) — Doing Thematic Analysis in the Age of Generative AI: Practices, Ethics and Reflexivity