| TL;DR | The 43 sources in this corpus span at least five distinct epistemological traditions — post-positivist, interpretivist/constructionist, critical, pragmatist, and post-humanist — and the AI methods each tradition endorses differ radically. The small-q / Big-Q distinction is the fault line that organizes the most heated debates; a growing body of scholarship drawing on assemblage theory, distributed cognition, and posthumanist frameworks now contests the claim that meaning-making is "exclusively human," arguing that the claim is itself an epistemological position, not a methodological given. |
|---|---|
Why epistemology matters for AI methods
The same tool — ChatGPT, Copilot, an open-source LLM — performs very different functions depending on the epistemological framework that surrounds it. In a post-positivist study, AI is a reliable coder whose output is measured by κ and compared to a human standard. In an interpretivist study, AI is a thinking partner whose output is evaluated by how well it disrupts the researcher’s assumptions. In a critical study, AI is a potential reproducer of dominant narratives that must be actively resisted.
nicmanis-spurrier-ai-guide-2025 makes this explicit: the first decision in any AI-assisted research project should be to identify the research’s epistemological commitments, then map those commitments to appropriate AI methods. Getting the method right requires getting the epistemology right first.
The failure to do this is what chatzichristos-ai-positivism-2025 calls “positivism creep” — researchers in interpretivist traditions uncritically adopting AI methods designed for post-positivist measurement, without examining the epistemological mismatch.
The five main stances
Post-positivist
Core assumption: Reality exists and can be measured, but measurement is fallible and probabilistic. Reliability and validity are achievable through systematic method.
AI role: Coder, classifier, pattern detector. AI is a measurement instrument evaluated by reliability metrics (κ, Jaccard). The goal is scaling systematic analysis while maintaining measurement quality.
Representative sources:
- bijker-chatgpt-qca-2024 — GPT-3.5 Turbo as coder; κ as the criterion of success
- bennis-ai-thematic-analysis-2025 — nine models benchmarked; Jaccard = 1.00 as the achievement
- prescott-ai-thematic-analysis-2024 — reliability and speed as the primary outcomes
- salazar-gpt4-qualitative-2025 — GPT-4 evaluated against expert human coding
- montrosse-moorhead-ai-evaluation-2023 — Teasdale’s criteria framework; systematic criteria for AI evaluation
Hallmark: Quantified intercoder agreement metrics appear prominently. AI is treated as a measurement tool whose precision can be evaluated. “Reliability” is the operative concept. See intercoder-agreement.
Risks in this corpus: Reliability without validity (validity-trustworthiness); treating high κ as license to skip human verification; missing low-frequency codes that are analytically significant.
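The reliability metrics that anchor this tradition (κ, Jaccard) can be made concrete. The sketch below is a minimal illustration, not drawn from any of the cited studies: `human` and `ai` are hypothetical code assignments for ten text segments, compared with Cohen's κ (chance-corrected agreement) and Jaccard similarity (overlap of the code sets used).

```python
# Minimal illustration of the two agreement metrics named above.
# All data here is hypothetical; the cited studies use their own datasets.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two label sequences."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of segments coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: probability of chance agreement given each
    # coder's marginal label frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    if expected == 1.0:  # degenerate case: both coders used a single label
        return 1.0
    return (observed - expected) / (1 - expected)

def jaccard(codes_a, codes_b):
    """Jaccard similarity between the sets of codes each coder applied."""
    a, b = set(codes_a), set(codes_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical human vs. AI coding of ten segments.
human = ["theme1", "theme2", "theme1", "theme1", "theme3",
         "theme2", "theme1", "theme3", "theme2", "theme1"]
ai    = ["theme1", "theme2", "theme1", "theme2", "theme3",
         "theme2", "theme1", "theme3", "theme1", "theme1"]

print(round(cohens_kappa(human, ai), 3))  # → 0.677
print(round(jaccard(human, ai), 3))       # → 1.0
```

Note what the two numbers capture: the coders disagree on two of ten segments (κ = 0.677), yet their code *sets* are identical (Jaccard = 1.0) — which is why a high Jaccard score, like the 1.00 reported in bennis-ai-thematic-analysis-2025, measures overlap of the codebook, not segment-level agreement, and why neither metric speaks to the validity concerns raised below.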
Interpretivist / Constructionist
Core assumption: Reality is constructed through meaning. Knowledge is context-dependent, perspectival, and produced in interaction between researcher and data. Qualitative research’s distinctive value is its capacity to recover meaning that quantitative approaches miss.
AI role: Assistant at best; threat to interpretive validity at worst. AI can handle mechanical tasks (transcription, organization, initial theme sorting) but cannot perform the meaning-making that qualitative research requires.
Representative sources:
- brailas-ai-qualitative-research-2025 — relational/constructionist; AI as heuristic partner, not replacement; meaning as co-constructed
- xu-ai-thematic-analysis-2026 — posthumanist constructionism; reflexive TA practiced alongside ChatGPT
- friese-caai-framework-2026 — hermeneutic; AI as dialogic partner; coding replaced by structured dialogue
- carlsen-ralund-computational-grounded-theory-2022 — sociological interpretivism; human “qualified understanding” as non-negotiable
- chatzichristos-ai-positivism-2025 — concern that AI adoption is importing positivist epistemology into interpretivist disciplines
- wheeler-technological-reflexivity-2026 — distributed reflexivity; technological reflexivity as interpretive practice
- wise-et-al-2026-ai-not-the-enemy — Guba & Lincoln relativist paradigm; argues interpretivist commitments can be operationalized through LLM architectural properties rather than despite them; most technically grounded interpretivist case for AI
- jowsey-et-al-2025-we-reject — maximalist interpretivist/humanist position; GenAI categorically incompatible with Big-Q reflexive research; meaning-making is irreducibly human
Hallmark: Reflexivity, positionality, and the researcher’s relationship to data are foregrounded. “Validity” is replaced or supplemented by trustworthiness criteria (credibility, transferability, dependability, confirmability). The researcher’s interpretive competence — their immersion in the field, their theoretical background, their analytic judgment — is treated as irreducible.
Risks in this corpus: Reactionary AI rejection that ignores genuine efficiency gains; failure to distinguish between AI as threat to interpretation and AI as threat to bad methodological practice; unwarranted confidence in human-only coding.
Critical
Core assumption: Knowledge is produced within relations of power. Research can and should challenge dominant narratives, amplify marginalized voices, and make visible what is structurally suppressed.
AI role: Suspected reproducer of hegemonic patterns; potentially useful if actively resisted and its failures treated as analytically informative.
Representative sources:
- paulus-marone-qdas-discourse-2024 — critical discourse analysis of QDAS marketing; AI positioned as institutional force shaping what qualitative research means
- dahal-genai-qualitative-nepal-2024 — post-colonial; equity and access as the central AI concerns; Global South perspective
- anis-french-ai-qualitative-research-2023 — equity as one of the 3 Es; AI failures as data about whose voices are systematically excluded
- epistemic-flattening — names what critical theorists would predict: AI reproduces dominant, not marginal, meanings
- montrosse-moorhead-ai-evaluation-2023 — equity criterion (differential AI performance across subgroups) as the most underemphasized in the corpus
Hallmark: Attention to who benefits from AI adoption and who is harmed; examination of how AI encodes and reproduces structural bias; concern with access and infrastructure inequalities. The critical literature reads AI not just as a tool but as a political actor.
Risks in this corpus: Critical stance sometimes collapses into blanket rejection without examining specific practices; equity concerns sometimes remain abstract rather than being operationalized in specific research designs.
Pragmatist
Core assumption: The value of a method is its fitness for purpose. Epistemological purity is less important than solving real research problems. Methods should be selected and evaluated against practical outcomes.
AI role: Whatever works for the task at hand, evaluated against specific criteria of success. Eclecticism about methods; systematic about criteria.
Representative sources:
- montrosse-moorhead-ai-evaluation-2023 — Teasdale’s framework; criteria derived from practice rather than imposed from theory
- reeping-llm-quality-framework-2025 — Q3 Framework applied to LLMs; quality evaluated across 8 dimensions against specific research contexts
- nicmanis-spurrier-ai-guide-2025 — pragmatist mapping of research values to AI method choices
- hamilton-ai-qualitative-2023 — complementarity finding; value each approach for what it does well
- dahal-genai-qualitative-nepal-2024 — transparency as the practical ethical requirement
- greenhalgh-2026-beyond-the-binary — governance-oriented pragmatism; refuses both binary positions; proposes four practical governance questions as the productive path forward
- nguyen-trung-nita-2026 — explicitly pragmatist and nonpositivist; NITA is designed for researchers between small-q and Big-Q who want interpretive depth without committing to reflexive TA’s strict human-only requirement
Hallmark: Frameworks and criteria for choosing and evaluating methods; explicit acknowledgment of trade-offs rather than commitment to a single epistemological paradigm. “Does it work for this purpose?” as the operative question.
Risks in this corpus: Without epistemological grounding, pragmatism can slide into uncritical adoption; “what works” can be defined too narrowly (speed, reliability) and miss harder-to-measure goods (interpretive depth, equity, validity).
Post-humanist
Core assumption: The boundary between human and non-human is itself a construction. Meaning is not the exclusive property of human minds but emerges from sociotechnical networks in which humans, tools, institutions, and data are all actants. The question is not whether AI “really” understands but how AI is enrolled in research practices and what those enrollments produce.
AI role: Actant in a sociotechnical network; neither tool (passive) nor replacement (autonomous) but co-constitutive participant. The relevant analysis is of the network, not the ontological status of any single node.
Representative sources:
- de-paoli-reject-rejection-2026 — ANT framework applied to the AI rejection debate; rejects human exceptionalism; treats the question of AI meaning-making as empirical, not philosophical
- brailas-ai-qualitative-research-2025 — relational/constructionist; AI as co-constitutive heuristic partner
- friese-caai-framework-2026 — dialogic; AI as interlocutor rather than instrument; epistemically symmetrical in the analytic encounter
- friese-et-al-beyond-binary-2026 — the most theoretically dense post-humanist/assemblage intervention in the corpus; four frameworks deployed simultaneously (Deleuze & Guattari’s assemblage theory, Hutchins’s distributed cognition, Barad’s posthumanist intra-action, Orlikowski’s sociomaterial entanglements) to contest the “exclusively human” meaning-making claim at its foundations
Hallmark: Latourian actor-network vocabulary; resistance to the human/non-human binary; emphasis on the sociotechnical network rather than individual researcher or individual tool. Where interpretivist scholars ask “does AI threaten the researcher’s interpretive authority?”, post-humanists ask “how does AI transform the network through which interpretation is produced?”
Risks in this corpus: Post-humanist framing can seem to sidestep the hard epistemological questions by dissolving them (if meaning is always already distributed, there’s no special problem with AI participation). Critics — especially those committed to Big-Q reflexive research — will argue that dissolving the human/non-human distinction doesn’t solve the practical problem of researcher immersion; it simply redefines the problem away.
The small-q / Big-Q fault line
The most important organizing tension in the corpus runs not between epistemological schools per se but between what Nicmanis & Spurrier call "small-q" and "Big-Q" qualitative research.
Small-q: Research that uses qualitative methods (interviews, observations, text) but shares post-positivist assumptions about systematic method, measurement, and reliability. Content analysis, QCA, survey-based thematic analysis. AI methods designed for this tradition scale well and produce measurable reliability gains.
Big-Q: Research that takes epistemological commitments to interpretation, reflexivity, and meaning-construction seriously. Reflexive TA, IPA, constructivist grounded theory, phenomenology, critical discourse analysis. AI methods designed for post-positivist traditions impose assumptions that conflict with Big-Q epistemology.
The mismatch is real and has consequences. chatzichristos-ai-positivism-2025 documents it empirically: younger researchers trained in AI tools are importing positivist assumptions into interpretivist disciplines. paulus-marone-qdas-discourse-2024 tracks it at the level of marketing discourse: QDAS companies sell AI to Big-Q researchers using language designed for small-q practices. ayik-et-al-2026-human-vs-ai-ta-tools provides behavioral confirmation: tool choice encodes epistemological orientation — ATLAS.ti AI and ChatGPT produce post-positivist output (frequency-based, high code count); QInsights and MAXQDA produce interpretivist output (dialogic, lower code count). The epistemological choice is made at tool selection, not just at method design.
dellafiore-et-al-2025-expert-interviews adds a practitioner layer: expert qualitative researchers from Italian socio-anthropological and healthcare contexts articulate AI’s risk as producing an “illusion of meaning” — outputs that appear interpretively meaningful but are algorithmically derived. The concern is not obvious error but subtle epistemological displacement that experienced researchers may not detect.
Epistemological stances and AI implications
| Stance | AI role | Key criterion | Key risk |
|---|---|---|---|
| Post-positivist | Coder / classifier | Reliability (κ, Jaccard) | Validity gap; missed minority voices |
| Interpretivist | Assistant / partner | Trustworthiness, reflexivity | Positivism creep; AI-led meaning-making |
| Critical | Object of critique | Equity, power | Blanket rejection; abstract concern |
| Pragmatist | Fit-for-purpose | Criteria alignment | Narrow definition of “works” |
| Post-humanist | Actant in network | Sociotechnical transformation | Dissolves rather than resolves hard questions |
The emerging synthesis
A tentative consensus is forming in the 2025–2026 literature around a position that is neither uncritical optimism nor blanket rejection:
- AI can legitimately assist with bounded, mechanical tasks at any epistemological position.
- AI cannot legitimately substitute for the human researcher’s interpretive competence, regardless of reliability metrics.
- The appropriate level of AI involvement varies by research tradition and must be declared transparently.
- Reflexivity about AI — including its biases, limitations, and the prompts that shaped its output — is now a methodological requirement, not an option.
This synthesis is most developed in friese-caai-framework-2026, carlsen-ralund-computational-grounded-theory-2022, brailas-ai-qualitative-research-2025, and xu-ai-thematic-analysis-2026. It remains contested — see contested-claims.
A sharper challenge to the synthesis has emerged with the Jowsey et al. open letter (419 signatories) calling for categorical rejection of GenAI in reflexive qualitative research. de-paoli-reject-rejection-2026 responds that the rejection rests on philosophical commitments (human exceptionalism, Searle’s Chinese Room) rather than methodological evidence. friese-et-al-beyond-binary-2026 extends this into a multi-framework philosophical response — the “exclusively human” claim is contestable not only through ANT but through distributed cognition, posthumanism, and sociomateriality simultaneously. The post-humanist coalition is consolidating; greenhalgh-2026-beyond-the-binary frames the debate as a governance question rather than a philosophical one. All three decline to take a binary position and propose instead that the epistemological traditions most relevant to qualitative research (distributed, relational, posthumanist) are precisely the ones that make categorical rejection hardest to sustain. The practical implications for specific research methods — whether these theoretical positions translate into methodological standards for responsible AI integration — remain underdeveloped and are the next site of theoretical work.
See also
- qualitative-ai-methods — taxonomy of approaches by AI role and epistemological fit
- human-ai-collaboration — frameworks for dividing analytic labor
- validity-trustworthiness — how rigor is conceptualized across traditions
- contested-claims — where the epistemological disagreements are sharpest; Claim 9 covers the rejection debate
- epistemic-flattening — the epistemological risk of AI-led pattern discovery
- computational-grounded-theory — CGT and CALM as case study in epistemological debate
- de-paoli-reject-rejection-2026 — post-humanist/ANT rebuttal of categorical rejection
What links here
- Ayik et al. (2026) — Human vs. AI: Evaluating TA With ChatGPT, QInsights, ATLAS.ti AI, and MAXQDA AI Assist
- Brailas (2025) — AI in Qualitative Research: Beyond Outsourcing Data Analysis to the Machine
- Contested Claims
- De Paoli (2026) — Why We Should Reject to Reject the Use of Generative AI in Qualitative Analysis
- Epistemic Flattening
- Friese, Nguyen-Trung, Powell & Morgan (2026) — Beyond Binary Positions
- Greenhalgh (2026) — Reflexive Qualitative Research and Generative AI: A Call to Go Beyond the Binary
- AI in Qualitative Research
- Human-AI Collaboration — Frameworks and Models
- Index
- Jowsey et al. (2025) — We Reject the Use of Generative AI for Reflexive Qualitative Research
- LLMs for Qualitative Research
- Qualitative AI Methods — A Living Taxonomy
- Validity and Trustworthiness