| url | https://doi.org/10.3390/psycholint7030078 |
|---|---|
| raw | raw/Brailas_psycholint-07-00078-v2.pdf |
TL;DR: The most epistemologically sophisticated critique of AI-assisted qualitative analysis in the corpus. Brailas argues that using AI to outsource thematic analysis is not merely inefficient — it is epistemologically incoherent within interpretive traditions, and risks destroying the reflexive, relational foundations of qualitative inquiry. The alternative: AI as abductive heuristic partner, not replacement coder.
## Problem
The dominant narrative in AI-assisted qualitative research is one of efficiency: AI makes coding faster, cheaper, and more consistent. Brailas’s problem with this narrative is not that it’s wrong on its own terms, but that it’s answering the wrong question. The efficiency gain is real — but it comes at the cost of what qualitative research is actually for.
The problem Brailas addresses is epistemological: most AI-assisted qualitative research implicitly imports a post-positivist framework (reliability, replicability, objectivity as aspirations) into research traditions (constructivism, second-order cybernetics, social constructionism) where those values are not merely irrelevant but actively in tension with the research goals. When a constructionist researcher uses AI to achieve high intercoder reliability, they are not doing their research better; they are doing a different kind of research.
## Approach
This is a conceptual-methodological review rather than an empirical study. Brailas synthesizes the emerging AI-in-qualitative-research literature through two theoretical lenses: relational and social constructionist epistemologies, and second-order cybernetics (the tradition that treats researchers as participants in the phenomena they study, not detached observers).
The paper is organized around a fundamental distinction: small q qualitative research (positivist/post-positivist; prioritizes reliability, replicability, measurable output) versus Big Q qualitative research (interpretivist, social constructionist, critical; meaning is co-constructed, situated, reflexive). This distinction, drawn from Kidder & Fine (1987) and developed by Braun & Clarke, is Brailas’s diagnostic tool. The argument: AI integrates naturally into small q work; its integration into Big Q work requires fundamental reconceptualization.
The paper’s positive proposal — AI as abductive heuristic partner — is operationalized in a set of seven best-practice guidelines and a list of reflexive prompt formulations.
## AI’s role
AI is positioned in two ways: as a threat (when used to outsource interpretive labor) and as a heuristic partner (when used abductively to surface contradictions, silences, and unexpected patterns). The threat is not the technology itself but the epistemological assumptions that typically accompany its use.
The paper’s most important claim: LLMs generate the most statistically probable continuation of a pattern. This is not a bug; it is the design. But it means AI systematically produces the expected, the dominant, the already-known, which is precisely what interpretive qualitative research needs to disrupt. This is epistemic flattening as a structural feature of the technology, not a correctable limitation.
## Epistemological stance
Explicitly relational and social constructionist, with second-order cybernetics as an additional frame. Brailas treats the researcher’s positionality not as a source of bias to be controlled (the post-positivist view) but as a constitutive resource — the researcher’s cultural, relational, and historical embeddedness is what enables rich, situated understanding.
This makes the paper distinctive in the corpus: it is the only source that grounds its critique in a fully articulated epistemological position rather than in methodological caution or ethical concern. The critique of AI outsourcing follows logically from the epistemological premises; it is not merely an intuition dressed up as argument.
## Rigor and trustworthiness
Within its own framework, the paper is rigorous. The conceptual argument is internally consistent, the epistemological position is explicitly stated, and the practical implications follow from the argument rather than being bolted on. The best-practice guidelines are grounded in the theoretical analysis rather than being generic.
The paper explicitly acknowledges its limitation: it is anticipatory and speculative, written at a moment when the research terrain is still forming. It makes no pretense of comprehensiveness.
## Limitations
The paper’s scope is primarily Big Q traditions within psychology — interpretative phenomenological analysis, social constructionism, second-order cybernetics. Small q researchers will find the critique less directly relevant. The paper also does not engage deeply with the empirical literature: it cites the emerging studies on AI reliability but does not analyze their methodology in detail. Brailas’s claims about what AI can and cannot do are theoretical rather than empirically tested.
The seven best practices are useful but somewhat general. The reflexive prompt examples (“What contradictions or silences do you see?”) point in the right direction but leave substantial work to the practitioner.
## Connections
- epistemic-flattening — Brailas provides the most developed theoretical account of this concept
- llm-qualitative-research — the landscape this paper critically intervenes in
- prompt-engineering — redefined here as a reflexive, abductive practice rather than technical instruction-writing
- epistemology — the paper’s central contribution is epistemological; it maps the philosophical terrain
- human-ai-collaboration — the heuristic partner model proposed here
- carlsen-ralund-computational-grounded-theory-2022 — parallel methodological critique from a different tradition
- chatzichristos-ai-positivism-2025 — empirical evidence for Brailas’s concern about positivism creep
- contested-claims — the small q vs. Big Q diagnosis is contested; some researchers reject the distinction as too rigid
## What links here
- AI Research Ethics
- Anis & French (2023) — Efficient, Explicatory, and Equitable: Why Qualitative Researchers Should Embrace AI, but Cautiously
- Carlsen & Ralund (2022) — Computational Grounded Theory Revisited: From Computer-Led to Computer-Assisted Text Analysis
- Chatzichristos (2025) — Qualitative Research in the Era of AI: A Return to Positivism or a New Paradigm?
- Christou (2023) — How to Use Artificial Intelligence (AI) as a Resource, Methodological and Analysis Tool in Qualitative Research?
- Computational Grounded Theory
- Contested Claims
- Costa et al. (2025) — AI as a Co-researcher in the Qualitative Research Workflow: Transforming Human-AI Collaboration
- Dahal (2024) — How Can Generative AI Enhance or Hinder Qualitative Studies? A Critical Appraisal from South Asia, Nepal
- Davison et al. (2024) — The Ethics of Using Generative AI for Qualitative Data Analysis
- De Paoli (2026) — Why We Should Reject to Reject the Use of Generative AI in Qualitative Analysis
- Epistemic Flattening
- Epistemology — Stances Across the Literature
- Friese (2026) — From Coding to Conversation: A New Methodological Framework for AI-Assisted Qualitative Analysis
- Goyanes et al. (2025) — Thematic Analysis of Interview Data with ChatGPT: Designing and Testing a Reliable Research Protocol
- AI in Qualitative Research
- Index
- Jowsey et al. (2025) — We Reject the Use of Generative AI for Reflexive Qualitative Research
- LLMs for Qualitative Research
- Montrosse-Moorhead (2023) — Evaluation Criteria for Artificial Intelligence
- Naeem et al. (2025) — Thematic Analysis and Artificial Intelligence: A Step-by-Step Process for Using ChatGPT
- Nicmanis & Spurrier (2025) — Getting Started with AI-Assisted Qualitative Analysis: An Introductory Guide
- Paulus & Marone (2024) — "In Minutes Instead of Weeks": Discursive Constructions of Generative AI and Qualitative Data Analysis
- Qualitative AI Methods — A Living Taxonomy
- Reeping et al. (2025) — Interrogating the Use of LLMs in Qualitative Research Using the Q3 Framework
- Übellacker (2024) — AcademiaOS: Automating Grounded Theory Development with Large Language Models
- Validity and Trustworthiness
- Wheeler (2026) — Technological Reflexivity in Practice: How MAXQDA, NVivo, and ChatGPT Shape Qualitative Survey Analysis
- Williams (2024) — Paradigm Shifts: Exploring AI's Influence on Qualitative Inquiry and Analysis
- Wise et al. (2026) — Why AI is Not the Enemy: Trustworthy AI-in-the-Loop Analysis
- Xu (2026) — Doing Thematic Analysis in the Age of Generative AI: Practices, Ethics and Reflexivity