| url | https://doi.org/10.1177/00076503231163286 |
|---|---|
| raw | raw/anis-french-2023-efficient-explicatory-and-equitable-why-qualitative-researchers-should-embrace-ai-but-cautiously.pdf |
TL;DR: A compact, well-structured argument for adopting AI in qualitative research, organized around three concrete benefits — efficiency, explicatory power (including through failures), and equity for marginalized researchers. Two sharp pitfall warnings prevent this from being uncritical boosterism. Published early in the post-ChatGPT conversation; its framing has proved durable.
Problem
In the months immediately following ChatGPT’s public release (November 2022), debate in academic circles focused almost entirely on plagiarism, research integrity, and authorship. Anis & French argue this is the wrong frame. The ethical concerns are real, but the more important question is whether qualitative researchers — particularly those working with large-scale unstructured data — should actively incorporate AI into their practice. Their answer is yes, with conditions.
The paper addresses a specific structural problem in qualitative research: analysis has not scaled at the same rate as data collection. A researcher who can now gather 10,000 tweets, 500 interview transcripts, or a decade of news coverage still faces the same labor-intensive coding process that has always characterized qualitative work. AI changes this equation.
Approach
This is a theoretical commentary — no original empirical data, but a structured argument built on the existing literature and illustrative examples. The 3E framework (Efficient, Explicatory, Equitable) is the paper’s organizational spine.
Efficient: AI can act as an extension of the researcher’s capacity to read and identify patterns in large corpora. The researcher focuses on theorizing and interpretive refinement; AI handles initial pattern flagging and code application. This is not about replacing the researcher — it is about changing the division of cognitive labor.
Explicatory: The counterintuitive insight. AI’s failures are analytically productive. When a text segment cannot be classified into any predefined code, this signals complexity — multiple layers of meaning, ambiguity, irony, metaphor — that warrants close reading. Adding a “failure cases” pseudo-code (a deliberate catch-all category in the codebook, not programming pseudocode) creates a systematic mechanism for surfacing these theoretically rich outliers. The cases AI cannot handle are often the cases most worth reading closely.
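The failure-case mechanism can be sketched in a few lines. This is an illustration only, not the authors’ implementation: the keyword matcher stands in for whatever model actually applies the codes (in practice an LLM call), and the codes and segments are invented.

```python
# Sketch of the "failure cases" pseudo-code idea: segments the classifier
# cannot place in any predefined code are flagged for close reading
# rather than discarded. CODES and the segments are illustrative.

CODES = {
    "pricing": ["price", "cost", "fee"],
    "service": ["staff", "support", "help"],
}

FAILURE = "FAILURE"  # the pseudo-code: no predefined code fits


def apply_code(segment: str) -> str:
    """Return the first matching code, or FAILURE if none applies."""
    text = segment.lower()
    for code, keywords in CODES.items():
        if any(kw in text for kw in keywords):
            return code
    return FAILURE


def triage(segments: list[str]) -> dict[str, list[str]]:
    """Group segments by code; the FAILURE bucket is the close-reading queue."""
    buckets: dict[str, list[str]] = {}
    for seg in segments:
        buckets.setdefault(apply_code(seg), []).append(seg)
    return buckets


segments = [
    "The monthly fee doubled without notice.",
    "Support staff resolved it within an hour.",
    "It was, in its own ironic way, a gift.",  # ambiguous: no code fits
]
print(triage(segments)[FAILURE])  # the segments that warrant close reading
```

The design point is that the FAILURE bucket is an output, not an error state: the workflow routes unclassifiable material back to the human reader instead of forcing it into the nearest code.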
Equitable: AI has the potential to lower structural barriers for researchers from marginalized backgrounds. Language proficiency, access to mentorship, familiarity with theoretical conventions, and proximity to interpretive communities — all of these confer advantages in qualitative research. AI cannot eliminate these inequities, but it can partially compensate for them, enabling researchers who might otherwise be confined to empirical work to engage more fully in theoretical interpretation.
AI’s role
AI is positioned as a tool under researcher direction — a capable assistant that handles mechanical tasks and surfaces patterns, but never interprets or authors. The interpretive repertoire — the values, assumptions, and theoretical frameworks that structure analysis — must remain with the human researcher. This is not just an ethical claim but a practical one: without researcher-defined interpretive structure, AI output cannot be meaningfully evaluated or corrected.
Epistemological stance
Pragmatist, with an implicit post-positivist lean. The paper does not engage with the epistemological debates in qualitative methodology — it does not distinguish small q from Big Q, does not address constructionist concerns about researcher positionality, and does not discuss reflexivity. The evaluation criteria are practical: does AI produce useful output that advances research? The epistemological assumptions are largely those of applied social science research in business and society contexts.
This is a limitation but also a feature: the paper reaches audiences who might find epistemological argumentation off-putting.
Rigor and trustworthiness
As a commentary, the paper’s claims are not empirically tested. The efficiency and equity arguments are plausible and well-illustrated but rest on hypothetical scenarios rather than evidence. The explicatory argument is the most original and the least substantiated — it is presented as a methodological innovation, but its effectiveness has not been formally tested across studies.
Limitations
The paper does not engage with the epistemological literature on qualitative methodology. This makes it accessible but also limits its reach: researchers in interpretivist or constructionist traditions will find the framework insufficient for their needs. The equity argument, while important, remains underdeveloped — it does not engage with the structural inequities in AI system design (training data bias, English-language dominance) that may reproduce rather than counteract research hierarchies.
The two-pitfall framing (AI is a tool, not an author; AI encodes biases) is correct but thin. The Amazon hiring algorithm example is vivid, but the paper does not give researchers practical tools for detecting or counteracting training data bias in their specific domain.
Connections
- llm-qualitative-research — the broader landscape
- epistemic-flattening — the training data bias point, underdeveloped here but critical
- ai-research-ethics — the pitfall warnings point toward the ethics literature
- brailas-ai-qualitative-research-2025 — the more epistemologically sophisticated version of the same fundamental position
- jowsey-frankenstein-ai-ta-2025 — the empirical evidence that AI’s “failures” (here framed as analytically useful) can also be fabrications
- carlsen-ralund-computational-grounded-theory-2022 — the methodological framework for keeping humans as interpretive ground truth while using AI for discovery
- dahal-genai-qualitative-nepal-2024 — extends the equity argument from a lived Global South experience
What links here
- AI Research Ethics
- Christou (2023) — How to Use Artificial Intelligence (AI) as a Resource, Methodological and Analysis Tool in Qualitative Research?
- Dahal (2024) — How Can Generative AI Enhance or Hinder Qualitative Studies? A Critical Appraisal from South Asia, Nepal
- Epistemic Flattening
- Epistemology — Stances Across the Literature
- Hamilton et al. (2023) — Exploring the Use of AI in Qualitative Analysis: A Comparative Study of Guaranteed Income Data
- AI in Qualitative Research
- Index
- LLMs for Qualitative Research
- Perkins & Roe (2024) — The Use of Generative AI in Qualitative Analysis: Inductive Thematic Analysis with ChatGPT
- Prescott et al. (2024) — Comparing the Efficacy and Efficiency of Human and Generative AI: Qualitative Thematic Analyses
- Sakaguchi et al. (2025) — Evaluating ChatGPT in Qualitative Thematic Analysis in the Japanese Clinical Context
- Salazar et al. (2025) — Comparison of Qualitative Analyses Conducted by Artificial Intelligence Versus Traditional Methods
- Sinha et al. (2024) — The Role of Generative AI in Qualitative Research: GPT-4's Contributions to a Grounded Theory Analysis
- Williams (2024) — Paradigm Shifts: Exploring AI's Influence on Qualitative Inquiry and Analysis