| url | https://doi.org/10.1177/16094069251383739 |
|---|---|
| raw | raw/costa-et-al-2025-ai-as-a-co-researcher-in-the-qualitative-research-workflow-transforming-human-ai-collaboration.pdf |
TL;DR: Proposes AbductivAI — a framework grounded in Actor-Network Theory and distributed cognition that positions AI as a genuine co-researcher, not a tool. The technical innovation is Chain-of-Prompting (structured iterative prompt sequences that build on each other). Tested on 323 conference abstracts. The key condition: human reflexive authority throughout the process is non-negotiable.
Problem
The dominant AI-in-qualitative-research literature treats the question as: how much of what humans do can AI replicate? This produces comparisons (κ scores, theme concordance rates, Jaccard indices) that measure AI’s performance against a human baseline. The implicit assumption is that human qualitative analysis is the gold standard and AI is being evaluated for its ability to approximate it.
Costa et al. reframe the question: what kind of knowledge production becomes possible when human and AI are genuine analytical partners, contributing different capabilities to a shared interpretive process? This is not asking whether AI can do what humans do — it is asking what humans and AI can do together that neither can do alone.
The problem this frames: current AI applications in qualitative research are either too tool-like (apply codes, measure agreement) or too autonomous (outsource interpretation). Neither captures the potential of genuine human-AI collaboration in which abductive insight — the creative inference that generates new understanding — emerges from the interaction itself.
Approach
AbductivAI rests on three theoretical foundations:
Actor-Network Theory (ANT). In ANT, nonhuman entities — including technologies — are genuine participants in networks of knowledge production, not mere instruments. AI as co-researcher is not a metaphor; it is an ontological claim about what counts as an agent in a research assemblage.
Distributed cognition. Analytical intelligence is not located in the researcher’s mind or in the AI system — it is distributed across the interaction between them. The unit of analysis for understanding qualitative research quality is the human-AI system, not the human or the AI separately.
Abductive reasoning. The inferential mode that distinguishes AbductivAI: abduction generates plausible interpretations that go beyond what the data logically entails. It is not induction (generalizing from examples) or deduction (applying rules to cases) but creative inference — the “aha” moment that produces new conceptual understanding. Costa et al. argue that abduction is the logical mode most appropriate to qualitative inquiry, and that Chain-of-Prompting can elicit it from a human-AI dialogue.
Chain-of-Prompting is the technical implementation: structured prompt sequences in which each AI response becomes the basis for the next prompt, creating iterative interpretive loops. It is related to chain-of-thought prompting in the AI literature but adapted for qualitative analysis: the goal is interpretive depth, not step-by-step logical reasoning.
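The mechanism described above can be sketched in a few lines. This is a minimal illustration of the iterative loop, not the authors' protocol: the stage prompts, the `chain_of_prompting` function, and the `ask` interface are all hypothetical, standing in for whatever model call and prompt wording a researcher actually uses.

```python
from typing import Callable, List

def chain_of_prompting(ask: Callable[[str], str],
                       excerpt: str,
                       stages: List[str]) -> List[str]:
    """Run an iterative prompt chain: each stage's prompt embeds the
    previous AI response, so each turn builds on the last.
    `ask` is any text-in/text-out model call (illustrative interface)."""
    context = excerpt
    responses = []
    for stage in stages:
        prompt = f"{stage}\n\nMaterial under analysis:\n{context}"
        reply = ask(prompt)
        responses.append(reply)
        # The response becomes the basis for the next prompt --
        # the interpretive loop, rather than one-shot coding.
        context = reply
    return responses

# Illustrative stage prompts (not from the paper); note the move from
# description toward abductive inference and then self-challenge.
STAGES = [
    "Summarize the recurring ideas in this material.",
    "Propose a plausible explanation (an abductive inference) "
    "for why these ideas recur.",
    "Challenge that explanation: what alternative reading "
    "does the material support?",
]
```

The design point is that the human stays in the loop: in practice a researcher would inspect, edit, or reject each reply before it is folded into the next prompt, which is where the "reflexive authority" condition bites.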
The framework was tested on 323 abstracts from the World Conference on Qualitative Research — a dataset not originally intended for research, making it an exploratory proof-of-concept rather than a rigorous evaluation.
AI’s Role
AI is positioned as a co-researcher and epistemic agent — the most ambitious framing in the corpus. This goes beyond “second coder” (Bijker et al.), “heuristic partner” (Brailas), or “research assistant” (Anis & French). In the AbductivAI frame, AI is a participant in knowledge production, not an instrument of it.
The paper is careful about what this framing does and does not mean. The key condition: “interactions between humans and AI agents can benefit qualitative data analysis if the human recognizes and defines their role as a reflexive agent responsible for the entire process.” Co-researcher status for AI does not imply equal authority. The human remains the epistemological anchor: the entity whose reflexive awareness ensures that the analysis is grounded, ethically defensible, and interpretively accountable.
Epistemological Stance
Sociomaterial / ANT-informed, with an explicit constructionist lean. The epistemological assumption is that meaning is not extracted from data but produced through the interaction between researcher, AI, and data. ANT provides the framework for understanding AI as a genuine actor in this production process without making implausible claims about AI consciousness or intentionality.
The abductive reasoning emphasis aligns with an epistemology that prizes creative interpretation over systematic description. This connects AbductivAI to the phenomenological and grounded theory traditions within qualitative research — both of which valorize the interpretive leap as the signature methodological contribution.
Rigor and Trustworthiness
The conference abstract dataset is a weak test case: abstracts are short, structured, and written for scholarly audiences, so they are more homogeneous and less contextually rich than interview transcripts or field notes. The proof-of-concept shows that Chain-of-Prompting can surface themes across a corpus, but not that it handles the interpretive complexity of typical qualitative data.
The theoretical apparatus (ANT, distributed cognition, abduction) is sophisticated and internally coherent. The gap between this theoretical richness and the slender empirical demonstration is the paper’s main credibility challenge.
The paper does not provide quantitative evaluation of AbductivAI’s outputs. Whether the themes and interpretations it produces are “better” than those produced by conventional AI coding or human coding alone is not assessed.
Limitations
The conference abstract corpus is methodologically convenient but epistemologically thin. It does not test AbductivAI under the conditions where it would be most valuable — rich, emotionally weighted, contextually specific qualitative data.
The ANT framing, while philosophically productive, may be difficult for researchers outside science and technology studies to operationalize. The practical guidance for implementing AbductivAI in a typical qualitative study is underdeveloped relative to the theoretical ambition.
The paper does not address reproducibility or how to evaluate Chain-of-Prompting quality across different researchers or AI sessions. Two researchers using the same protocol with the same data will produce different dialogues and likely different interpretations — which is consistent with the epistemological commitments but leaves quality assessment underdetermined.
Connections
- llm-qualitative-research — the broader landscape
- prompt-engineering — Chain-of-Prompting is an advanced, theory-grounded contribution to the prompting literature
- friese-caai-framework-2026 — parallel framework using conversational/dialogic analysis; both reject one-shot coding in favor of iterative dialogue; compare their epistemological groundings
- brailas-ai-qualitative-research-2025 — philosophical alignment on abductive AI use; Brailas develops the epistemological critique, Costa et al. develop the alternative framework
- epistemic-flattening — what AbductivAI’s abductive questioning is designed to resist
- human-ai-collaboration — the co-researcher model is the most ambitious formulation of human-AI collaboration in the corpus
- validity-trustworthiness — the distributed-cognition framing has implications for how trustworthiness is conceived and evaluated
- computational-grounded-theory — the classical coding paradigm that AbductivAI challenges
What links here
- Chatzichristos (2025) — Qualitative Research in the Era of AI: A Return to Positivism or a New Paradigm?
- Contested Claims
- Dahal (2024) — How Can Generative AI Enhance or Hinder Qualitative Studies? A Critical Appraisal from South Asia, Nepal
- Friese (2026) — From Coding to Conversation: A New Methodological Framework for AI-Assisted Qualitative Analysis
- Friese, Nguyen-Trung, Powell & Morgan (2026) — Beyond Binary Positions
- Hamilton et al. (2023) — Exploring the Use of AI in Qualitative Analysis: A Comparative Study of Guaranteed Income Data
- AI in Qualitative Research
- Human-AI Collaboration — Frameworks and Models
- Index
- Qualitative AI Methods — A Living Taxonomy
- Validity and Trustworthiness
- Williams (2024) — Paradigm Shifts: Exploring AI's Influence on Qualitative Inquiry and Analysis