| url | https://doi.org/10.1177/10778004261429383 |
|---|---|
| raw | raw/greenhalgh-2026-reflexive-qualitative-research-and-generative-ai-a-call-to-go-beyond-the-binary.pdf |
TL;DR: A two-page editorial by Trisha Greenhalgh (Oxford), who declined to sign the Jowsey et al. open letter. She sympathizes with the letter’s core concerns but argues that its binary framing — uncritical adopters versus principled refusers — forecloses principled middle positions, and that moralistic solidarity performs qualitative values rather than enacting them. Her reframe: stop asking whether AI can make meaning; start asking whether AI use preserves the researcher’s epistemic authority. Proposes four governance-oriented questions for the community.
What it means
Greenhalgh is a prominent qualitative researcher in health sciences at Oxford, dual-qualified in medicine and social science. Her decision to decline the Jowsey letter — and then to publish an editorial explaining why — is a methodologically significant act. She is not dismissing the letter’s concerns; she explicitly endorses several of them (GenAI cannot make meaning, interpretation is uniquely human, environmental harms are real). What she rejects is the conclusion those concerns are supposed to generate: categorical rejection of all AI use in reflexive qualitative research.
The editorial is very short — two pages — but its argument is precise. It makes three moves that are substantively distinct from both the rejection position (jowsey-et-al-2025-we-reject) and the pragmatist defence (de-paoli-reject-rejection-2026):
- A framing critique: The binary positions researchers as either modernists/qualitative positivists (superficial, ethically vacuous) or conscientious objectors (on the moral high ground). This framing leaves no space for principled disagreement or careful middle positions. Position statements that function as boundary markers of legitimacy “may encourage symbolic alignment rather than thoughtful methodological judgment.”
- A reframe of the key question: The letter asks whether AI can make meaning (answer: no). Greenhalgh argues this is the wrong question. The right question is whether AI use displaces, obscures, or constrains the researcher’s reflexive engagement with data. Some AI uses will; others won’t. Categorical rejection cannot distinguish between them.
- A governance turn on ethics: Environmental and social harms from GenAI are “legitimate and urgent” — but categorically refusing to engage with governance (scale of use, tool choice, institutional accountability, comparative environmental cost) cedes those decisions to commercial actors. A reflexive stance toward AI must include critical engagement with how harms might be mitigated, not only principled refusal.
The four questions
Greenhalgh closes by proposing four questions for collective community deliberation — not answers, but a research and debate agenda:
- Epistemic authority: Where exactly should epistemic authority reside in qualitative research, and how can it be protected regardless of which tools are employed?
- The human-AI distinction: How might we meaningfully distinguish AI-led analytic practices that displace interpretation from human-led practices in which AI is strictly subordinate?
- Governance: What forms of governance, transparency, and accountability address environmental, social, and epistemic harms — rather than treating refusal as the only ethical response?
- Reflexive debate: How can qualitative researchers debate these issues without reproducing moral binaries that undermine the reflexive, dialogic ethos the field depends on?
These four questions are methodologically productive in a way the open letter is not: they are answerable, at least partially, by empirical and theoretical work. The corpus in this wiki is partly constituted by that work — each question has multiple sources bearing on it.
Epistemological stance
Reflexivist and pragmatist-adjacent. Greenhalgh does not dispute the interpretivist foundations of reflexive qualitative research; she disputes the inference that those foundations require categorical AI exclusion. Her frame is governance-oriented: she is asking who holds authority, over what decisions, under what accountability conditions — which is a practical rather than philosophical question. Notably, she declares in her conflict of interest statement that she “has used AI to support scholarly inquiry” — a transparency move that is itself a governance practice.
The editorial does not engage deeply with the philosophical disputes about meaning-making; it brackets them as secondary to the governance question. This is a deliberate choice that some interpretivist readers will find unsatisfying. But it is also a productive reframing: governance questions can be discussed and acted on even if the philosophical questions remain unresolved.
Rigor and trustworthiness
This is an invited editorial, not a research article; conventional rigor criteria do not apply. The argument is brief but tightly structured. Greenhalgh does not offer evidence for her claims — she argues from principle — but the argument is clear and the move from the letter’s framing to her reframe is logically valid. The four questions she poses are well-formed research questions that could be addressed empirically.
Limitations
- Very short. Two pages is not enough to develop the governance argument to the point where it becomes actionable. The four questions are more of a research agenda than a practical framework.
- Does not engage with empirical literature. Like the Jowsey letter and De Paoli’s response, this editorial is argument, not evidence. The empirical studies (hallucination rates, reliability findings, platform comparisons) are not discussed.
- The epistemic authority reframe is not operationalized. “Whether AI use displaces, obscures, or constrains the researcher’s reflexive engagement” is a better question than “can AI make meaning?” — but Greenhalgh does not provide criteria for answering it. wise-et-al-2026-ai-not-the-enemy does more of this work.
- The governance turn is underdeveloped. Greenhalgh argues that refusing governance engagement cedes decisions to commercial actors; this is a strong point. But she does not describe what governance structures would actually look like, who would implement them, or what standards would apply.
Connections
- Responds directly to: jowsey-et-al-2025-we-reject — declined to sign; sympathizes with concerns; rejects the conclusion
- Complements (from a different direction): de-paoli-reject-rejection-2026 — both decline the letter; De Paoli attacks the philosophical structure of the rejection; Greenhalgh attacks the binary framing and proposes governance
- Epistemic authority frame connects to: human-ai-collaboration — the question of where authority resides is the organizing question across all the frameworks
- Governance framing connects to: ai-research-ethics — Greenhalgh explicitly calls for governance-oriented responses rather than refusal-as-ethics
- The “displaces vs. supports” distinction connects to: wise-et-al-2026-ai-not-the-enemy — AI-in-the-loop analysis is designed precisely to ensure AI supports rather than displaces researcher judgment
- The moralization concern connects to: contested-claims — Claim 9 (philosophical vs. methodological framing); Greenhalgh adds a third dimension: the social dynamics of how the debate is conducted
- The boundary-marker critique connects to: chatzichristos-ai-positivism-2025 — if position statements produce symbolic alignment rather than thoughtful judgment, they risk the same epistemological shortcuts the letter criticizes AI for producing
- Environmental governance argument connects to: davison-ethics-genai-2024 — ATLAS.ti data incident as an example of commercial actors shaping practices in the absence of researcher governance
- Four-question agenda connects to: epistemology — questions 1 and 2 are at the core of what epistemological commitments require in practice
What links here
- Contested Claims
- De Paoli (2026) — Why We Should Reject to Reject the Use of Generative AI in Qualitative Analysis
- Dellafiore et al. (2025) — Artificial Intelligence in Qualitative Research: Insights From Experts via Reflexive Thematic Analysis
- Epistemology — Stances Across the Literature
- Friese, Nguyen-Trung, Powell & Morgan (2026) — Beyond Binary Positions
- AI in Qualitative Research
- Human-AI Collaboration — Frameworks and Models
- Index
- Jowsey et al. (2025) — We Reject the Use of Generative AI for Reflexive Qualitative Research
- LLMs for Qualitative Research
- Qualitative AI Methods — A Living Taxonomy