| url | https://doi.org/10.1177/10778004251401851 |
|---|---|
| raw | raw/jowsey-et-al-2025-we-reject-the-use-of-generative-artificial-intelligence-for-reflexive-qualitative-research.pdf |
TL;DR: An open letter signed by 419 experienced qualitative researchers from 32 countries calling for categorical rejection of generative AI in Big-Q reflexive qualitative research. The rejection rests on three pillars: GenAI cannot make meaning; reflexive qualitative research must remain distinctly human; GenAI’s environmental and social justice harms are unacceptable. The signatories include Virginia Braun and Victoria Clarke, the architects of reflexive thematic analysis — making this an authoritative statement from within the tradition being discussed.
What it means
This is not an ordinary journal article. It is a coordinated public position statement — a letter circulated on October 13, 2025, through academic networks and social media (LinkedIn, Facebook), kept open for 10 days, and submitted to Qualitative Inquiry with 419 signatories. The weight of the document lies in who signed it: not just Jowsey and colleagues, but the founders of reflexive thematic analysis (Virginia Braun, Victoria Clarke), prominent critical psychologists (Michelle Fine), and sociologists of technology (Deborah Lupton). When the creators of a method say AI cannot be used in that method, that is a methodologically significant claim, not just an opinion.
The letter is precise about its scope. It targets Big-Q qualitative approaches — reflexive TA, phenomenology, IPA, discourse analysis, ethnography — and explicitly exempts small-q approaches: “just as the meaning-based requirement of reflexive thematic analysis distinguishes it methodologically from word-counting techniques such as content analysis (which can be automated), so too it must also exclude GenAI.” This is not a blanket rejection of computation in qualitative research; it is a claim that the specific epistemological commitments of reflexive qualitative work are incompatible with what GenAI can do.
The letter generated immediate responses. de-paoli-reject-rejection-2026 rejects the rejection on philosophical and political grounds. Greenhalgh (forthcoming in this wiki) declined to sign and argued for moving beyond the binary. This cluster of publications represents the field’s most public and consequential methodological debate since Braun & Clarke’s original formulation of reflexive TA.
The argument
Point 1: GenAI cannot make meaning. The letter opens with a clean philosophical claim: GenAI is “simulated intelligence only, based on statistical predictive algorithms without any understanding of the world, or the meaning of the language that constitutes the data.” It can produce something that “superficially resembles” reflexive qualitative analysis — but resemblance is not identity. Reflexive TA is by definition meaning-based; GenAI is by definition incapable of meaning. Therefore GenAI cannot perform reflexive TA, even with human involvement at various stages.
The letter connects this to epistemic-flattening: GenAI’s statistical nature “predisposes GenAI to identify, replicate, and reinforce dominant language and patterns; risking the further quieting of marginal voices and practices, including those of critical scholars.” The voices of those who “live/breathe/feel/imagine/construct knowledge in the maroons of life” may be “lost or worse; sacrificed.” This is the most evocative passage in the letter — and the one most directly grounded in the critical tradition that some signatories represent.
Point 2: Reflexive qualitative research must remain distinctly human. The second argument shifts from what GenAI cannot do to what reflexive research requires. It is “undertaken by humans, with or about humans, and for the benefit of humans.” Only a human researcher can undertake reflexive analytical work because the process requires “psychodynamic interpretations” anchored in the researcher’s own humanity — their subjectivity, positionality, and reflexive self-awareness.
The letter also names the risk of what might be called AI deference: “our desire for GenAI to be reliable reduces our capacity to critically appraise GenAI outputs.” This echoes chatzichristos-ai-positivism-2025's concern about AI adoption reducing critical self-examination, and points toward a practical rather than philosophical failure mode — researchers accepting AI outputs because skepticism feels futile or professionally risky.
Point 3: Environmental and social justice harms. The third pillar is explicitly ethical rather than methodological. The signatories raise the full suite of GenAI harms: data center water and energy use, land clearing, greenhouse gas emissions, e-waste, and — most pointed for a social justice-oriented community — the psychological harm to data workers in the Global South who moderate toxic content to train these models. The letter cites critics who characterize Big AI Tech’s practices as “extractivist, racist, imperialist, and exploitative.”
This argument is not contingent on the first two. Even if GenAI could make meaning (a claim the authors deny), its environmental and labor costs would still constitute grounds for rejection. The three arguments are presented together as cumulative, but each is framed as independently sufficient.
Epistemological stance
Strongly interpretivist, critical, and humanist. The letter is grounded in Braun & Clarke’s constructivist-constructionist version of reflexive TA — a tradition that has spent a decade resisting the drift toward reliability-focused, small-q practice. The signatories include leading voices from feminist psychology, critical sociology, and social justice research; the critical framing of AI as an institutional force producing and reproducing power relations is present throughout.
The letter is explicitly humanist in the sense of treating human subjectivity as irreducible. The researcher’s humanity is not a contingent feature of reflexive research that could be designed around; it is constitutive of the practice. This puts the letter in direct tension with the post-humanist framing of de-paoli-reject-rejection-2026 and brailas-ai-qualitative-research-2025, both of which dissolve the human/non-human boundary that the letter treats as foundational.
Rigor and trustworthiness
This is a position paper, not an empirical study; conventional rigor criteria do not apply. The strength of the document lies in its intellectual pedigree and collective endorsement rather than in methodological evidence. The letter does not engage with empirical findings from AI-assisted research studies — neither the positive results (bennis-ai-thematic-analysis-2025, bijker-chatgpt-qca-2024) nor the negative ones (jowsey-frankenstein-ai-ta-2025, which is a different paper by overlapping authors). It argues from principle, not from evidence.
Limitations
- No engagement with empirical literature. The letter’s philosophical claims are asserted rather than defended against the body of empirical research that has tested AI in qualitative contexts. Readers who are persuaded by the reliability data in the small-q literature will not find engagement with that evidence here.
- The meaning-making claim is contested philosophy. Whether AI “genuinely” makes meaning is a live debate in the philosophy of mind; treating it as settled methodology is exactly the move De Paoli objects to. The letter would be stronger with more sustained engagement with this objection.
- Scope of “Big-Q” is underspecified. The letter exempts content analysis but does not address the full range of AI uses short of full analytic replacement — transcription, translation, memo prompting, pattern checking, literature synthesis. The binary (acceptable vs. not) may be too coarse.
- Sampling bias in signatories. The letter was circulated through personal networks and social media over 10 days. This is an efficient process, but not a representative one. Signatories are self-selected by sympathy with the position; no information is provided about who was contacted and declined to sign.
- Environmental argument applied unevenly. If environmental harm is grounds for rejecting GenAI in qualitative research, it is equally grounds for rejecting GenAI in many other academic practices (systematic reviews, literature searches, email, administrative tools). The letter does not explain why qualitative research specifically bears this ethical burden.
Connections
- Directly prompted response: de-paoli-reject-rejection-2026 — philosophical rebuttal; argues this letter is philosophy masquerading as methodology; uses ANT to reframe the question as empirical
- Also prompted: greenhalgh-2026-beyond-the-binary — declines to sign; argues for moving beyond the binary framing; reframes around governance rather than epistemology
- Most theoretically grounded counter-response: friese-et-al-beyond-binary-2026 — 100+ signatory response by Friese, Nguyen-Trung, Powell & Morgan; deploys assemblage theory, distributed cognition, posthumanism, and sociomateriality against the “exclusively human” meaning-making premise; also identifies the internal tension between this letter and Braun & Clarke’s prior insistence on reflexive TA as flexible
- Distinct from: jowsey-frankenstein-ai-ta-2025 — different paper, same lead author; the PLOS One empirical study of Copilot vs. published TA datasets (58% fabricated quotes); the empirical findings there would support but are not cited in this letter
- Epistemic flattening claim connects to: epistemic-flattening — the structural tendency to suppress marginal voices is central to Point 1
- Environmental/justice arguments connect to: ai-research-ethics, dahal-genai-qualitative-nepal-2024 — Global South labor concerns
- Human exceptionalism vs. post-humanist position: brailas-ai-qualitative-research-2025, de-paoli-reject-rejection-2026 — the other side of the human/non-human debate
- Big-Q/small-q distinction connects to: epistemology, nicmanis-spurrier-ai-guide-2025 — mapping of epistemological values to AI method choices
- The deference/reliability concern connects to: chatzichristos-ai-positivism-2025 — empirical evidence that AI adoption reduces critical appraisal capacity
- Foundational methodological source: Braun & Clarke (2019) reflexive TA — the method this letter is defending
- The debate it anchors: contested-claims — Claim 9 (philosophical vs. methodological framing of rejection)
What links here
- AI Research Ethics
- Andrews, Fainshmidt & Gaur (2026) — Progress or Perish: IB and AI Adoption
- Contested Claims
- De Paoli (2026) — Why We Should Reject to Reject the Use of Generative AI in Qualitative Analysis
- Dellafiore et al. (2025) — Artificial Intelligence in Qualitative Research: Insights From Experts via Reflexive Thematic Analysis
- Epistemology — Stances Across the Literature
- Friese, Nguyen-Trung, Powell & Morgan (2026) — Beyond Binary Positions
- Greenhalgh (2026) — Reflexive Qualitative Research and Generative AI: A Call to Go Beyond the Binary
- AI in Qualitative Research
- Human-AI Collaboration — Frameworks and Models
- Index
- LLMs for Qualitative Research
- Nguyen-Trung & Nguyen (2026) — Narrative-Integrated Thematic Analysis (NITA)
- Qualitative AI Methods — A Living Taxonomy
- Validity and Trustworthiness
- Wise et al. (2026) — Why AI is Not the Enemy: Trustworthy AI-in-the-Loop Analysis