Source
url: https://doi.org/10.1177/10778004261425137
raw: raw/de-paoli-2026-why-we-should-reject-to-reject-the-use-of-generative-artificial-intelligence-in-qualitative-analysis-a.pdf

TL;DR: A direct philosophical rebuttal of the Jowsey et al. open letter calling for categorical rejection of GenAI in reflexive qualitative research. De Paoli argues the rejection letter mistakes philosophical dogma for methodological argument and, worse, risks ceding the entire domain to computer scientists who will build these tools without input from qualitative researchers.

What it means

This is a short, sharp position paper — three pages — published in Qualitative Inquiry shortly after Jowsey et al. circulated their open letter, which gathered 419 signatories from 32 countries calling for categorical rejection of generative AI in Big-Q reflexive qualitative research. De Paoli declines to sign. The paper is less interested in defending any particular AI practice than in attacking the logical structure of the rejection argument itself.

The core move is to reframe the debate. Jowsey et al. position categorical rejection as a principled methodological stance. De Paoli argues it is actually a philosophical position — a commitment to human exceptionalism — dressed up in methodological language. This is not a trivial distinction. Methodological arguments can be adjudicated by evidence; philosophical commitments about the irreducible nature of human meaning-making cannot. When a philosophical position presents itself as a methodological conclusion, it forecloses empirical inquiry before it begins.

The stakes, for De Paoli, are institutional and political as much as epistemological. If qualitative researchers collectively withdraw from AI-assisted research, the tools will still be built — by computer scientists and engineers who lack training in qualitative epistemology, reflexivity, or the distinctive commitments of interpretive social science. Categorical rejection is not neutrality; it is abdication.

The argument

Point 1: The Searle/Turing objection is philosophy, not methodology. The Jowsey letter grounds its rejection partly in claims about what AI can and cannot do — that it cannot make meaning, that it lacks the human capacities that reflexive research requires. De Paoli identifies this as a philosophical claim, traceable to Searle’s Chinese Room and broader debates in philosophy of mind. This may or may not be correct as philosophy, but it cannot do the methodological work the letter asks of it. Whether AI “really” understands or merely simulates understanding is a question that has been genuinely contested for decades; treating one side of that debate as settled methodology is question-begging.

Point 2: The Latour move — tools are not neutral but neither are they autonomous. Drawing on Actor-Network Theory, De Paoli argues that the letter’s framing treats AI as a monolithic agent that either replaces human meaning-making or doesn’t. But in ANT, tools are actants in sociotechnical networks; they shape and are shaped by the practices that surround them. The relevant question is not “does AI make meaning?” but “how do specific AI tools, used in specific ways, within specific research practices, transform what those practices produce?” This is an empirical question, not a philosophical one.

Point 3: The furniture analogy. De Paoli uses a pragmatist analogy: objecting to AI in qualitative research on the grounds that it is not human is like objecting to chairs in philosophy seminars on the grounds that thinking is a distinctly human activity and chairs do not think. The analogy is deliberately provocative. Its point is that the question of tool use is properly about what the tool makes possible or forecloses within a practice — not about the ontological status of the tool. Qualitative researchers use recorders, transcription software, NVivo, and ATLAS.ti; none of these “make meaning” either, yet none are rejected on those grounds.

The political argument. The paper closes with what is perhaps its most urgent point: the field of AI-assisted qualitative research will develop with or without qualitative researchers. If the community of experienced qualitative researchers opts out, the design, implementation, and deployment of AI tools for qualitative analysis will be left to those without training in the epistemological commitments those tools need to respect. Staying in the conversation — critically, reflexively, with full awareness of risks — is the responsible position.

Epistemological stance

Post-humanist and pragmatist. De Paoli rejects the human exceptionalism that underlies the Jowsey position — the claim that there is something irreducibly human about meaning-making that no tool can touch — while stopping well short of claiming that AI does or should replace human interpretation. The ANT framework treats humans and non-humans as symmetrically relevant actants in sociotechnical networks; this is categorically different from either the post-positivist position (AI as measurement instrument) or the interpretivist position (AI as assistant to human meaning-making). It refuses to grant human cognition privileged ontological status while also refusing to anthropomorphize AI.

The pragmatist dimension is equally important: De Paoli is not interested in resolving the philosophy of mind debate. He wants to keep open the empirical questions about what AI tools do in practice, and to resist prematurely closing those questions with philosophical commitments.

Rigor and trustworthiness

This is an opinion/position paper, not an empirical study; conventional validity criteria do not apply. The strength of the argument depends on the quality of the philosophical and theoretical moves, which are defensible but contestable. The Searle/Turing point is well-taken; the ANT move is appropriate but compressed; the furniture analogy is rhetorically effective but imprecise (recorders and NVivo are not designed to generate content, only to record and organize it). De Paoli does not address the specific empirical findings that motivate some skeptics — e.g., jowsey-frankenstein-ai-ta-2025's 58% fabricated quotes — which limits the reach of his argument for readers whose concerns are primarily practical rather than philosophical.

Limitations

  • Short form. At three pages, the argument is compressed to the point of incompleteness. Each of the three rebuttal points deserves more sustained development.
  • No engagement with empirical evidence. The paper argues philosophically but does not engage with the empirical literature on hallucination rates, cultural bias, or validity failures. Readers who are skeptical for empirical rather than philosophical reasons are not addressed.
  • Conflict of interest. De Paoli identifies as an active practitioner of AI-assisted qualitative research. This is disclosed, but worth noting: the argument is structurally motivated by the author’s own practice.
  • The furniture analogy limps. Chairs do not generate content. AI does. The analogy obscures the relevant disanalogy: AI produces outputs that look like analysis and can be mistaken for it in ways that a chair sitting in a room cannot. This is precisely what concerns critics like Jowsey.
  • The political argument cuts both ways. De Paoli argues that qualitative researchers must stay in the conversation to shape AI development. This is true. But it does not follow that any form of AI-assisted practice is therefore acceptable; critical engagement can include sustained, principled opposition.

Connections

  • Responds directly to: jowsey-et-al-2025-we-reject — the open letter that prompted this paper; the Jowsey et al. categorical rejection position is the target of every argument here
  • Contrasts with: jowsey-frankenstein-ai-ta-2025 — the empirical PLOS One study (different paper by overlapping authors) showing 58% fabricated quotes; De Paoli’s philosophical argument does not engage with this evidence
  • Shares post-humanist framing with: brailas-ai-qualitative-research-2025 — relational/constructionist; AI as co-constitutive rather than either tool or replacement
  • ANT framework connects to: friese-caai-framework-2026 — CAAI’s dialogic model treats AI as an interlocutor in a network, not merely a tool
  • Epistemological stakes map to: epistemology — the human exceptionalism debate; post-humanist vs. interpretivist positions
  • Central dispute documented in: contested-claims — Claim 2 (Big-Q compatibility) and the new debate about philosophical framing of rejection
  • Methodological pragmatism aligns with: carlsen-ralund-computational-grounded-theory-2022 — CALM’s emphasis on keeping the empirical questions open rather than resolving them philosophically
  • Positivism creep concern indirectly addressed: chatzichristos-ai-positivism-2025 — though De Paoli would likely respond that the solution is better practice, not withdrawal
  • Parallel editorial in the same debate: greenhalgh-2026-beyond-the-binary — also declined to sign; attacks the binary framing rather than the philosophical argument; proposes governance-oriented questions as the productive path forward