Source
url: https://doi.org/10.1177/16094069251354863
raw: raw/nicmanis-spurrier-2025-getting-started-with-artificial-intelligence-assisted-qualitative-analysis-an-introductory-guide.pdf

TL;DR: A systematic primer for researchers new to qualitative work, showing how the small-q/Big-Q distinction should govern AI method choices. Proposes an approach-based model that maps research values to appropriate AI applications. One of the few papers in the corpus that treats paradigm alignment as a prerequisite for AI method selection rather than an afterthought.

Problem

The AI-assisted qualitative research literature has an audience problem. Most papers are written for methodologically experienced researchers who already have stable epistemological commitments and need guidance on how to incorporate AI without abandoning them. The guidance for researchers entering qualitative work for the first time — or for experienced quantitative researchers crossing over — is thin.

This matters because AI has lowered the technical barrier to qualitative analysis. A researcher with no qualitative training can ask ChatGPT to “find themes” in an interview dataset and receive plausible-looking output. Without understanding what qualitative research is epistemologically for — and how different paradigms answer that question differently — such researchers cannot evaluate whether their AI-assisted output is methodologically sound or a convincing fabrication.

Nicmanis & Spurrier address this gap directly: their paper is not about whether AI can do qualitative analysis, but about what a researcher needs to understand before deciding how to use AI in their qualitative work.

Approach

The paper is organized around the small-q / Big-Q distinction, which it treats as foundational rather than contextual:

Small-q (positivist/post-positivist). Qualitative data collection tools and analysis methods are applied within a framework that assumes reality can be measured. Coding produces reliable, generalizable categories. Intercoder agreement is a valid quality criterion. AI fits naturally here: it can apply codes consistently, be measured against human coders, and be evaluated using Cohen's kappa or similar agreement metrics.
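For small-q workflows, the benchmark-against-human-coders step reduces to a chance-corrected agreement statistic. A minimal sketch of Cohen's kappa for two coders (one human, one AI); the code labels and excerpt data are hypothetical, invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of items coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes applied by a human and an AI coder to ten excerpts.
human = ["stress", "coping", "stress", "support", "coping",
         "stress", "support", "coping", "stress", "support"]
ai    = ["stress", "coping", "coping", "support", "coping",
         "stress", "support", "stress", "stress", "support"]

print(round(cohens_kappa(human, ai), 2))  # ~0.7: "substantial" by common rules of thumb
```

From a Big-Q standpoint this entire calculation is beside the point, which is exactly the paradigm-dependence the paper is driving at: the metric is meaningful only where coding is treated as measurement.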

Big-Q (interpretivist/constructionist/reflexive). Qualitative techniques operate within a framework that foregrounds situated knowledge, co-constructed meaning, and researcher reflexivity. Coding is not mechanical but interpretive. Reliability is not the relevant quality criterion — trustworthiness, reflexive accountability, and coherence are. AI integration requires fundamentally different design: it must be subordinated to, and critically interrogated by, a researcher whose positionality shapes the analysis.

The approach-based model the paper proposes maps this distinction onto AI application decisions: what is your paradigmatic commitment? → what counts as quality in your framework? → what role can AI appropriately play? This three-level mapping produces different answers for small-q and Big-Q researchers at every step.
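The three-level mapping can be caricatured as a lookup structure. The labels below are paraphrases of the paper's small-q/Big-Q contrast, not terminology taken from its model:

```python
# Sketch of the approach-based model: paradigm -> quality criterion -> AI role.
# Entries are illustrative paraphrases, not the paper's own wording.
APPROACH_MODEL = {
    "small-q": {
        "quality_criterion": "intercoder reliability (e.g. Cohen's kappa)",
        "ai_role": "apply a codebook consistently; be benchmarked against human coders",
    },
    "Big-Q": {
        "quality_criterion": "trustworthiness and reflexive accountability",
        "ai_role": "propose candidate codes for the researcher to interrogate reflexively",
    },
}

def ai_role(paradigm: str) -> str:
    """Answer 'what role can AI play?' given a paradigmatic commitment."""
    return APPROACH_MODEL[paradigm]["ai_role"]
```

The point of the structure is that the AI-role question has no entry of its own: it is only reachable through a paradigm key.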

The worked example uses reflexive content analysis (Braun & Clarke’s framework) to illustrate both pathways concretely: small-q application (AI generates codes; researcher checks reliability) vs. Big-Q application (AI proposes candidate codes; researcher interrogates them reflexively; no reliability metric because subjectivity is the epistemic resource, not the problem).

AI’s Role

AI is positioned as a paradigm-dependent instrument: what AI can legitimately do depends entirely on the epistemological framework within which it is used. There is no paradigm-independent answer to “can AI assist with qualitative analysis?” — only answers relative to what the researcher is trying to achieve.

This is a more philosophically careful framing than most of the corpus. Rather than asserting that AI is always a “tool under researcher direction” or always epistemologically suspect, the paper holds both possibilities open and connects them to prior theoretical commitments.

Epistemological Stance

Explicitly paradigm-pluralist. The paper treats both small-q and Big-Q as legitimate research traditions with their own internal standards. It does not advocate for one over the other — its goal is alignment, not conversion. This stance makes it accessible to a broad audience, including researchers working in fields where positivist assumptions are dominant (health sciences, psychology, education) as well as those in interpretive traditions.

The paper draws heavily on Braun & Clarke's (2022) small-q/Big-Q distinction and on Lincoln et al.'s (2011) paradigm framework, situating itself within an established methodological conversation rather than proposing a novel epistemological position.

Rigor and Trustworthiness

As a theoretical and methodological guide rather than an empirical study, the paper stakes its quality on internal coherence and pedagogical usefulness. The approach-based model is internally consistent, and the worked examples are clear enough to be applied by a reader with basic qualitative methods literacy.

The paper is honest about its scope: the examples are “exploratory” and “not intended as definitive guides.” It points readers to the empirical literature (citing bijker-chatgpt-qca-2024, among others) for evidence about specific AI performance.

Limitations

The paper covers the paradigm-alignment problem comprehensively, but its worked examples are brief. Researchers who understand the small-q/Big-Q distinction but need detailed operational guidance — specific prompts, evaluation procedures, audit trail formats — will need to look elsewhere.

The reflexive content analysis example, while useful, covers only one of many qualitative approaches. The framework's applicability to grounded theory, discourse analysis, IPA, or ethnographic approaches is not demonstrated, though it is implicitly portable.

The ethics section, while present, is relatively conventional: data privacy, informed consent, transparency in reporting. It does not engage with the more philosophically challenging ethics questions raised by brailas-ai-qualitative-research-2025 (what does it mean to outsource interpretation?) or davison-ethics-genai-2024 (what are the structural equity implications of AI adoption?).

Connections