Source
url: https://doi.org/10.46743/2160-3715/2024.7046
raw: raw/Christou_Thematic Analysis through Artificial Intelligence (AI).pdf

TL;DR: A practitioner how-to guide, published in The Qualitative Report and aimed at novice analysts, for incorporating AI into each phase of thematic analysis. The central contribution is not the endorsement of AI (many papers make that case) but the specific attention to documentation: how to record AI use at every phase, what to include in research memos and methods sections, and how to maintain an audit trail that makes AI’s contributions legible and accountable.

Problem

The wave of papers on AI-assisted thematic analysis in 2023–2024 produced endorsements, frameworks, benchmarks, and critiques — but relatively little practical guidance on how to actually do it and how to report it. Researchers, particularly those new to qualitative work, were left to figure out phase-by-phase integration through trial and error.

This matters for two reasons. First, novice analysts are the most likely to over-rely on AI output without the interpretive background to evaluate or challenge it. Second, even experienced analysts who use AI appropriately face a documentation challenge: methods sections in qualitative research typically describe what the researcher did, not what an AI did. When AI is integrated into each phase, the reporting conventions are unclear.

Christou’s paper fills both gaps: practical guidance on integration, and explicit attention to documentation requirements.

Approach

The paper is structured around the standard phases of thematic analysis (following Braun & Clarke), providing specific guidance at each:

Data familiarization. AI can assist with transcription and initial summarization. Christou is direct: even with AI transcription, the researcher must still personally immerse themselves in the data. Familiarization cannot be delegated because the interpretive attunement it produces cannot be transferred through reading a summary.

Code generation. AI generates candidate codes quickly. The analyst evaluates, accepts, refines, or rejects each one. Christou recommends treating AI codes as hypotheses, not findings: they may point toward patterns worth attending to, but require interpretive validation before being adopted.
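To make the "codes as hypotheses" stance concrete, here is a minimal sketch of how an analyst might log each AI-proposed code together with their disposition and rationale. This is not an instrument from the paper; the class, field names, status values, and example data are illustrative assumptions.

```python
# Illustrative sketch only: Christou does not prescribe a data structure.
# Field names, status values, and the example content are assumptions.
from dataclasses import dataclass
from typing import Literal

Disposition = Literal["accepted", "refined", "rejected", "pending"]

@dataclass
class AICodeCandidate:
    """One AI-proposed code, treated as a hypothesis until the analyst rules on it."""
    proposed_label: str              # label as generated by the AI
    supporting_excerpt: str          # data excerpt the AI pointed to
    disposition: Disposition = "pending"
    final_label: str | None = None   # analyst's revised label, if refined
    rationale: str = ""              # why the analyst accepted/refined/rejected it

# Hypothetical example of the evaluate-then-decide step.
candidate = AICodeCandidate(
    proposed_label="workplace flexibility",
    supporting_excerpt="I can log off when the kids get home...",
)
candidate.disposition = "refined"
candidate.final_label = "autonomy as negotiated care work"
candidate.rationale = (
    "AI label names the topic; the refined label names the meaning "
    "participants attach to it."
)
```

The point of the record is the rationale field: it is what turns an AI suggestion into an accountable analytic decision rather than a silently adopted finding.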

Theme development. AI can cluster codes and propose groupings. The analyst interrogates the clustering logic — does this grouping reflect a genuine pattern in the data, or a superficial lexical similarity? This is the phase where AI is most likely to produce plausible-looking output that is analytically shallow.

Theme review and refinement. Must remain human-led. Christou is explicit: AI cannot assess whether proposed themes adequately represent the data, because this requires the researcher’s contextual knowledge of the full dataset and its analytic relationship to the research questions.

Defining and naming themes. AI can draft labels; the analyst must ensure they capture interpretive intent rather than mere description. A theme name is a claim about meaning, not a category label.

Writing up. AI can assist with structure and flow; the analyst retains authorial voice and interpretive responsibility for every claim made.

AI’s Role

AI is positioned as a complementary tool under analyst authority; the language Christou uses is cautious but not restrictive. The emphasis throughout is on the analyst’s critical evaluative and interpretive skills as non-negotiable. AI enhances the depth and breadth of the analysis when those conditions of analyst oversight are adhered to; without them, it overshadows the analyst’s judgment rather than supporting it.

The documentation emphasis is the paper’s most important practical contribution. Christou argues that transparency about AI use is not just an ethical formality — it is part of the audit trail that establishes trustworthiness in qualitative research. Without documentation of what AI proposed and what the researcher did with those proposals, readers cannot evaluate the analysis.

Epistemological Stance

Conservative and implicitly post-positivist. The paper does not engage with the small-q/Big-Q distinction (nicmanis-spurrier-ai-guide-2025) or with constructionist epistemologies. Its guidance is pitched at a level of generality that makes it broadly applicable — the documentation practices recommended are appropriate whether the researcher is doing reflexive TA or reliability-focused coding.

This breadth is a design feature, not an oversight: the target audience is novice analysts, who need foundational guidance before epistemological fine-tuning.

Rigor and Trustworthiness

The paper’s own rigor rests on its practical coherence and the relevance of its guidance to the problems novice analysts actually face. It synthesizes the emerging literature on AI-TA thoughtfully and provides guidance that is actionable rather than merely aspirational.

The documentation recommendations are the most methodologically distinctive contribution: recording AI use in research memos at each phase, describing AI’s role explicitly in methods sections, and maintaining an audit trail that makes the human-AI division of labor transparent and reviewable. These practices are not standard in the literature, and Christou’s operationalization of them is genuinely useful.
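One way to operationalize these recommendations, sketched here under assumptions rather than taken from Christou's paper, is a structured memo entry per phase recording what the AI was asked, what it proposed, and what the researcher did with the proposal; the collected entries then serve as the audit trail behind the methods-section disclosure. The phase names follow Braun and Clarke, but the fields, values, and output format are illustrative.

```python
# Hypothetical audit-trail entry; fields and example values are assumptions,
# not the paper's own instrument.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseMemo:
    phase: str               # e.g., "code generation", "theme development"
    tool: str                # model or CAQDAS assistant used
    prompt_summary: str      # what the AI was asked to do
    ai_output_summary: str   # what it proposed
    researcher_action: str   # accepted / refined / rejected, and why
    date: str

audit_trail: list[AIUseMemo] = [
    AIUseMemo(
        phase="code generation",
        tool="conversational LLM",
        prompt_summary="Propose candidate codes for interview 03.",
        ai_output_summary="14 candidate codes, mostly topic-level.",
        researcher_action="Accepted 5, refined 4 into interpretive codes, rejected 5 as redundant.",
        date="2024-06-12",
    ),
]

# A dump of the trail can be archived alongside research memos and
# summarized in the methods section's AI-use disclosure.
print(json.dumps([asdict(m) for m in audit_trail], indent=2))
```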

Limitations

The how-to format limits the paper’s engagement with the cases where its guidance breaks down. What happens when AI-generated codes are systematically biased by training data? What does the researcher do when they cannot identify the source of an AI proposal’s inadequacy? The guidance assumes the analyst can detect AI’s failures — which requires the interpretive competence the paper acknowledges novice analysts may not yet have.

The paper does not distinguish between AI systems or address how different models perform differently at different phases. The guidance is written for ChatGPT-like conversational models but does not discuss CAQDAS-embedded AI (MAXQDA AI Assist, NVivo AI Assistant) or the methodological differences they introduce (wheeler-technological-reflexivity-2026).

The documentation guidance, while valuable, is not connected to specific reporting standards or journal requirements. Researchers following the paper’s advice may still face reviewer resistance if their journals do not have established norms for AI use disclosure.

Connections