Source
url: https://doi.org/10.1177/16094069251333886
raw: raw/naeem-et-al-2025-thematic-analysis-and-artificial-intelligence-a-step-by-step-process-for-using-chatgpt-in-thematic.pdf

TL;DR: Provides specific ChatGPT prompt templates for all six of Braun & Clarke’s TA phases, including the familiarization phase that most prior guides skipped. The paper’s distinctive framing: structured AI prompting reduces human bias, improves accountability, and builds a transparent audit trail. This is the most operationally concrete prompt guidance in the corpus, and also the most epistemologically contested — bias reduction as a goal sits uneasily with reflexive TA’s view of researcher subjectivity as a resource.

Problem

Earlier AI-TA guides that aligned with Braun & Clarke’s six-phase framework shared a common gap: they skipped Phase 1, familiarization. This phase — deep, repeated reading of the dataset to gain holistic understanding before coding begins — was treated as irreducibly human, either because AI cannot “read” in the relevant sense or because the guidance assumed it couldn’t be improved by AI involvement.

Naeem et al. challenge this assumption. They argue that AI can be “familiarized” with the research context through contextual priming prompts that introduce the research questions, theoretical framework, methodological approach, and relevant background before coding tasks begin. Without this priming, AI coding is context-blind — generating patterns from surface features rather than theoretically grounded analysis.

The second problem: existing studies (Prescott et al. 2024, Bijker et al. 2024) had established that AI-TA is feasible, but the prompt strategies that made it feasible were either unpublished or described in too little detail to be replicated. Naeem et al. address this directly by publishing prompt templates.

Approach

The paper provides prompt templates for all six Braun & Clarke phases:

Phase 1 (Familiarization): Prompts that introduce the AI to the research study — research questions, data type, theoretical framework, methodological considerations. The goal is to establish interpretive context before any coding begins. This “contextual priming” approach differentiates GAITA from simpler prompting strategies.

Phase 2 (Generating initial codes): Prompts that direct AI to identify and label meaningful segments. The templates include explicit instructions about coding granularity and examples of appropriate code format.

Phase 3 (Searching for themes): Prompts for clustering codes into candidate themes. The templates instruct AI to group by shared meaning rather than lexical similarity — a distinction that targets the most common AI failure mode in thematic synthesis.

Phase 4 (Reviewing themes): Prompts for checking theme coherence against the dataset. The templates include instructions for the AI to return to specific data segments to verify that themes are grounded.

Phase 5 (Defining and naming themes): Prompts for generating theme names and definitions. The templates request names that capture interpretive meaning, not just descriptive content.

Phase 6 (Writing up): Prompts for structuring analytic narratives. These are more advisory than prescriptive — the writing stage requires the most human judgment.
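The phase structure above can be sketched in code. This is an illustrative Python sketch, not the paper's published templates: the function names, field names, and prompt wording are all my own, showing only the general shape of a Phase 1 contextual-priming prompt followed by a Phase 2 coding prompt.

```python
# Hypothetical sketch of GAITA-style prompting: prime the model with study
# context (Phase 1) before issuing any coding instruction (Phase 2).
# All wording here is invented for illustration.

def priming_prompt(research_questions: str, data_type: str,
                   framework: str, approach: str) -> str:
    """Phase 1: establish interpretive context before any coding begins."""
    return (
        "You will assist with a thematic analysis.\n"
        f"Research questions: {research_questions}\n"
        f"Data type: {data_type}\n"
        f"Theoretical framework: {framework}\n"
        f"Methodological approach: {approach}\n"
        "Do not code yet; confirm your understanding of this context."
    )

def coding_prompt(segment: str,
                  granularity: str = "semantic, phrase-level") -> str:
    """Phase 2: direct the model to identify and label meaningful segments."""
    return (
        "Using the study context established earlier, identify and label "
        f"meaningful segments in the text below. Granularity: {granularity}. "
        "Format each code as 'code label: supporting quote'.\n\n"
        f"Text:\n{segment}"
    )

# Example: prime first, then code a transcript excerpt.
prompt = priming_prompt(
    research_questions="How do nurses experience shift handovers?",
    data_type="semi-structured interview transcripts",
    framework="sensemaking theory",
    approach="systematic (codebook) thematic analysis",
)
```

The key design point the paper's Phase 1 argument implies: the priming prompt is sent, and its output reviewed, before any data touches the model, so later coding is conditioned on the study's theoretical framing rather than on surface features alone.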

The accountability framing is explicit: each prompt at each stage creates a documented record of what the AI was asked to do and what it produced. This audit trail, the paper argues, reduces unacknowledged human bias by externalizing interpretive decisions and making them reviewable.
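The audit-trail idea above can be made concrete with a minimal logging sketch. This is my own illustration, not the paper's tooling: the JSON Lines format, field names, and the `researcher_decision` field are assumptions about what a reviewable record of prompt, output, and human judgment might contain.

```python
# Minimal sketch of a prompt/response audit trail: every phase's prompt,
# model output, and the researcher's accept/reject decision is recorded,
# making interpretive decisions inspectable after the fact.
import json
import datetime

class AuditTrail:
    def __init__(self):
        self.records = []

    def log(self, phase: int, prompt: str, response: str,
            decision: str = "") -> None:
        """Append one documented interaction for a given TA phase."""
        self.records.append({
            "timestamp": datetime.datetime.now(
                datetime.timezone.utc).isoformat(),
            "phase": phase,
            "prompt": prompt,
            "response": response,
            "researcher_decision": decision,  # e.g. accepted / rejected / revised
        })

    def to_jsonl(self) -> str:
        """Serialize the trail as JSON Lines for archiving or review."""
        return "\n".join(json.dumps(r) for r in self.records)

# Example: one Phase 2 coding interaction and the human verdict on it.
trail = AuditTrail()
trail.log(
    phase=2,
    prompt="Identify and label meaningful segments in the excerpt...",
    response="workload pressure: 'we never have enough staff'",
    decision="accepted",
)
```

Note that the `researcher_decision` field is where the paper's limitation (L-section below notwithstanding) shows up in practice: the trail documents human choices, it does not remove them.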

AI’s Role

AI is positioned as a systematic co-analyst under explicit researcher direction — the paper's language is more assertive about AI's contribution than that of most guides. Where Christou (2024) emphasizes AI as a subordinate tool, Naeem et al. frame AI as a systematic partner whose structured involvement reduces bias.

The bias-reduction claim is the paper’s most distinctive and contested position. By externalizing interpretive decisions through prompts and recording AI output at each stage, the analysis becomes more transparent — not because AI is objective, but because the decisions are documented and reviewable rather than occurring invisibly in the researcher’s mind.

Epistemological Stance

Post-positivist with an explicit accountability orientation. The bias-reduction framing reflects a post-positivist aspiration: that systematic, documented procedure reduces the uncontrolled influence of researcher subjectivity. This is in direct tension with reflexive TA’s epistemological commitment, which holds that researcher subjectivity is not a bias to be reduced but a constitutive resource.

The paper does not engage with this tension. It is written from within a framework where bias reduction and transparency are unambiguously positive — a stance that will read very differently to Big-Q researchers than to small-q or post-positivist ones.

Rigor and Trustworthiness

The most concrete prompting documentation in the corpus — the templates are published in enough detail to be adapted and tested. This transparency is itself a methodological contribution: it allows other researchers to evaluate and refine the prompts rather than starting from scratch.

The accountability claim has empirical merit within its frame: documented prompts at each phase do create a more inspectable record than undocumented human coding. Whether this constitutes “bias reduction” in any meaningful sense depends on epistemological commitments the paper assumes rather than argues for.

Limitations

The bias-reduction claim is philosophically underdeveloped. Documenting AI prompts does not eliminate researcher influence — the choice of prompts, the decisions about which AI outputs to accept or reject, and the final interpretation all remain human decisions shaped by the researcher’s assumptions. The paper’s claim requires a stronger epistemological argument than it provides.

The templates are designed for systematic (small-q) thematic analysis. Researchers working in reflexive TA (Big-Q) will find them incompatible with their methodological commitments — the structured, bias-reducing orientation conflicts with the deliberate subjectivity of reflexive practice (brailas-ai-qualitative-research-2025, xu-ai-thematic-analysis-2026).

The paper does not test the prompts on real data or report results. The templates are presented as practical guidance without empirical validation of their effectiveness. Whether these specific prompts produce better thematic analysis than alternative prompts is an open question.

Connections