| url | https://doi.org/10.46743/2160-3715/2023.6406 |
|---|---|
| raw | raw/Christou_How to Use Artificial Intelligence (AI) as a Resource Methodolog.pdf |
TL;DR: A mid-2023 practitioner guide establishing five considerations for rigorous AI use in qualitative research: familiarizing yourself with AI-generated data, removing biased content, cross-referencing AI output, controlling the analysis process, and demonstrating cognitive input. The most important is the last: the researcher’s active interpretive engagement is the non-negotiable constant. This is the foundational Christou paper; his 2024 guide on TA applies these principles phase by phase.
Problem
By mid-2023, the initial wave of discourse about ChatGPT in research had moved beyond plagiarism panic to a more substantive question: can AI be used responsibly in qualitative research, and if so, how? Papers were appearing that used AI for literature reviews, systematic reviews, content analysis, and thematic analysis — but without shared standards for what responsible AI use looked like.
Christou’s problem is normative and practical: the field needed guidance on what criteria should govern AI use in qualitative research, grounded in the specific challenges that AI creates — unreliable output, systematic bias, hallucination, and the risk of the researcher’s interpretive agency being displaced by AI’s pattern recognition.
The timing is significant: published six months into the post-ChatGPT era, this paper is among the first substantive guidance documents in the qualitative research literature. It staked out a position before the field had developed extensive empirical evidence, which means its five considerations rest on theoretical analysis and early practitioner experience rather than established research.
Approach
The paper examines AI use in research across several applications (literature reviews, conceptual work, thematic analysis, content analysis) before distilling five key considerations:
1. Become acquainted with AI-generated data. Understand what the AI system is doing, not just what it produces. Treat AI outputs as products of specific processes — training data, model architecture, prompt design — not as objective analysis. Don’t treat output as a black box.
2. Remove biased content and address ethical concerns. Actively scan AI output for systematic bias before incorporating it into analysis. Bias in AI is not random noise — it reflects patterns in training data and may be systematic and directional. The researcher’s responsibility is to identify and document where bias appears and what was done about it.
3. Cross-reference AI-generated information. Verify AI claims, codes, and quotes against the raw data and other sources. AI can hallucinate, oversimplify, or attribute statements to participants who did not make them. This is not a peripheral concern — fabricated evidence in qualitative research is a research integrity failure.
4. Control the analysis process. The researcher maintains authority over all analytical decisions. AI assists; it does not decide. When AI produces a code or theme, the researcher’s judgment about whether to accept, modify, or reject it is the analytic act. AI output without researcher judgment is not analysis.
5. Demonstrate cognitive input and skills. The most important consideration: researchers must show their own interpretive reasoning throughout the process, not merely delegate to AI. This is both a quality criterion (the researcher’s interpretive contribution is what makes qualitative analysis meaningful) and an ethical requirement (claiming authorship of AI-generated content without acknowledgment violates research integrity norms).
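The third consideration (cross-referencing) admits a simple mechanical first pass. A hypothetical sketch, not from the paper: check whether quotes the AI attributes to participants actually appear in the raw transcripts. The function name and data shapes are illustrative; exact-substring matching is deliberately simplistic, and anything flagged still requires the researcher's manual review, per consideration 4.

```python
# Hypothetical sketch of Christou's third consideration: verify that
# AI-attributed participant quotes exist verbatim in the raw transcripts.
# Names and data structures are illustrative, not from the paper.

def verify_quotes(ai_quotes, transcripts):
    """Split AI-attributed quotes into (verified, unverified) lists.

    ai_quotes: list of (participant_id, quote) pairs produced by the AI.
    transcripts: dict mapping participant_id -> raw transcript text.
    """
    def normalize(text):
        # Collapse whitespace and case so trivial formatting
        # differences do not cause false negatives.
        return " ".join(text.lower().split())

    verified, unverified = [], []
    for participant, quote in ai_quotes:
        source = normalize(transcripts.get(participant, ""))
        if normalize(quote) and normalize(quote) in source:
            verified.append((participant, quote))
        else:
            unverified.append((participant, quote))
    return verified, unverified


transcripts = {
    "P1": "I felt supported by my team, even on the hardest days.",
    "P2": "The workload was unmanageable at first.",
}
ai_quotes = [
    ("P1", "I felt supported by my team"),       # genuine quote
    ("P2", "Everything was fine from day one"),  # hallucinated
]
ok, flagged = verify_quotes(ai_quotes, transcripts)
print(len(ok), len(flagged))  # -> 1 1; flagged quotes need human review
```

In real practice, paraphrased or lightly edited quotes would defeat exact matching; fuzzy matching (e.g. edit distance) would reduce false flags, but the decision about each flagged quote remains the researcher's, which is the point of considerations 3 and 4.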
AI’s Role
AI is positioned as a multi-role resource — it can serve as a tool (for mechanical tasks), a methodological aid (for managing large datasets), and an analysis assistant (for pattern identification). But in all roles, the researcher’s cognitive engagement is primary. AI that “completes” tasks while the researcher remains passive is not AI-assisted research — it is AI-substituted research, which raises both quality and integrity concerns.
The distinction Christou draws between tool, method, and analysis is useful for thinking about where AI can be safely integrated and where it requires closest scrutiny. Analysis — the interpretive work — is where AI assistance requires the most active researcher oversight.
Epistemological Stance
Conservative and broadly applicable. The paper does not engage with epistemological distinctions between positivist, interpretivist, or constructionist traditions. Its five considerations are pitched at a level of generality applicable to qualitative research across paradigms — what any researcher using AI should do, regardless of their methodological commitments.
This breadth is both the paper’s strength and its limitation. It speaks to all researchers but does not provide the paradigm-specific guidance that, for example, Big-Q researchers need — guidance that brailas-ai-qualitative-research-2025 and nicmanis-spurrier-ai-guide-2025 provide more specifically.
Rigor and Trustworthiness
Published early in the field’s development, the paper cannot draw on extensive empirical evidence. Its authority rests on the coherence of its analysis and the relevance of its concerns — both of which have proved durable. The five considerations have been echoed and elaborated by subsequent literature without being substantially challenged.
The paper’s most credible move is the emphasis on cognitive input and skills as the irreducible criterion: regardless of which AI tools are used or how, the researcher’s demonstrated engagement with the material is what makes the research genuine. This criterion is both practically verifiable and epistemologically grounded.
Limitations
The theoretical basis — while principled — is not empirically tested. By the time of publication, the field had little systematic evidence about how bias manifests, how often hallucination occurs, or what “control” of the analysis process looks like in practice. The guidance is prescriptive rather than evidence-based.
The five considerations, while useful as principles, lack operational specificity. “Remove biased content” is correct advice, but how? What signals indicate AI bias? How much of the output can be retained once biased content is removed? These questions are not addressed.
The paper does not distinguish between AI systems — ChatGPT, Bard, specialized qualitative AI tools — whose capabilities and bias profiles differ. By 2024–2025, model differences had become practically significant and warranted specific guidance.
Connections
- llm-qualitative-research — broader landscape; Christou (2023) is among the foundational guidance documents
- ai-research-ethics — the second and fifth considerations are primarily ethics concerns: bias removal and cognitive input are necessary for ethical AI use in research
- christou-ta-through-ai-2024 — the follow-up paper applying these five considerations to each phase of thematic analysis; read together as a two-part guide
- prompt-engineering — cross-referencing and control are practically implemented through careful prompting; the 2023 paper identifies the goal, the 2024 paper operationalizes it
- anis-french-ai-qualitative-research-2023 — parallel early guidance document; compare the two frameworks for first-wave AI guidance in qualitative research
- jowsey-frankenstein-ai-ta-2025 — empirical evidence of what happens when Christou’s third consideration (cross-referencing) is not followed: 58% fabricated quotes
- validity-trustworthiness — the five considerations are a trustworthiness framework embedded in practical guidance