| TL;DR | Using AI in qualitative research raises overlapping ethical issues around data privacy, informed consent, authorship, algorithmic bias, and transparency — all more serious than they appear at first. |
|---|---|
Data privacy and re-identification
Qualitative data — especially interview transcripts — is hard to truly anonymize. Research shows that as few as 15 demographic attributes can re-identify 99%+ of individuals. When sensitive transcripts are uploaded to commercial AI APIs, researchers lose control over:
- How data is processed or stored
- Whether the model uses data for further training
- The risk of re-identification through contextual details
This is especially acute for vulnerable populations: trauma survivors, undocumented individuals, people with stigmatized health conditions. (brailas-ai-qualitative-research-2025) argues that for such groups, using AI in analysis should be avoided or approached with extreme caution.
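One concrete mitigation implied by these concerns is redacting obvious identifiers locally, before any transcript leaves the researcher's machine. A minimal sketch in Python; the regex patterns and the `redact` helper are illustrative assumptions, not a complete de-identification method, and contextual details that enable re-identification will survive this pass:

```python
import re

# Illustrative patterns only: real de-identification needs far more than
# regexes, and contextual details (places, events, job titles) still leak.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(transcript: str) -> str:
    """Replace matched identifiers with typed placeholders before upload."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

print(redact("Call me on 021 555 0199 or email jo@example.org after 12/03/2024."))
# → Call me on [PHONE] or email [EMAIL] after [DATE].
```

A pass like this addresses only direct identifiers; the re-identification research cited above concerns combinations of attributes, which no pattern-matching step can remove.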
Informed consent must be specific
Generic consent (“AI tools were used in the analysis”) is insufficient. Participants must be informed of:
- Which specific AI systems will process their data
- At which stages of analysis
- What privacy protections are in place (anonymization prior to upload, etc.)
- The realistic risks of re-identification
The evolving nature of AI law and policy adds a temporal dimension: transcripts about topics that are legal at the time of interview may become sensitive if the legal landscape shifts.
Authorship and interpretive responsibility
(anis-french-ai-qualitative-research-2023) and (brailas-ai-qualitative-research-2025) both emphasize: AI cannot be accorded authorship or interpretive authority. The human researcher must retain control over the interpretive repertoire — the values, assumptions, and theoretical framings that structure analysis. AI automates the mechanical; interpretation remains with the human.
This is not just a normative claim. Practically, delegating interpretation to AI means delegating responsibility for errors, biases, and omissions to an opaque system.
Algorithmic bias
AI systems encode the biases present in their training data. (anis-french-ai-qualitative-research-2023)'s Amazon example: a hiring algorithm penalized résumés containing “women’s chess club captain”, treating the phrase as a negative signal rather than as evidence of leadership. For qualitative research, this means:
- AI will systematically underrepresent perspectives outside dominant cultural patterns
- Researchers doing critical work (challenging social norms, foregrounding marginalized voices) must actively design coding schemes to counteract this
- Treating AI as neutral is itself a form of bias — see epistemic-flattening
Transparency and reproducibility
(bennis-ai-thematic-analysis-2025) calls for standardized reporting checklists covering full process transparency. (brailas-ai-qualitative-research-2025) recommends keeping detailed prompt logs in appendices. Key transparency requirements:
- Which model(s) were used, and at which version
- The full prompts used (or at least their structure)
- How AI output was evaluated and validated
- Where human judgment overrode AI output
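The prompt-log recommendation can be made concrete with a small audit-trail helper. A sketch under assumptions: the `log_prompt` function and the JSON-lines layout are invented for illustration — (brailas-ai-qualitative-research-2025) recommends keeping prompt logs in appendices but does not prescribe a format:

```python
import json
import datetime

def log_prompt(logfile, model, model_version, prompt, response, human_decision):
    """Append one analysis step to a JSON-lines audit log.

    `human_decision` records whether the researcher accepted, edited, or
    overrode the AI output, covering the final transparency requirement.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
        "human_decision": human_decision,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```

One entry per AI interaction yields a complete, append-only record that can be filtered down to prompt structure for an appendix while the full log supports reproducibility checks.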
See also
- brailas-ai-qualitative-research-2025 — most detailed ethical treatment
- anis-french-ai-qualitative-research-2023 — authorship and bias pitfalls
- bennis-ai-thematic-analysis-2025 — transparency and reproducibility
- bijker-chatgpt-qca-2024 — mentions ethics but focuses on reliability
- davison-ethics-genai-2024 — data ownership; ATLAS.ti data-for-training incident
- dahal-genai-qualitative-nepal-2024 — equity, access, and radical transparency about AI use
- montrosse-moorhead-ai-evaluation-2023 — the equity criterion as the most underemphasized
- jowsey-et-al-2025-we-reject — environmental and social justice harms of GenAI as grounds for rejection; Global South data worker exploitation
- epistemic-flattening — the epistemological risk connected to bias
- llm-qualitative-research — broader landscape
- validity-trustworthiness — ethical dimensions of validity failures
- contested-claims — hallucination as ethical issue (Claim 7); marginalized voice misrepresentation (Claim 5)
- dellafiore-et-al-2025-expert-interviews — practitioner-level ethics: no universal ethical code; personal moral reflection deemed insufficient; governance gap documented by expert researchers themselves; ecological sustainability concerns prominent
What links here
- Andrews, Fainshmidt & Gaur (2026) — Progress or Perish: IB and AI Adoption
- Anis & French (2023) — Efficient, Explicatory, and Equitable: Why Qualitative Researchers Should Embrace AI, but Cautiously
- Christou (2023) — How to Use Artificial Intelligence (AI) as a Resource, Methodological and Analysis Tool in Qualitative Research?
- Christou (2024) — Thematic Analysis through Artificial Intelligence (AI)
- Contested Claims
- Dahal (2024) — How Can Generative AI Enhance or Hinder Qualitative Studies? A Critical Appraisal from South Asia, Nepal
- Davison et al. (2024) — The Ethics of Using Generative AI for Qualitative Data Analysis
- Dellafiore et al. (2025) — Artificial Intelligence in Qualitative Research: Insights From Experts via Reflexive Thematic Analysis
- Epistemic Flattening
- Fischer & Biemann (2024) — Exploring Large Language Models for Qualitative Data Analysis
- Friese, Nguyen-Trung, Powell & Morgan (2026) — Beyond Binary Positions
- Greenhalgh (2026) — Reflexive Qualitative Research and Generative AI: A Call to Go Beyond the Binary
- AI in Qualitative Research
- Index
- Jowsey et al. (2025) — We Reject the Use of Generative AI for Reflexive Qualitative Research
- Jowsey et al. (2025) — Frankenstein, Thematic Analysis and Generative AI: Quality Appraisal Methods
- LLMs for Qualitative Research
- Montrosse-Moorhead (2023) — Evaluation Criteria for Artificial Intelligence
- Naeem et al. (2025) — Thematic Analysis and Artificial Intelligence: A Step-by-Step Process for Using ChatGPT
- Nicmanis & Spurrier (2025) — Getting Started with AI-Assisted Qualitative Analysis: An Introductory Guide
- Perkins & Roe (2024) — The Use of Generative AI in Qualitative Analysis: Inductive Thematic Analysis with ChatGPT
- Reeping et al. (2025) — Interrogating the Use of LLMs in Qualitative Research Using the Q3 Framework
- Salazar et al. (2025) — Comparison of Qualitative Analyses Conducted by Artificial Intelligence Versus Traditional Methods
- Validity and Trustworthiness
- Wheeler (2026) — Technological Reflexivity in Practice: How MAXQDA, NVivo, and ChatGPT Shape Qualitative Survey Analysis
- Zhang et al. (2025) — Harnessing the Power of AI in Qualitative Research: Exploring, Using and Redesigning ChatGPT