TL;DR: Using AI in qualitative research raises overlapping ethical issues around data privacy, informed consent, authorship, algorithmic bias, and transparency, all of them more serious than they appear at first.

Data privacy and re-identification

Qualitative data — especially interview transcripts — is hard to truly anonymize. Research shows that as few as 15 demographic attributes can re-identify 99%+ of individuals. When sensitive transcripts are uploaded to commercial AI APIs, researchers lose control over:

  • How data is processed or stored
  • Whether the model uses data for further training
  • The risk of re-identification through contextual details

This is especially acute for vulnerable populations: trauma survivors, undocumented individuals, people with stigmatized health conditions. (brailas-ai-qualitative-research-2025) argues that for such groups, using AI in analysis should be avoided or approached with extreme caution.

Generic consent (“AI tools were used in the analysis”) is insufficient. Participants must be informed of:

  • Which specific AI systems will process their data
  • At which stages of analysis
  • What privacy protections are in place (e.g. anonymization prior to upload; a minimal sketch follows this list)
  • The realistic risks of re-identification
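One concrete protection is to pseudonymize transcripts locally before any text reaches an external API. Below is a minimal sketch, assuming a plain-text transcript and a hand-curated list of identifying terms; the names, mapping, and replacement scheme are illustrative, not a prescribed tool from the cited sources.

```python
import re

# Hypothetical example: replace known identifying terms with stable
# pseudonyms *before* the transcript leaves the researcher's machine.
# The term list must be curated per study; automated NER alone will
# miss contextual identifiers (job titles, rare events, small towns).
PSEUDONYMS = {
    "Maria Schmidt": "Participant-03",
    "Riverside Clinic": "Clinic-A",
    "Oakdale": "Town-1",
}

def pseudonymize(text: str, mapping: dict[str, str]) -> str:
    """Apply whole-word replacements for each known identifier."""
    for original, placeholder in mapping.items():
        text = re.sub(rf"\b{re.escape(original)}\b", placeholder, text)
    return text

if __name__ == "__main__":
    raw = "Maria Schmidt said Riverside Clinic in Oakdale turned her away."
    print(pseudonymize(raw, PSEUDONYMS))
    # -> "Participant-03 said Clinic-A in Town-1 turned her away."
```

The mapping itself stays on the researcher's machine, so the re-identification key never travels with the text.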

The evolving nature of AI law and policy adds a temporal dimension: transcripts discussing conduct that is legal at the time of the interview may later become sensitive, or even legally risky, if the legal landscape shifts.

Authorship and interpretive responsibility

(anis-french-ai-qualitative-research-2023) and (brailas-ai-qualitative-research-2025) both emphasize: AI cannot be accorded authorship or interpretive authority. The human researcher must retain control over the interpretive repertoire — the values, assumptions, and theoretical framings that structure analysis. AI automates the mechanical; interpretation remains with the human.

This is not just a normative claim. Practically, delegating interpretation to AI means offloading errors, biases, and omissions onto an opaque system that cannot be held to account, while the researcher remains responsible for the findings.

Algorithmic bias

AI systems encode the biases present in their training data. (anis-french-ai-qualitative-research-2023) cites Amazon's recruiting tool, which learned to downgrade résumés containing the word "women's", treating "women's chess club captain" as a penalty rather than as evidence of leadership. For qualitative research, this means:

  • AI will systematically underrepresent perspectives outside dominant cultural patterns
  • Researchers doing critical work (challenging social norms, foregrounding marginalized voices) must actively design coding schemes to counteract this (see the audit sketch after this list)
  • Treating AI as neutral is itself a form of bias — see epistemic-flattening
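One way to make that counteracting concrete, not prescribed by the cited sources but consistent with their concern, is to audit how AI-assigned codes are distributed across participant groups. A minimal sketch, assuming excerpts have already been coded and linked to a group label (all field names and data are hypothetical):

```python
from collections import Counter, defaultdict

# Hypothetical audit: count AI-assigned codes per participant group to
# surface codes that rarely or never appear for some groups. A skewed
# distribution is not proof of bias, but it flags where the researcher
# should re-read the underlying excerpts.
coded_excerpts = [
    {"group": "majority", "code": "coping"},
    {"group": "majority", "code": "agency"},
    {"group": "minoritized", "code": "coping"},
    # ... one record per AI-coded excerpt
]

def code_rates_by_group(records):
    """Return {group: {code: share of that group's excerpts}}."""
    counts = defaultdict(Counter)
    for r in records:
        counts[r["group"]][r["code"]] += 1
    return {
        group: {code: n / sum(c.values()) for code, n in c.items()}
        for group, c in counts.items()
    }

print(code_rates_by_group(coded_excerpts))
```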

Transparency and reproducibility

(bennis-ai-thematic-analysis-2025) calls for standardized reporting checklists covering full process transparency. (brailas-ai-qualitative-research-2025) recommends keeping detailed prompt logs in appendices; a minimal logging sketch follows the list below. Key transparency requirements:

  • Which model(s) were used, and at which version
  • The full prompts used (or at least their structure)
  • How AI output was evaluated and validated
  • Where human judgment overrode AI output
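A lightweight way to meet these requirements is to write one structured record per AI interaction as the analysis proceeds and export the log as an appendix. A minimal sketch; the field names are illustrative, not taken from the cited checklists:

```python
import json
from datetime import datetime, timezone

# Hypothetical prompt-log record covering the transparency items above:
# model and version, the prompt (or its template), how the output was
# evaluated, and whether human judgment overrode it.
def log_interaction(path, *, model, model_version, prompt, output,
                    validation, human_override=None):
    """Append one JSON line per AI interaction to the study log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "validation": validation,          # how the output was checked
        "human_override": human_override,  # None if accepted as-is
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_interaction(
    "prompt_log.jsonl",
    model="example-llm",              # hypothetical model name
    model_version="2025-01",
    prompt="Suggest candidate codes for the attached excerpt ...",
    output="coping; institutional distrust",
    validation="compared against first author's independent coding",
    human_override="kept 'coping', rejected 'institutional distrust'",
)
```

An append-only JSON Lines file keeps the log auditable and easy to attach as supplementary material.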

See also