| url | https://doi.org/10.46743/2160-3715/2024.6637 |
|---|---|
| raw | raw/Dahal_How Can Generative AI (GenAI) Enhance or Hinder Qualitative Studi.pdf |
TL;DR: A critical appraisal from a researcher at Kathmandu University School of Education — the only Global South perspective in the corpus — examining GenAI’s potential for qualitative research as co-author, conversational platform, and research assistant, while centering acknowledgment, authorship, and equitable access. Distinctive for practicing the transparency it advocates: Dahal discloses that ChatGPT, Google Bard, and Bing Chat were all used in writing the paper, and specifies exactly how.
## Problem
The overwhelming majority of literature on AI-assisted qualitative research is produced at North American, European, or Australian institutions. The concerns it articulates — reliability, validity, reflexivity, epistemological coherence — are real, but they are not necessarily the concerns of researchers working under different material and institutional conditions.
From a Global South perspective, the AI question looks different. The equity argument is not abstract: researchers at institutions without reliable high-speed internet, institutional subscriptions to premium AI tools, or access to English-language research communities face a fundamentally different set of opportunities and risks than their counterparts at resource-rich institutions. AI could lower some barriers (language support, literature access, methodological scaffolding) while raising others (English-language dominance, dependency on commercial tools, training data bias against non-Western contexts).
Dahal also addresses a problem that much of the corpus treats as secondary: the acknowledgment of AI use. How, specifically, should researchers in qualitative studies disclose that they used AI — not in the abstract but in terms of which tool, for which task, and how they evaluated and modified the output?
## Approach
This is a critical appraisal commentary — theoretical and reflective rather than empirical. Dahal examines three roles for GenAI in qualitative research:
1. AI as co-author. The most contested framing. When AI contributes to conceptualization, structuring, writing, or analysis in a meaningful way, what are the implications for authorship, accountability, and credit? The academic community has not resolved this question, and different journals have adopted inconsistent policies. Dahal argues for explicit acknowledgment of AI’s specific contributions rather than either crediting AI as an author (which misrepresents its nature) or concealing its use (which violates transparency norms).
2. AI as conversational platform. Using AI as a dialogue partner for thinking through research problems, exploring interpretive possibilities, and challenging assumptions. This is the abductive/dialogic role developed more formally in friese-caai-framework-2026 and costa-abductivai-2025. Dahal approaches it from a practitioner perspective rather than a methodological framework perspective.
3. AI as research assistant. The most pragmatic framing: AI for literature review, coding assistance, theme identification, and writing support. The benefits (efficiency, accessibility) and risks (bias, hallucination, overreliance) are examined alongside the equity question: who has access to which AI tools, under what conditions, and what does differential access mean for research quality and career equity?
The paper’s most distinctive methodological contribution is its radical transparency about its own AI use. The acknowledgments section lists three separate AI tools, each with a specific task description:
- ChatGPT — used to brainstorm and structure content
- Google Bard — used to distill key themes from academic papers
- Bing Chat — used to refine language and ensure consistent flow
This level of specificity — naming the tool, the task, and the nature of its contribution — is the model for AI acknowledgment that the paper advocates.
## AI’s Role
AI is positioned as a multi-role resource requiring explicit acknowledgment — neither categorically appropriate nor categorically inappropriate, but requiring transparent documentation of each specific use. The three-role framework (co-author, conversational platform, research assistant) provides a more granular vocabulary for AI acknowledgment than generic “AI was used” statements.
The equity dimension is central: AI’s role is positive when it lowers access barriers for researchers from underrepresented contexts, and problematic when its language dominance (English-centric training data), cultural bias, or infrastructure requirements reproduce existing inequalities.
## Epistemological Stance
Critical / post-colonial, grounded in the author’s institutional position at a Nepali university and shaped by the experience of doing research at the margins of the global academic system. The paper does not fit neatly into the small-q/Big-Q distinction — it is more concerned with who gets to do research, and on what terms, than with the epistemological foundations of any specific method.
The transparency stance reflects a pragmatist ethic: what matters is that AI use is honest, accountable, and disclosed in enough detail for readers to form their own judgment.
## Rigor and Trustworthiness
The paper’s trustworthiness claim rests on the author’s positioned knowledge — writing from lived experience of doing qualitative research in Nepal provides access to concerns that outside-in accounts cannot replicate. The acknowledgment of AI use in the paper itself is not merely transparency; it is an analytic decision to demonstrate the practice the paper advocates.
As a commentary, the paper does not make empirical claims requiring independent verification. Its contribution is perspective and practical guidance rather than data.
## Limitations
The Global South perspective is valuable precisely because it is underrepresented, but “Global South” encompasses enormous diversity that a single Nepali perspective cannot capture. The concerns specific to Nepal (infrastructure constraints, Nepali-language research contexts, institutional conditions at Kathmandu University) may differ substantially from concerns in other Global South contexts.
The paper’s treatment of the epistemological and methodological concerns in the corpus is lighter than its treatment of the equity and acknowledgment concerns. Readers looking for deep engagement with reflexivity, validity, or epistemological coherence should look elsewhere (brailas-ai-qualitative-research-2025, nicmanis-spurrier-ai-guide-2025).
The specific guidance on acknowledgment — while practical and useful — reflects a snapshot of rapidly evolving institutional norms. Journal policies on AI disclosure are still changing, and the paper’s specific recommendations may become dated.
## Connections
- llm-qualitative-research — the broader landscape; Dahal represents an underrepresented geographic and institutional perspective
- ai-research-ethics — acknowledgment, authorship, and transparency as ethical requirements; specific guidance on disclosure practice
- anis-french-ai-qualitative-research-2023 — the equity argument from the corpus; Dahal extends it from an abstract advocacy position to lived experience
- epistemic-flattening — cultural and linguistic dimensions; Nepali-language research contexts face specific AI limitations that English-centric training data creates
- sakaguchi-chatgpt-japanese-2025 — parallel evidence of AI performance limitations in non-English contexts; Japanese and Nepali are different languages but the structural concern is the same
- chatzichristos-ai-positivism-2025 — the structural inequality dimension the paper calls for; geography is one of the inequalities Chatzichristos identifies as needing study
- davison-ethics-genai-2024 — parallel ethics argument; Dahal provides the equity/access complement to Davison’s data ownership argument
## What links here
- AI Research Ethics
- Anis & French (2023) — Efficient, Explicatory, and Equitable: Why Qualitative Researchers Should Embrace AI, but Cautiously
- Chatzichristos (2025) — Qualitative Research in the Era of AI: A Return to Positivism or a New Paradigm?
- Contested Claims
- Dellafiore et al. (2025) — Artificial Intelligence in Qualitative Research: Insights From Experts via Reflexive Thematic Analysis
- Empirical Findings
- Epistemology — Stances Across the Literature
- AI in Qualitative Research
- Index
- Jowsey et al. (2025) — We Reject the Use of Generative AI for Reflexive Qualitative Research
- Qualitative AI Methods — A Living Taxonomy
- Sakaguchi et al. (2025) — Evaluating ChatGPT in Qualitative Thematic Analysis in the Japanese Clinical Context
- Validity and Trustworthiness
- Wheeler (2026) — Technological Reflexivity in Practice: How MAXQDA, NVivo, and ChatGPT Shape Qualitative Survey Analysis