Source
url: https://doi.org/10.1145/3663433.3663456
raw: raw/Sinha_The Role of Generative AI in Qualitative Research_ GPT-4_s Contributions to a Grounded Theory Analysis.pdf

TL;DR: A reflective vignette-based account of using GPT-4 in open and focused coding for a grounded theory study of responsive teaching. Rather than reporting reliability statistics, the paper documents pivotal moments in the human-AI analytic collaboration — where AI contributed productively and where its outputs prompted deeper scrutiny of the data. One of the few papers in the corpus to engage specifically with grounded theory rather than thematic analysis.

Problem

Grounded theory (GT) occupies a peculiar position in the AI-TA literature: it is frequently cited but rarely studied directly. Most empirical benchmarks use thematic analysis; most framework papers adapt thematic analysis procedures. The assumption that findings about AI’s capabilities in TA transfer to GT is rarely examined.

GT creates specific conditions for AI involvement. Its core logic — constant comparison, theoretical sampling, iterative movement between data and emerging theory — requires ongoing analytic judgment, not just pattern identification. The question is not whether AI can generate codes (it can) but whether AI can participate productively in the kind of iterative, theory-building analytic process that GT requires.

Sinha et al. address this through a reflective account of a specific study: a grounded theory analysis of a teacher’s “talk moves” during a theory-building lesson with middle school students. The research goal — building a nuanced conceptualization of responsive teaching — requires fine-grained, theoretically grounded coding of classroom discourse. This is a demanding test case for AI involvement.

Approach

The paper uses a vignette methodology — detailed narrative accounts of specific moments in the analytic process where GPT-4’s involvement made a difference. This is methodologically distinctive in the AI-TA corpus: most papers report aggregate statistics or procedural descriptions; Sinha et al. document the texture of human-AI collaboration as it unfolds.

GPT-4 (accessed through Microsoft Copilot) was involved in:

  • Open coding (initial exploratory stage): Generating first-pass codes from classroom transcript segments
  • Focused coding (later stages): Comparing and refining codes as the analysis progressed
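The paper accessed GPT-4 through Copilot's chat interface rather than programmatically, but the open-coding step above can be sketched as a prompt-construction routine. This is an illustration only — the function name, prompt wording, and gerund-based coding instruction (a Charmaz convention) are assumptions, not the authors' actual prompts:

```python
def build_open_coding_prompt(segment: str, study_context: str) -> str:
    """Assemble a hypothetical first-pass open-coding prompt for one
    transcript segment. Wording is illustrative, not from the paper."""
    return (
        f"You are assisting a grounded theory analysis of {study_context}.\n"
        "Perform open coding on the transcript segment below: propose "
        "short, gerund-based codes (one per line) that stay close to the "
        "data, each with a one-sentence rationale.\n\n"
        f"Transcript segment:\n{segment}"
    )

# Hypothetical usage with an invented segment
prompt = build_open_coding_prompt(
    segment='Teacher: "What makes you say the block will sink?"',
    study_context="a teacher's talk moves during a theory-building lesson",
)
```

Keeping the prompt assembly in a function makes each model query reproducible in the analytic record — the same segment and context always yield the same prompt text.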

The analytic record consisted of researcher notes, analytic memos, and video recordings of team meetings discussing insights in response to GPT-4’s input. This record became the data for the vignette analysis — the researchers studied their own analytic process.

Three types of pivotal moments are documented in vignettes:

  1. GPT-4 generating a code the team had overlooked, prompting productive discussion
  2. GPT-4 producing a code that seemed wrong, triggering deeper examination of the transcript
  3. GPT-4 enabling faster comparison of code definitions across sessions, supporting the constant comparison GT requires
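The third pivotal moment — comparing code definitions across sessions — was done conversationally with GPT-4 in the paper. A simpler programmatic analogue of that constant-comparison support can be sketched with standard string similarity; the function, data, and threshold here are all assumptions for illustration, not the authors' procedure:

```python
from difflib import SequenceMatcher

def flag_similar_codes(session_a: dict, session_b: dict, threshold: float = 0.6):
    """Flag pairs of code definitions from two coding sessions whose wording
    overlaps enough to warrant a constant-comparison discussion.
    Inputs map code labels to working definitions (illustrative schema)."""
    pairs = []
    for label_a, def_a in session_a.items():
        for label_b, def_b in session_b.items():
            ratio = SequenceMatcher(None, def_a.lower(), def_b.lower()).ratio()
            if ratio >= threshold:
                pairs.append((label_a, label_b, round(ratio, 2)))
    return pairs

# Invented example codes from two hypothetical coding sessions
overlaps = flag_similar_codes(
    {"revoicing": "teacher restates a student idea to the class"},
    {"restating": "teacher restates a student's idea for the whole class"},
)
```

Flagged pairs are candidates for merging or differentiation — the analytic judgment itself stays with the team, mirroring the paper's division of labor.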

AI’s Role

AI is positioned as a productive analytic partner in grounded theory — not a coder that produces outputs to be checked, but an interlocutor whose responses (including wrong or unexpected ones) advance the team’s analytic thinking. The second pivotal moment type — GPT-4 producing a wrong code that triggered productive scrutiny — is particularly valuable: it shows that AI “errors” can be analytically generative rather than just problematic.

This instantiates the anis-french-ai-qualitative-research-2023 “explicatory via failure” argument in grounded theory practice.

The team retained full analytic authority. AI’s contributions were filtered through researcher judgment at every stage — accepted, rejected, or used as provocations for further inquiry.

Epistemological Stance

Interpretivist / grounded theory, within an education research context. The paper works within Charmaz’s constructivist grounded theory tradition rather than the classical objectivist version — meaning is co-constructed between researcher and data, not discovered within it. This epistemological commitment shapes how AI’s contributions are evaluated: not as objective discoveries but as interactional prompts that advance the team’s interpretive work.

The vignette methodology is epistemologically consistent: it treats the analytic process as data worthy of systematic reflection, not just a means to an end.

Rigor and Trustworthiness

The vignette approach produces thick description of human-AI collaboration that aggregate statistics cannot capture. Readers can assess the quality of the collaborative process — whether the AI’s contributions were used critically, whether the team’s analytic judgments were defensible — in a way that κ scores do not enable.

The use of multiple data sources (notes, memos, meeting recordings) for the vignette analysis provides triangulation within the reflective study itself. This is methodologically careful.

The analysis is explicitly preliminary — the grounded theory study was ongoing at time of publication. The paper reports on the open coding stage, not the full GT process. Final claims about the study’s findings are reserved for future publication.

Limitations

The vignette methodology, while rich in detail, is not replicable in the conventional sense. Two teams using GPT-4 with the same data would have different pivotal moments because their interpretive starting points, research questions, and team dynamics differ. The paper documents one collaborative experience, not a generalizable procedure.

The specific domain (responsive teaching, classroom discourse) is highly specialized. AI’s behavior in this context — where coding requires understanding pedagogical theory and classroom interaction patterns — may not generalize to other GT applications.

The paper reports on open coding only. Whether GPT-4’s contributions remain productive in the more abstract, theory-building stages of GT (focused coding, theoretical saturation, selective coding) is an open question.

Connections