Source
url: https://doi.org/10.1016/j.jwb.2025.101689
raw: raw/andrews_ed.pdf

TL;DR: An editorial from the Journal of World Business arguing that international business scholarship must embrace AI or cede influence. Written in a more provocative register than the qualitative methods literature and from a different disciplinary culture, it introduces the concept of “AI shaming” and makes a market-based argument for AI adoption that the qualitative research community would find alien — but that represents a real position in the broader scholarly conversation about AI in research.

Disciplinary Context

This source sits outside the qualitative methods literature that forms the core of this wiki. Andrews, Fainshmidt & Gaur write from the International Business (IB) and management research tradition — primarily quantitative, hypothesis-driven, journal-impact-oriented. Their concerns (citation metrics, publication velocity, competitive disadvantage relative to AI-using competitors) are not the qualitative researcher’s concerns about interpretive validity and reflexive depth.

This disciplinary gap matters. The paper should be read as a document of how a different research community frames the AI adoption question rather than as a contribution to the methodological debates about AI in qualitative research. It represents the pole of the scholarly AI debate that the qualitative methods literature is implicitly arguing against when it insists on epistemological responsibility.

Problem

The editorial’s problem statement is competitive and institutional: if IB scholars don’t adopt AI tools, they will produce less research, at lower velocity, than AI-using colleagues and competitors. Top journals will increasingly receive AI-assisted manuscripts; editorial policies that restrict AI use will disadvantage researchers who comply. The implicit threat is market selection: adapt or become marginal.

This framing is structurally different from the qualitative debate. Where the qualitative literature asks “does AI-assisted research produce valid findings?”, the IB editorial asks “will AI-using researchers outcompete non-AI-using researchers?” Validity is not the primary concern; competitiveness is.

Key Concepts

AI shaming (Giray 2024): Systematic devaluation of work that acknowledges AI assistance, creating incentives for concealment. The authors argue that editorial policies implicitly or explicitly discouraging AI use produce AI shaming cultures — where researchers hide their AI use rather than disclosing it. This phenomenon has been documented empirically in the qualitative literature: dellafiore-et-al-2025-expert-interviews found Italian qualitative researchers reporting a “culture of concealment” around AI use. The dynamics Giray identifies appear cross-disciplinary.

Co-evolutionary development: AI tools and scholarly practices co-evolve; restricting AI doesn’t preserve current scholarly practices but instead prevents adaptation to changing epistemic conditions. The authors draw on historical analogies (adoption of statistical software, literature databases) to argue that AI is the latest in a long line of technological augmentations that initially generated resistance before becoming normative.

Distributed intelligence: Drawing on distributed cognition literature (the same intellectual tradition that Friese et al. friese-et-al-beyond-binary-2026 use), the paper argues that scholarly knowledge is already produced through human-tool assemblages; AI is an extension of this distributed intelligence, not a rupture with it.

Complementary specialization: Humans and AI have different strengths; optimal research assigns tasks to the system (human or AI) best suited to perform them. This is the efficiency-and-quality argument, supported empirically by Dell’Acqua et al. (2023): AI-assisted consultants completed 12% more tasks, 25% faster, at 40% higher quality than unassisted controls.

Adaptive evolution: Research communities that fail to adapt to environmental changes decline. The “progress or perish” framing deliberately echoes “publish or perish” — the same market logic that drives publication pressure.

Empirical Evidence

Two studies provide the quantitative backbone:

  1. Dell’Acqua et al. (2023): BCG consultants randomized to AI or no-AI conditions. AI users completed 12% more tasks, worked 25% faster, and received 40% higher quality ratings. The authors treat this as generalizable to scholarly work — a significant inferential leap.

  2. Filimonovic et al. (2025): GenAI adoption associated with moderate gains in publication quality, especially for early-career researchers and non-native English speakers. If robust, this is a significant equity finding — but the causal mechanism is not established, and publication “quality” (presumably measured by journal rankings or citations) is a metric the qualitative community would resist.

The “False Dichotomy” Claim

The editorial’s central argument is that “AI use vs. research quality” is a false dichotomy. Like de-paoli-reject-rejection-2026 and friese-et-al-beyond-binary-2026, Andrews et al. contest binary framing — but they do so on different grounds. Where De Paoli and Friese et al. contest the philosophical premises, Andrews et al. contest the empirical claim: the evidence shows AI-assisted researchers produce better work, not worse.

This argument works better in the IB context than in the qualitative context. In quantitative IB research, “better work” has more standardized operationalizations (citation counts, journal impact, replication rates). In qualitative research, “better” is methodologically contested in ways that make the Dell’Acqua and Filimonovic evidence simply non-applicable.

Epistemological Stance

Implicitly post-positivist and pragmatist. The evaluation criteria throughout are efficiency, quality (quantitatively measured), and competitive standing. There is no engagement with interpretivist epistemological commitments — the question of whether AI-assisted research produces valid findings in the qualitative sense is simply not the paper’s concern. The editors are not dismissing interpretivist concerns; they’re writing in a tradition where those concerns are not operative.

Limitations

  • Disciplinary specificity: the argument is calibrated for quantitative, large-N, journal-impact-driven research. Direct application to qualitative research requires unstated assumptions that the paper doesn’t address.
  • The Dell’Acqua et al. (2023) study is from management consulting, not academic research. Generalizing from consultant task performance to scholarly knowledge production is a significant leap.
  • “AI shaming” as a concept is useful, but the paper doesn’t distinguish between AI shaming as suppression of legitimate disclosure and institutional insistence on rigor as appropriate professional gatekeeping. Some editorial resistance to AI is not shaming; it is methodological standard-setting.
  • The market-based argument (“adapt or become marginal”) doesn’t engage with whether adaptation preserves or undermines the distinctive value of the disciplines in question. A discipline that adapts to market pressures by abandoning its epistemological commitments may not have “progressed.”

Connections

  • dellafiore-et-al-2025-expert-interviews — “culture of concealment” finding directly parallels the “AI shaming” concept; the dynamics are documented in the qualitative methods context.
  • jowsey-et-al-2025-we-reject — the Jowsey letter represents exactly the kind of categorical restriction that Andrews et al. argue against, from the other disciplinary direction.
  • contested-claims — introduces a new dimension: not just “should AI be used in qualitative research?” but “are field-wide adoption debates producing cross-disciplinary dynamics (AI shaming, concealment cultures) that the qualitative methods literature should track?”
  • ai-research-ethics — AI shaming as an ethics concept; disclosure norms and concealment incentives are ethics terrain.
  • empirical-findings — Dell’Acqua and Filimonovic findings add to the empirical landscape, with the caveats about disciplinary transferability.
  • chatzichristos-ai-positivism-2025 — generational divide in AI adoption documented in qualitative context; Andrews et al. provide the business-side complement.