Think Again: Why AI Might Save Academic Writing - A Student’s Counter‑Intuitive Take on the Boston Globe’s Alarm
— 4 min read
Most people believe AI is destroying good writing. They are wrong.
When the Boston Globe ran the headline “AI is destroying good writing,” the reaction was instant: alarm bells, angry tweets, and a chorus of professors warning their classes to ditch the new tools. The op-ed paints a picture of algorithms churning out flawless sentences while human authors become obsolete. But the story stops at the surface. As a former startup founder turned storyteller, I’ve seen technology strip the mundane from creative work, leaving space for higher-order thinking. For students and researchers, the real danger isn’t AI itself - it’s the myth that AI can’t coexist with rigorous scholarship.
In the Globe piece, the author argues that “machines can generate pages of prose in seconds, eroding the discipline required to craft a thoughtful argument.” That claim feels dramatic, yet it sidesteps a crucial question: what does “good writing” actually mean in academia? Is it the elegant turn of phrase, or is it the ability to synthesize evidence, argue persuasively, and push knowledge forward? If we define good writing by its purpose, AI might be less a villain and more a catalyst.
Academic Writing Is About Argument, Not Ornament
University syllabi often stress style - the Oxford comma, the perfect thesis statement, the flawless citation format. Those elements are important, but they are scaffolding for the core activity: building an argument that advances a field. The Boston Globe’s alarm conflates the loss of ornamental prose with the loss of intellectual rigor. In reality, AI excels at the former and can even assist with the latter.
Consider a literature review. A graduate student can spend weeks combing through databases, extracting abstracts, and noting trends. An AI model, trained on millions of scholarly articles, can instantly surface the most cited works, map citation networks, and highlight gaps. The student still decides which gap to fill and how to frame the contribution. The AI does not replace the critical judgment; it removes the mechanical drudgery that often stalls progress.
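To make that concrete, here is a minimal sketch in Python of how "surfacing the most cited works" reduces to simple counting once citation data is in hand. The papers and citation pairs below are invented for illustration; in practice the data would come from a bibliographic database, not be typed by hand.

```python
from collections import Counter

# Toy citation graph: (citing_paper, cited_paper) pairs.
# The titles are hypothetical; real data would come from a
# bibliographic database export.
citations = [
    ("Smith 2021", "Doe 2015"),
    ("Smith 2021", "Lee 2018"),
    ("Chen 2022", "Doe 2015"),
    ("Chen 2022", "Smith 2021"),
    ("Park 2023", "Doe 2015"),
]

# Count how often each paper is cited within the corpus.
cited_counts = Counter(cited for _, cited in citations)

# The most-cited work is a natural anchor for a review;
# papers citing it but not each other hint at gaps.
most_cited, count = cited_counts.most_common(1)[0]
print(most_cited, count)  # "Doe 2015" 3
```

The point of the sketch is the division of labor: the machine does the counting, while deciding which under-connected cluster represents a gap worth filling remains the student's call.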
When we judge writing solely by its surface polish, we ignore the real metric of academic success: the novelty and robustness of the claim. AI tools that suggest phrasing or tighten syntax free up mental bandwidth for deeper analysis. In that sense, the “destruction” the Globe warns about is a mischaracterization - the technology is merely reallocating effort from copying to creating.
AI as a Research Ally: From Drafts to Data
Students frequently ask, “Can I use ChatGPT for my thesis?” The answer is nuanced, but the binary fear-mongering in the Globe article obscures the practical benefits. AI can act as a first-draft generator, a language coach, and even a data-synthesis partner. When a non-native English speaker drafts a methodology section, an AI can flag ambiguous phrasing, suggest clearer alternatives, and ensure consistency across sections. The researcher still owns the content; the AI is a sophisticated editor.
Practical tip for students: Use AI to produce a 200-word summary of each source you read. Then, compare the AI’s synthesis with your notes. The gaps you discover become the focus of your critical analysis.
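One crude but workable way to run that comparison, sketched here in Python with invented note text (this is an illustration of the idea, not a prescribed workflow), is a keyword diff: terms present in your notes but absent from the AI's summary, and vice versa, mark where the two readings diverge.

```python
import re

def keywords(text: str, min_len: int = 5) -> set:
    """Lowercase words of at least min_len letters, as a rough keyword set."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= min_len}

# Invented example text for illustration.
ai_summary = "The study measures replication rates across psychology journals."
my_notes = "Replication rates vary; the sampling method seems underpowered."

# Terms each side mentions that the other misses: candidate gaps to probe.
missed_by_ai = keywords(my_notes) - keywords(ai_summary)
missed_by_me = keywords(ai_summary) - keywords(my_notes)
print(sorted(missed_by_ai))
print(sorted(missed_by_me))
```

Here the diff would flag "sampling" and "underpowered" as things you noticed that the summary did not, which is exactly the kind of discrepancy worth turning into critical analysis.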
"AI can produce a 500-word essay in under a minute," the Boston Globe notes, underscoring the speed advantage that, when harnessed responsibly, can accelerate the research cycle.
Speed alone does not equal quality, but speed can reduce the time spent on repetitive tasks, allowing scholars to devote more hours to hypothesis testing, peer discussions, and experimental design - the true engines of discovery.
Beyond the ‘Destruction’ Narrative: Hidden Bias and Ethical Blind Spots
The Globe article’s focus on “destruction” diverts attention from a more pressing issue: bias embedded in the models themselves. If students rely on AI to suggest phrasing, they may inadvertently adopt the language patterns of the training data, which often reflect dominant cultural norms. This can homogenize academic voices, silencing minority perspectives.
However, recognizing bias does not mean rejecting AI outright. It means integrating bias-awareness into the workflow. For example, when an AI suggests a citation, the student should verify the source’s diversity of authorship and methodological rigor. By treating AI as a partner that requires scrutiny, scholars develop a habit of critical evaluation that extends to all research tools.
Thus, the conversation should shift from “AI destroys writing” to “how do we embed ethical AI use into scholarly norms?” This reframing opens space for policy development rather than fear-driven avoidance.
Pedagogical Implications: Teaching Students to Harness, Not Fear, AI
If educators cling to the notion that AI will erode writing skills, they risk depriving students of a valuable learning aid. Instead, curricula can incorporate AI as a teaching tool. Imagine a workshop where students submit a paragraph, receive AI feedback, and then rewrite based on that feedback. The iterative process reinforces the mechanics of style while sharpening analytical judgment.
Research shows that active learning - where students critique and improve AI suggestions - enhances retention of writing conventions. The Boston Globe’s op-ed overlooks this pedagogical upside, focusing solely on the loss of “craft.” In reality, the craft evolves. Just as word processors replaced typewriters, AI will replace manual editing. The skill set that matters now is the ability to evaluate and direct AI output.
Moreover, AI can democratize access to high-quality writing support. Students from under-resourced institutions often lack tutoring services. An AI assistant that offers instant grammar checks and citation formatting levels the playing field, allowing talent to shine regardless of background. The fear of “destroying good writing” becomes a fear of widening existing inequities - a concern that can be mitigated through thoughtful integration.
The Future of Scholarly Publishing: AI-Assisted Peer Review and Reproducibility
Beyond the author’s desk, AI is poised to reshape the entire publication ecosystem. Peer reviewers spend hours checking for plagiarism, statistical errors, and methodological consistency. AI can flag these issues early, allowing reviewers to focus on the novelty and significance of the work. The Globe’s alarm does not address this downstream benefit.
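A toy version of the plagiarism-flagging step can be sketched as shared word n-grams between a submission and a known source. To be clear, real screening tools are far more sophisticated, and the sentences below are invented; the sketch only shows why this screening is mechanical work a machine can front-load for a human reviewer.

```python
def ngrams(text: str, n: int = 5) -> set:
    """Set of word n-grams, lowercased, for crude overlap detection."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

# Invented sentences for illustration only.
submission = "the rapid growth of large language models has reshaped academic writing"
known_source = "the rapid growth of large language models raises new ethical questions"

# Shared 5-grams flag passages worth a human look - not an automatic verdict.
shared = ngrams(submission) & ngrams(known_source)
print(len(shared))  # 3 overlapping 5-grams
```

The reviewer's judgment still decides whether an overlap is quotation, coincidence, or misconduct; the machine merely points to where that judgment should be spent.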
Critics argue that AI could “standardize” writing, making papers look alike. Yet standardization in format and clarity is a long-sought goal for scientific communication. The real challenge is preserving intellectual diversity while embracing efficiency. By establishing transparent AI-use policies, the academic community can reap the benefits of speed and consistency without sacrificing originality.
In short, the Boston Globe’s warning captures a moment of cultural anxiety but overlooks the systemic gains AI offers when paired with rigorous oversight. The uncomfortable truth for students and researchers is this: resisting AI out of nostalgia will leave them slower, more isolated, and less competitive in a world where the tools of tomorrow are already here.