Units 4-6 - Collaborative Discussion 2

The Risks and Benefits of AI Writers

This three-week collaborative discussion explores the impact of Large Language Models (LLMs) on writing and communication, focusing on their benefits and risks across different contexts. Based on recent research in AI and language models, the discussion examines real-world applications and challenges, incorporating peer feedback and insights from Units 4-6 course materials.

Initial Post

Large Language Models (LLMs) like GPT-3 have become wildly popular for their human-like writing ability, and they are now used for everything from business memos to creative fiction (Hutson, 2021). This popularity, however, brings both benefits and risks.

In low-risk administrative tasks, LLMs are mostly beneficial with minimal downside: they can draft a polite email or a meeting summary in seconds (Hutson, 2021), saving time and helping with writer's block. Any minor errors or awkward phrasing are easy for a human to catch and fix, so the overall risk stays low.

In medium-risk professional writing (marketing, journalism, etc.), LLMs offer speed but demand caution. They can generate a solid press release or news blurb in a flash, yet over-reliance can backfire. For example, when the tech site CNET used an AI to write finance articles, it later issued corrections for numerous factual errors (Farhi, 2023). Here the benefit of a quick draft is balanced by the risk of inaccuracies or bland, boilerplate prose. Human editors must stay in the loop to fact-check and refine the output.

For higher-risk creative writing, such as fiction and poetry, the stakes are highest. LLMs can mimic an author's style and pour out pages of text, but their originality suffers. The AI's stories often feel derivative because it is remixing past works rather than truly inventing (Wenger and Kenett, 2023). This also raises plagiarism worries; a model might inadvertently reproduce chunks of its training data (Tang, 2023). Moreover, because it is so easy to generate text, there is a risk of overproduction: a flood of formulaic AI content that could overwhelm readers and devalue genuine creativity. In these nuanced tasks, LLMs make intriguing collaborators, but they cannot replace the originality and judgment of a human writer.

Another layer of risk when using LLMs comes from their tendency to be agreeable but unreliable (Goetz et al., 2023). These models are built to please the user, not to challenge them. That might be fine when drafting an email, but in something like ethics education, where students are supposed to develop critical thinking, it's a real problem. Instead of encouraging students to think deeply, LLMs might just echo back the kind of answer the user wants to hear.

As Coeckelbergh (2025) points out, these models are optimized to produce plausible-sounding text, not necessarily true statements. That means even when they are wrong, they sound right. This is dangerous, especially in political or educational settings, where people may trust the output without questioning it, much as happens with other phenomena such as fake news.

References

  • Coeckelbergh, M. (2025) Truth, post-truth, and democracy in the age of artificial intelligence: Epistemological and political risks of language models. AI and Society. Available from: https://doi.org/10.1007/s00148-025-00529-0 [Accessed 7 April 2025].
  • Farhi, P. (2023) A news site used AI to write articles. It was a journalistic disaster. The Washington Post, 17 January. Available from: https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/ [Accessed 5 April 2025].
  • Goetz, L., Trengove, M., Trotsyuk, A. and Federico, C.A. (2023) Unreliable LLM bioethics assistants: Ethical and pedagogical risks. The American Journal of Bioethics, 23(10), pp.89–91. Available from: https://doi.org/10.1080/15265161.2023.2249843 [Accessed 7 April 2025].
  • Hutson, M. (2021) Robo-writers: the rise and risks of language-generating AI. Nature, 591(7848), pp.22–25.
  • Tang, B. L. (2023) The underappreciated wrong of AIgiarism – bypass plagiarism that risks propagation of erroneous and bias content. EXCLI Journal, 22, pp.907–910. Available from: https://doi.org/10.17179/excli2023-6435.
  • Wenger, E. and Kenett, Y. (2023) We're Different, We're the Same: Creative Homogeneity Across LLMs. arXiv preprint arXiv:2304.00008.

Discussion Summary

Large Language Models (LLMs) like GPT-3 have transformed the writing landscape, offering fluent and efficient text generation. My initial post highlighted how their impact varies depending on the context—from helpful drafting tools in low-risk scenarios to potentially problematic agents in high-risk domains such as journalism or creative writing.

Peers expanded on these ideas with insightful contributions. One peer warned about the erosion of users' own communication skills due to over-reliance on AI for routine tasks (Hutson, 2021). Another highlighted the political and ethical dangers of AI-generated content influencing public opinion, as LLMs can sound convincingly accurate even when they are factually wrong (Coeckelbergh, 2025; Goetz et al., 2023). This supports the need for careful supervision and responsible deployment.

A useful framework was also introduced: using AI either as a drafting assistant or as a feedback tool (Kapuściński, 2025; Thornburn, 2024). This approach allows humans to remain in control while still benefiting from AI's efficiency. The dual-use strategy reflects the broader theme across the posts: that AI is a collaborator, not a replacement.

Across all contributions, the consensus is clear: while LLMs offer undeniable advantages in terms of speed and accessibility, they pose risks related to reliability, originality, and ethical integrity. Their "agreeable" tone (Goetz et al., 2023) can mask inaccuracies and reinforce confirmation bias. Used without critical thinking, they may undermine the very skills they aim to support.

References

  • Coeckelbergh, M. (2025) Truth, post-truth, and democracy in the age of artificial intelligence: Epistemological and political risks of language models. AI and Society. Available from: https://doi.org/10.1007/s00148-025-00529-0 [Accessed 7 April 2025].
  • Goetz, L., Trengove, M., Trotsyuk, A. and Federico, C.A. (2023) Unreliable LLM bioethics assistants: Ethical and pedagogical risks. The American Journal of Bioethics, 23(10), pp.89–91. Available from: https://doi.org/10.1080/15265161.2023.2249843 [Accessed 7 April 2025].
  • Hutson, M. (2021) Robo-writers: the rise and risks of language-generating AI. Nature, 591(7848), pp.22–25.
  • Kapuściński, M. (2025) ChatGPT 4.5 – What's New? Practical Examples and Applications. TTMS. Available from: https://ttms.com/chatgpt-whats-new [Accessed 17 April 2025].
  • Thornburn, R. (2024) How to use AI to Generate Student Feedback. English for Asia. Available from: https://hongkongtesol.com/ai-feedback [Accessed 17 April 2025].

Reflection

The discussion on AI writers and their impact across different domains has been particularly enlightening. The initial analysis of risk levels in different writing contexts, from administrative to creative work, helped frame the broader implications of LLM technology. Peer responses enriched this perspective by highlighting specific concerns about skill erosion and the subtle ways AI can influence public discourse.

What struck me most was the parallel between the discussion topic and our own use of AI in education. As we debated the risks of AI writers potentially hampering critical thinking in students, we ourselves demonstrated the importance of human-led critical analysis and peer review. The framework suggested by peers—using AI as either a drafting assistant or feedback tool—resonated with my own evolving understanding of how to balance AI's capabilities with human oversight.

This discussion reinforced that while AI can be an incredibly powerful tool for augmenting human capabilities, its role should be carefully considered within each context. The key is not to avoid AI tools entirely, but to develop frameworks for their responsible use that preserve and enhance human agency rather than diminish it.
