Because of ChatGPT, I Quit Teaching
This fall is the first in nearly 20 years that I am not returning to the classroom. For most of my career, I taught writing, literature, and language, primarily to university students. I quit, in large part, because of large language models (LLMs) like ChatGPT.
Virtually all experienced scholars know that writing, as historian Lynn Hunt has argued, is “not the transcription of thoughts already consciously present in [the writer’s] mind.” Rather, writing is a process closely tied to thinking. In graduate school, I spent months trying to fit pieces of my dissertation together in my mind and eventually found I could solve the puzzle only through writing. Writing is hard work. It is sometimes frightening. With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.
In my most recent job, I taught academic writing to doctoral students at a technical college. My graduate students, many of whom were computer scientists, understood the mechanisms of generative AI better than I did. They recognized LLMs as unreliable research tools that hallucinate and invent citations. They acknowledged the environmental impact and ethical problems of the technology. They knew that models are trained on existing data and therefore cannot produce novel research. However, that knowledge did not stop my students from relying heavily on generative AI. Several students admitted to drafting their research in note form and asking ChatGPT to write their articles.
As an experienced teacher, I am familiar with pedagogical best practices. I scaffolded assignments. I researched ways to incorporate generative AI in my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing and, for those grade-oriented students, that it does not result in A-level work. It did not matter. The students still used it.
In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. “It makes my writing look fancy,” one PhD student protested when I pointed to weaknesses in AI-revised text.
My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of “duplicative language” are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing.
Students who outsource their writing to AI lose an opportunity to think more deeply about their research. In a recent article on art and generative AI, author Ted Chiang put it this way: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.” Chiang also notes that the hundreds of small choices we make as writers are just as important as the initial conception. Chiang is a writer of fiction, but the logic applies equally to scholarly writing. Decisions regarding syntax, vocabulary, and other elements of style imbue a text with meaning nearly as much as the underlying research.
Generative AI is, in some ways, a democratizing tool. Many of my students were non-native speakers of English. Their writing frequently contained grammatical errors. Generative AI is effective at correcting grammar. However, the technology often changes vocabulary and alters meaning even when the only prompt is “fix the grammar.” My students lacked the skills to identify and correct subtle shifts in meaning. I could not convince them of the need for stylistic consistency or the need to develop voices as research writers.
The problem was not recognizing AI-generated or AI-revised text. At the start of every semester, I had students write in class. With that baseline sample as a point of comparison, it was easy for me to distinguish between my students’ writing and text generated by ChatGPT. I am also familiar with AI detectors, which purport to indicate whether something has been generated by AI. These detectors, however, are faulty. AI-assisted writing is easy to identify but hard to prove.
As a result, I found myself spending many hours grading writing that I knew was generated by AI. I noted where arguments were unsound. I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as “delves into”). That is, I found myself spending more time giving feedback to AI than to my students.
So I quit.
The best educators will adapt to AI. In some ways, the changes will be positive. Teachers must move away from mechanical activities and simple summary assignments. They will find ways to encourage students to think critically and learn that writing is a way of generating ideas, revealing contradictions, and clarifying methodologies.
However, those lessons require that students be willing to sit with the temporary discomfort of not knowing. Students must learn to move forward with faith in their own cognitive abilities as they write and revise their way into clarity. With few exceptions, my students were not willing to enter those uncomfortable spaces or remain there long enough to discover the revelatory power of writing.