Cheating has always been a problem in higher education, but ChatGPT has caused it to metastasize.
The Chronicle of Higher Education reports that the percentage of students at one college who admit cheating has jumped “from 35 percent in 2019 to 65 percent in 2024.” This school is not an outlier.
Teachers can see how bad it’s gotten. One professor emailed a student caught using ChatGPT to write a paper to warn that she would fail the course if she did it again. The student replied with a heartfelt apology but soon did it again. It turned out that the apology itself had been spewed by ChatGPT.
How to combat the trend?
There are many ways, if one is serious about it. Detecting prose that is ChatGPT-spawned is usually not hard. But if students suffer no real costs for cheating, as is often the case, cheating will remain routine.
“Researchers have long documented that many students cheat at some point in their educational career,” the author of the Chronicle article explains, “and that their motivations are situational rather than character based.”
Talk of motivations is off-point. Students’ actions are “situational” in the sense that they respond to incentives. Students come in a wide range of character, I hazard, each individual’s integrity built up by a long string of past decisions, which were, undoubtedly, influenced by incentives. When strict honesty is not taught and rewarded, and gross dishonesty not condemned and punished (with bad grades or expulsion), then even students of strong character will be tempted to cheat, and those of weaker character will cheat.
This is Common Sense. I’m Paul Jacob.
Illustration created with Midjourney
—
3 replies on “A Cheating Culture”
(I’m tempted to have ChatGPT compose a comment. However, so far I have chosen not to launch any of the LLMs in any context.)
I want to suggest that a long-standing part of the problem here is that many people have no sense of why some behaviors are and should be rewarded, and why others shouldn’t be and aren’t. These people imagine something like fully automated luxury communism, dressed up to look as if they are working. Implicitly believing that mind-work can be done almost mindlessly, they find rules that impose mindfulness inappropriate.
Of course, people whose only ability is to plagiarize are not scarce.
“Detecting prose that is ChatGPT-spawned is usually not hard.”
If this is true, it won’t remain so for long, as AI is rapidly improving. But what’s the problem? If you want to ensure that students write their own essays, just sit them in a room, give them a subject to write about, and let them work on it without access to AI.
Transforming such homework into classroom examinations has multiple problems, all stemming from restrictions unrelated to what homework would develop and test. Students who could not schedule just that interval would be excluded; students who do their best work in separated bursts would be punished; students who do their best work in environments such as cafés would be punished; students who do their best work in dark isolation would be punished; essays that required fetching external resources would be excluded.
I never submitted the work of someone or something else as my own; but the system that you propose would have clobbered me.