A student just cited the magic toaster as a source in her essay.
Explanation: I am a visual thinker; AI is nebulous, so I envision it as a magic toaster. It’s not a great toaster, as it routinely makes shit up (aka hallucinating), cites things published in predatory journals, etc. It’s also a terrible writer.
I spend a lot of time telling my students why I don’t want them to use it. I have no interest in what the toaster thinks, and I shouldn’t spend the too-little time I have on this planet commenting on its nonsense. I also ask them what will happen when I give AI a bad grade: will they go to the toaster and explain that I hated that introduction?
Despite this, I’ve seen an explosion of AI use, and now I’m spending more time turning people in to SJA than in trying to warn them.
I have an online freshman course at SCC, and it’s been most destructive there. One homework assignment was to gear us up for an analysis of a film of their choosing: the students will argue whether the film ultimately upholds traditional gender roles and stereotypes or subverts them.
The assignment asked for a one-paragraph summary of the film and a one-paragraph explanation of why it would be a good fit for this assignment.
A third of the students had AI write those paragraphs. How could I tell? The paragraphs didn’t sound like any other writing the students had done, and they all sounded the same. Each assignment ended, for example, with AI saying the film would make a “nuanced case study …”
None of the students denied using AI. And none of them apologized for it.
Several of them later turned in drafts written by AI; I wrote them all notes about how they were going to fail the assignment. I also told them all I would be running each essay through an AI detector, and stressed that AI should not be used on this essay, other than for grammar/spelling.
As I was glancing through the essays yesterday morning, my heart dropped. Students were required to use three secondary sources. One had AI as her third source. “According to AI, Moana is a movie about . . .” The Works Cited page entry was “AI. Google.”
I emailed the student, who said she remembered me saying they could use AI if they cited it.
Here’s what the syllabus says: “… You may use Grammarly and other editing programs to identify and fix typos, spelling errors, punctuation, and sentence errors. You may not use these editors to add new words, sentences, or ideas. I’m fine with you using [AI] to brainstorm and to edit/proofread (as long as you cite and talk about it in the memo). What’s not okay: letting AI write a draft for you. If you can point to a sentence in your paper and say, ‘AI wrote that part,’ then something’s wrong.”
I have also done extensive source work with the students, going over reliable sources and how to find them. I have stressed that AI is unreliable, and I have forbidden students to use sources with no authors and cheat sites. AI, in this context, is a combo of both.
AI isn’t an expert on Moana, I explained to the student; it hasn’t actually seen the film.
My last message to the students before the paper was due said: “Don’t use one of the four forbidden sources. Don’t use AI.”
The only comfort I have is reminding myself that the student didn’t watch most of the videos and didn’t do all of the homework on sources, but I still feel like a new line has been crossed.
Being an ignored authority figure looks frustratingly like the worst of both worlds. Your often thankless task of preparing folks for an AI-centric future sounds really frustrating. But your genuine passion for your work and for your students shines through the BS and provides hope. Thank you for doing what you do!
A small tangent: Some people dislike Google’s tendency to put AI-generated content first among the list of search results. One of those people told me today that this AI result can be excluded by adding the term “-AI” to the search. This technique won’t help prevent academic dishonesty, but it might bring some small satisfaction here and there.