3 Powerful Strategies (Other Than AI Detectors) That Teachers Can Adopt to Adapt to Generative AI
Prepare for the upcoming school year by 1. taking a full-value attitude, 2. dodging problematic effects of AI, or 3. embracing AI-enabled teaching
The first strategy is adopting the right mindset toward what many teachers see as an avalanche of problems amplified, and even created, by generative AI. It's not an action in itself but an attitude. Whether teachers see AI as a curse or a blessing, a mental shift is required simply because a new element has entered the equation.
The other two strategies, although opposite in nature, require this attitude shift as a prerequisite. Whatever teachers choose to do—defend against AI or embrace its promises—acceptance that something big is happening is a must.
The second strategy is about dodging the effects of AI through small, targeted, adaptive changes. Teachers who believe current teaching methods must prevail, or who are unwilling to accept that new ones may take their place—which is a perfectly legitimate stance—can benefit from changes that are small enough not to entail an overhaul of the educational status quo but adaptive enough to stop AI from turning the classroom upside down.
The third strategy is about embracing; about engagement. Teachers who see themselves as pioneers and early adopters of the new thing, be it the personal computer in the 80s, the internet in the 90s, or generative AI today, can approach AI as an enabler of new teaching—and learning—modes.
(This article is the second part of a two-part article on generative AI and education. In the first part, I answered two questions: why teachers shouldn’t use AI detectors and why they don’t work.)
Here are the three strategies and how teachers can adopt them.
1. The power of a full-value attitude
AI is, like calculators before it, and for better or worse, a disruptor of education.
This is bad only if teachers remain attached to current ways of teaching. If they accept that some as-yet-indeterminate changes will be mandatory in the near future due to AI, then a better approach is, instead of changing as little as possible from what exists today, making as many changes as necessary for AI to be useful rather than a hindrance.
The importance of this mental shift is that it applies just as well to teachers who despise generative AI as to those who think it can’t enter the classroom fast enough. This new thing—maybe bad, maybe good—is coming to change the status quo, so let’s flip it on its head so that, directly or not, it inevitably becomes a good thing.
For example, in-person essay writing is an important deviation from how things are done now, but if we accept that take-home essays are done, it may be our best chance at turning what appears to be a bad thing (no more home-written essays) into a good thing. By writing essays in person, students would get deeper, richer feedback throughout the process: ideation, outline, drafting, and finished piece. It also benefits the teacher, who would now be able to find out where a student’s weaknesses lie, on top of preventing the problems that come with take-home essays, like ChatGPT-enhanced cheating.
These hidden benefits that emerge under the new scenario, unavailable with current teaching methods, are what a full-value attitude provides. The only questions you need to ask are these: What can I get out of this new situation that I couldn’t before? Given that I’m forced by external circumstances to do it this way, what’s the route I should take to not only compensate for the loss but create a net win?
That’s the basis of a full-value mindset: get the most out of situations you didn’t want, didn’t expect, and wouldn’t freely choose if you could, but have no choice but to accept.
Once teachers adopt this attitude, they need to decide whether they will fight or embrace AI. I won’t judge which approach is better but will instead provide a valuable strategy for each case.
2. Dodging AI’s effects with small, targeted, adaptive changes
Teachers can’t really fight AI except by urging regulation (which is a valid position but requires more power than any one teacher has individually, so it’s out of the scope of this article). And they can’t really defend themselves from AI because detectors don’t work.
The only chance at stopping AI from turning their world upside down (that doesn’t entail embracing it) is avoiding the effects of its existence, i.e., changing their teaching methods just enough for them to take a shape that perfectly dodges AI’s bullets.
For instance, not using AI detectors is necessary because it's the only way for teachers to be sure they won’t ever falsely accuse innocent students, but there’s an effect that makes this approach insufficient: AI cheaters will surpass honest students in a competition (i.e., the education system) that way too often takes the form of zero-sum games.
Let’s say a teacher asks their students to do a writing task without using AI. One student cheats and, because the teacher read the first part of my article and decided not to use detectors, gets an A+. Another, who did the task as intended, gets a B-. In the absence of a detector, both works are assumed to be fairly produced, which benefits the cheating student over the honest one. Sure, the learning progress of the B- student will be superior because they’re actually doing the task, but the real world isn’t fair. If the cheater graduates with a better grade, regardless of their cheating and lack of real expertise, they will have better opportunities and prospects to find a spot at a top university or company.
If a teacher wants to prevent this from happening but is unwilling to embrace and allow AI in the classroom, small, targeted, and customized changes are the answer.
One possibility that works for the example I described in the first section—one that admittedly requires more work from the teacher—is changing the grading procedures.
Let’s say that instead of grading only the finished essay, the teacher decides to grade every step of the process (which is already taking place under the full-value attitude strategy). As an idea, every student could get a mark for the ideation, the outline, the first and second drafts, and the finished piece. They could also get a mark for explaining each of those stages: only a student who did the work will be able to defend every decision at the level of words, sentences, and paragraphs; structure and style; what was included and what was left out; and so on.
This is just an example. The essence of this strategy is to implement changes that would solve the most pressing effects that generative AI will bring without having to resort to systemic regulation, AI detectors, a complete overhaul of the education system, or grudgingly accepting that AI is an inevitability.
3. Embracing AI-enabled teaching and learning modes
But some teachers will happily accept the inevitability of AI. Those pioneers and early adopters will possibly establish the new normal. This is, without taking sides, as a mere subjective observation, the only strategy that I think will remain effective in the long term.
By integrating AI writing tools in the classroom, students can learn to use a technology that may prove fundamental for learning in the future, and they won’t be able to cheat (a method used today that resembles this approach is the everything-is-allowed test, famous for being frighteningly hard and uncheatable; using AI positively would allow for a similar anti-cheating mechanism).
Ethan Mollick is probably the best-known, if not the clearest, example of this approach. He’s sometimes too enthusiastic and overoptimistic for my taste, but his willingness to embrace AI as a learning tool rather than fight or resist it is a model I’d look up to if I were a teacher. Just like I would have loved it if my teachers had encouraged me to capitalize on the infinite value of the internet from very early on.
Right after ChatGPT was released, Mollick co-authored a paper entitled “New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments.” He writes:
“Chatbots are able to produce high-quality, sophisticated text in natural language. The authors of this paper believe that AI can be used to overcome three barriers to learning in the classroom: improving transfer [how to apply a skill learned in class, outside the class], breaking the illusion of explanatory depth [how to not confuse deep for shallow understanding], and training students to critically evaluate explanations [how to learn by teaching].”
They then provide information and techniques that illustrate how AI can help with these problems. They include prompts and assignments. “The goal is to help teachers use the capabilities and drawbacks of AI to improve learning.”
This approach, which clearly emerges from a full-value mentality and the belief that AI is a good thing, is a powerful way to prevent misuse through positive reinforcement and, more importantly, to prepare students for a future that is likely to come sooner rather than later.
Addendum: The trap of AI detectors
In case I failed to convince you and you still plan to use AI detectors, let me briefly recap the first part of the article on why they are neither a valid nor a valuable strategy to adopt, but merely a non-strategy to avoid.
On the one hand, the appearance of reliability and infallibility that AI detectors project can be counterproductive; teachers everywhere may be willing to take their chances with detectors to avoid being tricked, under the illusion that they’re almost as reliable as the old Turnitin was. In reality, detectors produce many false positives because AI-generated text is fundamentally indistinguishable from human output (even if in most cases there are clues that give away the provenance of the text).
On the other hand, teachers should prioritize not punishing innocent students over catching cheaters. One honest, eager student is a treasure; caring for them is worth infinitely more than trying to prevent hundreds of indifferent ones from cheating with AI (or otherwise). If a teacher has to risk unjustly stripping a student of their integrity, dedication, and hard work in a surely unfruitful quest to catch AI cheaters, let me tell you—it is not worth it.
Teachers: Do it for your students. It’s a hard blow to go through this as an honest student willing to do things the right way while being surrounded by appealing incentives not to. And do it for yourself. This is a hot topic—if you falsely accuse a student, don’t be surprised if a major news outlet picks up the story and it escalates to the point of threatening your job.
Anyway, I hope the three strategies I laid out above have already dissuaded you from turning to AI detectors as a tool to save you from the storm awaiting at the other end of summer. Even if you don’t like AI and don’t want to embrace it as Mollick does, small protective changes here and there—despite entailing more work on your part—are way better than relying on a faulty and flawed detector. As author Janelle Shane says, “don’t use AI detectors for anything important,” and I dare say this is very important.
Fantastic, thanks for another great post, Alberto.
One piece that teachers and administrators NEED before the 2023-2024 academic year is clear guidelines from the style gurus at APA, MLA, Chicago, etc. about how to cite/attribute AI-generated work. There are some initial blog posts on the topic, but we need definitive guidelines so we can approach GenAI work with full transparency: "so you used GenAI, this is how you cite it".
An interesting argument in this arena is what is being cited -- there is no 'person' to attribute a ChatGPT response to, and no way for a reader to go and find and check the response even if a citation is given. Attributing text to the LLM seems reasonable, but there is little precedent for attributing original work to a non-human entity.
As a baseline, one compelling idea is to start requiring student essays to include appendices of GenAI prompts used and the text responses received back.
Would love your thoughts on this in a future post, Alberto. And if anyone in the reading community has working attribution guidelines that students could use in essays, please share.
One of the most exhausting parts of dealing with chatbot-enabled or chatbot-produced papers will be looking for all the hallucinated/delusional claims. I will have to get out the text and find all the quotes that are fabricated, etc. if I want to show the chatbot produced the paper (or even to fairly grade the paper, since a student can just throw in some BS like ‘on page 57 the author writes...’ and it’ll be all fabricated).
Almost every encounter I have with a chatbot to see what it can do with my research area is peppered with complete fabrications. So it’s easy to ‘detect’ but will be exhausting, as I don’t generally fact-check every citation; I read for content. Now I have to be looking at the text and asking ‘is that quote in there?’ Usually I will be able to tell but...
So I am trying to think of a totally different way to teach. Not so much that I am obsessed with students cheating or catching them -- but this is going to drive me crazy.
I can’t tell you how depressing it is for me when students plagiarize...it just crushes me somehow. Like YOU COULD HAVE WRITTEN IT! Why???? This is like offering candy laced with heroin to a certain kind of student. Also, it creates a horrible narrative where the student might think ‘why do I have to learn how to write now? Machines will do this for me’ without realizing that the point of ‘learning to write’ is ‘learning to think,’ and machines don’t do THAT for you, one hopes. And if they DO do that for us, then what is the point of us?
I will figure out a way to do it. If the class is one of those collaborative classes and the vibe is right, I will get them to help me figure it out, and discuss my strategy for chatbot avoidance with them.