Will we need to have the discipline to engage in 30 minutes of unassisted "thinking exercises" in the future, in the same way the invention of the engine and cars made it necessary for us to engage in 30 minutes of physical activity each day if we want to stay healthy and avoid muscle atrophy and weight gain?
That's a good analogy. Yes, I think that's likely. Although I kinda believe social media, video games, phones, etc. already force us to apply that discipline! AI will increase the need for it.
I have Crohn's disease, an incurable chronic health condition. Whilst it has been bleak, a silver lining is that it has forced me to explore my awareness of myself, my experience, and my own life - in many difficult but meaningful ways. Similar to Pattie's comment, it's as if we need these awareness-forcing functions that through necessity cause us to engage in deeper reflection. Maybe without the sheer difficulty of my experience I never would have woken up and realised I was merely existing and not living. This was a refreshing read Alberto, and one that swiftly and precisely cuts to the heart of all of our experiences (or lack thereof).
I 100% agree. I don't think taking away the struggle would make us as happy as some people think. We need "awareness-forcing functions" as you say (love that) to be alive. AI will help us directly in one sense (e.g. work faster); in another sense it's the catalyst for us to understand we have to engage with our lives with more intention, purpose, and determination. Hope you are well!!
I like this in theory. I rarely use AI to do anything but code, and I have learned so much through this method that would have taken me far longer to learn on my own. True, if I had spent all this time learning the old-fashioned way, or even the newer way of googling everything and reading books and articles, I would probably be more independent in a subset of the areas I’ve covered with AI, but what I have received in trade is an astonishing breadth that would have taken me an order of magnitude longer to achieve on my own. To be fair, I’m an engineer with a 25-year career building a lot of systems, so it could be argued that those neurons were already connected in me and I have been using AI to enhance those connections, not create them in the first place.
That's exactly right. You had those connections already. AI is an enhancer for you. That is, to me, its primary value.
Fascinating. The whole time until you got to it I was screaming inside, "what about moderation?" but of course that was your thesis all along, but applied to AI. Interestingly, I've been devouring Don Quixote for the first time, which is all about a guy who went mad from reading too much. In his case, too much of a particular genre, the stories of chivalric knights and their gallantry, until he found himself compelled to become their adventures himself.
I would say one should read much but across a great many types of literature, but in some ways that's exactly how AI trains. It might make you a good chatbot at parties, or a good go-to for advice for your friends and colleagues on a superficial level. My ADHD inclines me toward that path. Better would seem to be to read fairly deeply on a particular subject so that you become an expert system for responding to a particular type of prompt. More money in that at any rate.
Ultimately, I think you arrived at some very good advice. I've sometimes found myself thinking that AI is only valuable if I keep up with all the stuff that's written about it. That's why I subscribe to this newsletter! If I fall behind I'll just be like one of the great majority who knows just what they hear about it casually on social media or in the news, which is usually the worst of it. Perhaps I know and use it quite enough already, and I can get back to appreciating the miracle of consciousness while I have the colossal privilege of possessing it. OTOH - that miracle really makes me want to read more.
Hahaha exactly! The entire piece can be summarized in one word: moderation. Or perhaps restraint. I feel people panic too easily sometimes.
Great points. I’m especially challenged by “You can soothe your worries by having faith in people…, The fear that overwhelms you is not a testament to your humanitarian spirit but a sign of condescending distrust.”
Coupling your post here with my other favorite of yours on “solitude,” maybe we should seek out platforms that facilitate human-to-human discussion around specific texts, e.g., book clubs, Bible studies, Comic Con(?). Substack seems the best virtual alternative, but I believe we will be reading AI-generated text in the Comments section as well. But maybe reading with the intention to engage is the goal we are trying to elevate over learning for the sake of learning, so the make-up of the panel matters less.
"I believe we will be reading Ai generated text in the Comments section as well" I hope you're wrong but I know you're right. I just hope that we can keep truly human writing at the top - although I'm not even sure about that anymore!
This was a banger. A clear, certified reading-party slapper. However, I felt like it ended right before the drop. Hook hooked me like I was a hungry salmon. I ate it with ease. Chewing. Cheeeewing. I was ENGAGED. And then, man, the dream stopped. I would love to read more.
Regardless, it was refreshing to read this piece. I love how shamelessly you explained rhetorical devices. For some reason I studied rhetoric, and for some reason - I don’t remember sh*t.
Do you have any nice sources to remind myself how to use them? :)
Cheeeers and thank you for your effort to write down your thoughts. Inspired me.
Haha thank you!! I could have written more but I believe sometimes it's better to be short than overextend! I don't have any resources - but you can ask ChatGPT!
Alberto, it’s better to be short, but in your case, you got that dawg in you. 🐕
If no resources, lemme ask - how did you implement rhetoric in your writing?
Practice! And a lot of reading. I remember studying rhetorical devices long ago, and I kinda have a good memory, so that helps as well!
This really is poetry. Thank you so much!
Would you mind if I ask what it would take to be able to write like you?
It’s absolutely amazing that you are able to write so well even though English isn’t your first language.
Thank you!! I guess the easy answer is practice. And thoughtfulness - be thoughtful about how you write, not only what or why you write. Improve your method: think about what you can improve, what others do that you can't, etc.
Thank you so much!!
Thanks a lot! Did you refer to any books on writing? Curious how you first got started improving your writing haha
I've read a few but I'm not sure they were that important really (e.g. the famous ones like "On Writing Well" and "Reading Like a Writer"). I would read them anyway to have a minimum base of knowledge. But in general, just be observant and conscious about what you read - about how what you read is written. I've also watched plenty of interviews and read articles on famous writers talking about their craft, which helps to place things into a general writing framework. The most valuable, however, is a folder I've compiled myself called "Writing Craft," where I keep everything I've learned from doing the writing. I urge you to have a similar "commonplace book" about writing.
Great essay! I stumbled over the fact that we have apparently decided ChatGPT is a he... do we have a female LLM as well?
Hahaha that was an easter egg
Alberto, your sentences glide like a skater's blade on ice. Precise, graceful, leaving clean, brilliant lines. What a joy to read such finely crafted writing.
Thank you Pascal 🙏🏻🙏🏻🙏🏻
Very useful cautionary human-grounded insights, imo.
Cf. Iain McGilchrist's book 'The Sorcerer and His Apprentice', with insights along similar lines.
One part of your comparison that doesn’t hold up: I don’t look down on rhapsodes, in fact I’m admiring what they did, as I’m currently reading a snapshot of their work, the Ὀδύσσεια. Nor do I look down on the rabbis of the pre-Talmudic oral tradition, nor the storytellers and lawgivers of indigenous peoples around the world. Nor on people who use Braille or audiobooks. Their brain is engaged in all cases.
I will not be around to care about 3,000 CE. But I can tell you here and now that my students using AI (LLMs, really) in my courses are not engaging their brains in learning the course material, but only in how to cut corners and how to achieve plausible deniability about their LLM use (at the latter of which, they suck). Of course, POSSIBLY some are using their liberated time to learn, say, quantum optics (spoiler alert: not), but then why enroll in my courses, which are all electives?
Writing seems to have started as a niche app, for things like accounting. Maybe LLMs can serve as one, too. But as for an everything-everywhere-all-at-once approach, why tell those who advocate precaution that they’re wrong? How the Hades can you or any other pundit KNOW? Why shouldn’t we consider this post just a sermon evangelizing a quasi-religion?
Can I ask how old your students are? I feel like my argument applies mostly to adults because students at a certain age don't really want to learn that much. I mean, I believe AI is an emphasizer of existing preferences more than anything. If you want to learn, you'll learn 10x. If you don't, you have one more way to cheat. I don't think it changes much the existing equilibrium of forces between one type of student and the other.
Thanks. They're in their 20s (upper-level undergrads). By no means do all of my students use LLMs to pass off work as their own, but a growing minority has done so each term since Fall 2022. However, for the past 7 years I've been teaching in an econ/business faculty, and there has been a consistently higher base rate of cheating, even pre-LLM, than in my previous time in a law faculty. So maybe there is some sample bias.
I agree with you that there are students who are sincere about learning, and those who aren't. But these tendencies aren't binary, and I think there are some students who are kind of on the fence about yielding to temptation or not. What LLMs have done is to reduce the cost of crossing the fence, i.e. cheating. They achieve all the benefits of plagiarism without being as provable as plagiarism. So-called AI-detection software has too many false positives to be viable for academic discipline, if an instructor has a no-LLM policy on assignments.
My courses are in law, environmental ethics, technology & society, and what might be called a critical thinking course. So far, I haven't encountered any examples of course assignments that appear to have been enhanced in a good way by LLMs (apart from translation software, which is permitted: most of my students aren't native in English). Assignments I receive from my better students are almost always consistent with their classroom performance. OTOH, all students whom I've suspected of using LLMs for assignments, essays, etc. are unable to give an adequate explanation of what they wrote (much of which is wrong, due to hallucinations). So I'm not at all seeing the 10x-learning upside.
You mention your argument might be more applicable to adults: but there too I don't yet see the effects you propose. I speak as someone with fewer years ahead of me than behind me. E.g., I've seen enough errors every time I've used OpenAI or allowed my eyes to linger on a Google AI Summary to find that LLM use slows me down more than it speeds me up. (I'm agnostic about coding: I haven't yet used an LLM for that purpose.)
You may be right that we'll have this figured out by 3,000 CE, if some version of our civilization is still around at that time. But from a 2020s perspective, I think we may have gone a little too quickly with rolling this product out to the general public.
I think it's a profound mistake to frame any use of AI as "cheating" or "temptation" or "lack of sincerity". You can be an amazing student and a thorough learner using AI and reading books and mixing whatever it is that helps with your progress. The worst students - using AI or not - will be the worst students anyway. But OTOH the best students are all using AI - a student who isn't will soon fall behind because the value of stuff like Deep Research or even ChatGPT compounds over time (I'm of course not talking about letting them write for you or think for you; that's something to avoid). I use them all the time and I'm getting better at everything I do (including using AI). I'm simply very aware of finding the balance between using AI and using my brain so I don't let it deteriorate.
Thanks. Apologies for the numbering, as there are several distinct issues involved:
1. Let me clarify, as regards student/teacher matters, I'm not concerned about all forms of AI, but about LLMs. I have some concerns about other forms of AI, especially other GenAIs, in a social sense, but that's not what we've been discussing. Certainly I'd never say all uses of AI are bad, e.g. discriminative AI.
2. As regards student/teacher matters, there is an iceberg effect here (albeit not necessarily a 90-10 split). I only see the visible portion, i.e. work product that's submitted as assignments, exam answers, and presentations. And that's the main context in which I'm concerned with LLM usage. I of course can't attempt to limit what students use for research etc.
3. I haven't yet used LLM tools for research, but I don't see that being valuable other than for coming up with a list of references -- in essence Google with a more flexible query structure. From an LLM, I'd have to read each reference to make sure there aren't any hallucinations. Also, I'd have questions about whether the results reflect all, or the most important, references on a topic, whether they correctly classify the sources according to the positions they hold, etc. In short, I can't accept LLM output as reliable a priori, and I don't see how doing so can be justified, considering how LLMs work. OTOH, I'd have more faith in Google's output, because despite its limitations it has fewer hallucinations, if any, and therefore might require less time in the long run.
But as this relates to the submerged part of the iceberg, I'm more open-minded about this sort of use, esp. if LLM reliability is better than I expect.
4. Very little material in my courses is about imparting facts. Obviously there is some, such as basic vocabulary and definitions of concepts, etc. But mostly the course is about applying the concepts under conditions of extreme ambiguity: e.g., "What is the underlying ideology in this article from the Financial Times, and how is this illustrated by metaphors, narratives and framing used in the piece?", "Based on the facts above, how would you advise the legislature to act if they were to apply a biocentric deontological ethical point of view?", "Do the actions described above constitute a crime against humanity? Explain," etc.
It's possible, but in my view undesirable, to use an LLM to answer questions of this type. Also undesirable for students to prepare a script or slide pack for a presentation filled with words they can't define or conclusions they can't explain. Am I making a profound mistake in objecting to LLM use to generate these sorts of submitted work product?
5. I know some profs believe they should teach how to use AI in their courses, but I don't: I see my remit as getting students to, e.g., see for themselves why most treaties purporting to protect biodiversity fail, or understand the differences between categorical and propositional syllogisms, and the limitations of each. Am I on the wrong side of history here?
6. Finally, when I talk about "cheating," I mean subverting the fundamental purposes of the course, and/or being dishonest. Students using LLMs to replace thinking are doing the former, and students who deny using LLMs when clearly they have (as evidenced by wild hallucinatory stuff in their work product, false references, etc.) are doing the latter.
How to react to this is a separate matter: in practice, it means either grilling students whom I suspect of using LLMs about the substance of their work product, and giving them a lousy grade if they don't understand what they handed in, or just giving lousy work product a lousy grade. (Except that a false reference beyond a mere typo is a Fail.)
I understand that at most I can discourage LLM use in contexts where I believe it's inappropriate, not police it. Should I give up on that? Thanks again.
1. Discriminative AI can be worse than generative AI in many ways. The thing is that it's in fashion to go against the latter because it's newer. Anyway, when I say AI I use it to refer to whatever AI we're talking about. So LLMs in this case.
2. and 3. You approach this new technology with a deep bias; it's clear just from your wording: "I don't see that being valuable other than..." You probably just don't know, which is fine, but then your attitude should be one of open-mindedness, curiosity, and awareness that things change fast and priors should be updated, etc. (I'm glad you mention that in the end.) I understand: there are plenty of people calling out the flaws in LLMs, etc. It's worth revisiting those biases before taking a public stance. Just for example, Google (which you seem to trust more) has a different kind of bias, which is how it ranks webpages. That should be of *more*, not less, concern because it's absolutely opaque - Google makes reality. In contrast, LLMs give you the sources and then you *go to Google to check*. But interestingly, that's not a problem because Google is already deeply ingrained in the way we do things.
4. I agree with you. I'd do the same here. Students shouldn't be prohibited from using LLMs but encouraged to find the use cases that enhance their learning rather than hinder it. If they don't care either way, then they probably were never good students to begin with and shouldn't have much energy devoted to them. We should focus on the people who truly want to participate in this society. And that requires, as a prerequisite, being willing to make the effort to learn how the world works.
5. Encouraging students to learn about new technology is a good thing but perhaps not your job. I guess it depends on what you teach. And how. I won't go into that.
6. I believe this is the crux of the problem. Should you punish cheaters? Do you have the ability to catch cheaters without punishing, by mistake, an honest student? Are students using this gap in detection systems to bypass your vigilance? It's not easy. I wouldn't like to be in your shoes. My take on this is that teachers and professors should try to devote their energy to the people who want to make progress as students, whatever tools they use. I understand it's much easier said than done, but that's what I'd try to figure out. I think it's better to never give any attention to cheaters than to mistakenly punish an honest student. Those students should be cherished and protected and encouraged at all costs. The matter of grades is what makes it harder: should you give a good grade to someone you suspect of having cheated but couldn't catch? The answer is yes. Like you'd do normally, without AI. When I talk about "devoting energy" I mean things other than grades. I imagine in practice this is harder than I'm making it seem here. I'd love to hear your take on whether this might work or whether it's too much of a simplification. Thank you for engaging (and sorry for the harsh tone in some of my responses haha)
I think we agree. Punishment is not my calling in regard to teaching, and I certainly don't want to punish an honest student. The easiest way to avoid doing that is just to give a lousy evaluation to lousy work product across the board.
Even when one is sure that a student has used AI to subvert the purpose of the course, there can be diminishing returns to coming out directly and confronting the student with this. In the pre-AI era, I did have a few cases where students were sharing essays, or plagiarizing stuff from the Internet. This could be traced by the propagation of errors and distinctive diction through several essays, or by doing an Internet search on unusually eloquent language coming from a student with limited English abilities. Usually these led to tearful confessions.
In the AI era, there's lower cost for a student to stonewall and deny, deny, deny. (As well as some famous role models for this behavior.) So the best I can do is to let the student know that I know they don't understand what they've handed in. Even if the quality of the work is OK on its face in an isolated sense, if the student can't show they know what they're talking about, that's a bad evaluation. And of course, made-up references are simple dishonesty, so that's a pretty fast road to failure.
On your more general remarks, certainly I didn't mean to give discriminative AI a clean bill of health for everything, but the dangers are more application-dependent: credit or benefits decisions, some medical applications, etc. can be problematic; finding lost cities in the jungle or identifying rare bird species in a morning chorus are usually less so.
Same is true with LLMs, of course: maybe coding and finding references are more accurate (agnostic about this) -- but for the kinds of skills and concepts I'm teaching, their use for generation of final work product isn't appropriate.
Apropos of using LLMs to write, there was an apt and wonderful quote from the late novelist Tom Robbins (passed away recently at age 93) in today's obituary in the Guardian: "[L]anguage is not the frosting, it's the cake."
While many employers don't share Robbins's view, college education shouldn't be solely about employment; it's about life. At least, that's how I was taught, and how I try to teach (we call ourselves a liberal arts university). It seems like a mistake to abandon Robbins's insight just because Sam Altman made a unilateral decision in November 2022. Thanks for engaging, too, and no worries about the tone.
I was thinking something similar. Take Bob Dylan as an example: extremely well-read, he turned the ideas he'd ingested into a great abundance of poetry set to music, and is now one of the most appreciated and listened-to performers of all time. In music it's a truism that copying is bad but you have to steal. Our great ideas in science are built upon the foundations of all that came before. Should every scientist work from first principles and ignore the literature of their field? That way yields crackpots and con artists.
As with most things, a mix of abundant context and original thinking is the best way to progress in thought, art, literature, and science. Studying and utilizing AI should be no different.
Exactly. 100%. You should read a lot. You should think a lot. You should use AI a lot. You should also write a lot yourself (or do whatever it is that AI is doing for you). Otherwise you risk falling behind by not leveraging useful new stuff, or letting your inherent skills deteriorate by relying too much on external tools.
Some questions about the "falling behind" narrative: Should artists give up painting in oils? Drawing with charcoal? Making woodcuts? Should they all use AI in their work? Should pianists rely on AI to tell them where to use pedal, rubato, and pianissimo? Should they stop using their fingers altogether?
A slogan from one of the great technological thinkers from the 1960s is, "The medium is the message." What you say will be changed by the tools you use. Maybe there are those who prefer expressing themselves using older tools and techniques. What's wrong with that?
And what is the underlying race in which one is "falling behind"? Why assume there is urgency and competition in everything?
I think it was Newton who said “I stand on the shoulders of giants.”
The deeper I get into working with AI, the more I appreciate your writing! This post got me thinking about the people I grew up with (in the decidedly analog end of the 20th century) who actually dismissed the idea of reading books in general. “Get your head out of the books” was an actual saying. “If you’re so smart, why aren’t you rich?” was another… The same sensibility drives the current “AI makes you dumber…” I think the core of this is that people don’t understand or value their OWN intelligence enough to keep trying to improve it. It’s either that or there’s just a bunch of us who have suckered ourselves into thinking “Books is good.” ;-)