"At the time, I was constantly reprimanded for it; didn’t feel like a higher practice at all.) When I was not having fun with my friends in class—also reprimand-worthy behavior—I used to spend the hours looking through the window, contemplating the world beyond the sill, the trees, and the distant buildings, and projecting onto the shape-shifting clouds the thoughts that were otherwise encapsulated, trapped within those four walls and the shrunken minds dwelling inside.
And I turned out fine. I think"
Been there, done that! Glad to know I am not the only one!
I appreciate your column. I had D average in high school. At 15, I was too busy reading Bertrand Russell & Thomas Kuhn abt modern science. First 2 years college=1.9 GPA. Second 2 years= 3.9. (Learned I loved to learn!)
Got Ph.D in Comm Research in 1987.
As a writer I use AI for research sources and writing draft scenes and notes a speaking styles. For me, it’s useful to read its suggestions but cannot provide finished material…at least not yet.
I already have a Masters degree from 10 years ago. Now I am studying a postgraduate course in another discipline. AI is definitely a positive tool to help me learn and to help me complete the assignments. It’s a productivity enhancing tool, which allows me to finish an assignment in less than half the time it would otherwise have taken. I do wonder what university studies will be like for my daughter when she gets there and how the tools available in 15 years time will be different
I'm a current student, so I've found your advice particularly profound, but inspiring.
AI is first of all, I—Intelligent. Whatever ChatGPT can do, a human can do too. Is there one human who will beat ChatGPT at everything? Absolutely not, but if I wanted someone to teach me about Software Engineering, I can bring up ChatGPT, or I could go to my friend in Software Engineering and he can offer a similar service.
ChatGPT is nothing new. If I wanted someone to do my assignment for me, I could just ask the smart kid to do it. I could search up the assignment on the internet.
So the principle of offloading your assignments, your learning, to something else isn't new. But it is a lot easier. If I looked it up on the internet, I'd know the info is sketchy, so I'd have to look it over, review it, reflect based on my own knowledge and adjust accordingly. With AI, it's good enough that I don't "really" have to check. (AI Hallucinations exist, but at the undergraduate level, most AI are serviceable). Then, we have a grave failure in the system.
I loved the point you made about allowing failure—allowing someone to create their own path, but without failure—without feedback, we're sleepwalking through life.
Using AI for assignments is obviously wrong, at the very least because it's dishonest, but if we've reached a point where AI can consistently do the work, then what's the point of learning the content? Maybe we should adjust the curricula to allow failure even with AI. Perhaps we can ask more ambitious questions? Instead of a single worksheet, we could ask someone to think of a larger system—using AI to solve debugging problems, but larger architectural decisions need to be made by you. Maybe we need to ask more of our students to the point where they do fail, at the frontier of AI's capabilities, and allow an opportunity for students to grow and develop. Some will fail, some will succeed, but like you said, we don't have minute, atomic control over everyone, nor should we want it.
I appreciate your fairly thoughtful and insightful notes. Thinking about berkley comment at the start gets me to 2 points
1- how much of academia is still about the ability to hold and regurgitate information versus understanding the basic concepts and critically think? I'm not sure failing is indicative of people who not understanding the basic concepts in the course, but rather a poor measurement system (grading)
2- Is this pointing to an increased divergence between the system that trains technical people on knowledge versus the needs of industry for those technical people to synthesize and integrate often disparate areas of knowledge?
AI is clearly putting a finger on the scale here, but it is not obvious to me which way it is tipping the scales. Thanks for sharing your thoughts and the post!
The problem isn't the use or abuse of AI, nor is it laziness enabling. AI might be useful for some tasks for nerds. And AI can be useful for tasks humans can't do.
That being said, for everything else, and the vast majority of people, the more you use AI, just like GPS maps for driving, the more you forget how to do things, and think, without.
And it's only going to get worse. And given the commercial interests behind it, we're on a one way highway in an unstoppable carcass of a civilization caught up in its own hubris.
And not to go all woo science-fiction, but there's anecdotal evidence of weird AI, from hallucinations all the way to the spiritual realm.
Nice write up, although the rant did go a bit off topic. But I do agree that being a good student at school or uni doesn't make a person intelligent.
You expect students to realize something their own teachers don't know, and more importantly, can't do anything about? I want to be optimistic, and I think posts like yours and conversations like ours can help, but it might be too little too late. It's like smartphones and social media and their use by kids and adolescents. The evidence of its destructive effects are all around us, and yet...
I don't think it's like smartphones and social media *at all*. Drawing that analogy does more harm than good. AI is more like the internet, a balanced double edged sword. Social media has little educational value (although there's some).
Do I expect students to figure out something their teachers don't? Yes. As it has been forever and will forever be. Students are better children of their time than their teachers, incapable of keeping up with the times and constrained by incentives that not necessarily align with the students (e.g. follow the unidimensional road instead of free exploration according to one's natural curiosities)
If by “portal to higher dimensions” we mean “a fast track to life as a thief, a cheat and a liar”…then yes. AI is certainly that portal.
The wretchedness of these pro-ChatGPT arguments always make me ill. Having a robot do your homework is exactly as good for your mind as having a robot do your workout would be for your body.
Sometimes doing the work yourself. Is. The whole. Fucking. Point. Whether you succeed or fail in your homework tasks is almost irrelevant; what matters is that mental exercise changes your brain. Real learning literally creates new pathways and connections in the brain.
The absence of those new pathways is what is revealed by those abysmal test scores. What it reveals is that an entire classroom full of people had the robot lift weights for them-and according to recent studies it is arguably the case that after a semester of using an AI in school, these kids are actually, measurable weaker in every cognitive capacity that can be objectively measured.
I'm not advocating for what you say. But for freedom and curiosity and independent decision making on the student's part. AI is not good per se. It's a double-edged sword. There are better and worse ways to use it. Let students be and make mistakes and whatnot instead of imposing your point of view on them. They don't want to be patronized. And you shouldn't. If students choose the "wrong" path at least they chose themselves. If AI is a catalyst of character (which I still think it is) then those students didn't have the character to begin with. Let others be the giants of the future.
During the Y2K remediation scramble we scrambled through, I was on a gig out a bank that used AIX for Unix, and being the only dev with any network experience, I was hand a project to provision the servers, install SWIFT and write a scheduling system. I had some unix, was mostly Windows, Novell but was told to just do it.
One of the admins came by to help me with an issue, and he saw that I was using Notepad to edit script files. He was a grizzled, older Unix guy, lots of experience.
"Dude, learn vi!"
"Why - I'm faster on my laptop and I have a deadline."
"Because there will be a time, maybe at 2 am, where you won't have Notepad. And you have to get the job done."
He was right. Over reliance on a crutch never makes you Bond, Neo, doesn't even make you cool.
"[A]verage is not a perfect metric. The best students could be doing better than ever yet the worst doing so badly that they’d be driving the average down anyway."
You're absolutely right. We profs don't understand this. And we don't know anything about outliers, medians, etc. Esp. those who teach STEM topics. Our bad.
Until you teach, especially undergraduates, you will not have any idea of how much effort a sincere prof puts into their teaching. Nor, speaking in general, how painful it is to just "let [students] be": your job is to help improve their lives and insightfulness in some way. This is not to deny that there are some individual students one wants to see the back of ASAP -- but even with those individuals I think most teachers will make some effort before giving up. Every colleague of mine at my university, including those who teach CS courses about AI, is disturbed about students offloading learning to LLMs.
I understand the disturbance. What I see, from my long experience as a student who has had many teachers (is not that I don't know what I'm talking about), is that teachers think they know more than students. They always think that. They don't always do (basically because students won't tell you most of the time, and because you're one for so many at a time). Would you be willing to consider that some people may be using AI in ways that's enhancing their education? Or that's not a possibility? Well, if that is a possibility - that's what I'm talking about. I'm not saying "sit down as you watch how they destroy their future." I'm saying "let go of your control cravings." It's a hard exercise but one worth doing. For your own sanity with the times that are coming. Also, like a parent, a teacher should let students make mistakes. You can't learn for them what you're protecting them from learning.
Alright. You appear to have some issues with teachers. You cannot recognize your condescension toward teachers, both in your remarks about lack of understanding of averages, and in your generalization about teachers “always” thinking that they know more than their students. And this one: “I'm saying ‘let go of your control cravings.’ It's a hard exercise but one worth doing”: a little patronizing, maybe? Are these arguments where your emotion has NOT overridden your reason?
You also fail to consider that most teachers have long experience as a student as well. We've experienced both roles in the classroom, which you have not yet done. I grant that many teachers forget their student experience, and/or hate having to teach. Neither of these happens to be true in my case. One of the reasons I went into teaching in my late 50s was specifically to learn from how younger people think — their ideas and processes. And in my undergrad days, I sought shortcuts -- especially in STEM courses -- just as many of my students do now. So I understand that POV.
I don’t have any illusion that I can police LLM use by my students. But I teach small classes of 8-20 students, and I can see individually which students are engaged and which ones are dialing it in, or less. The proportion of the latter has increased since the introduction of LLMs to the public. I don’t rely on ”eyeballing” their written work: I know that my eye is not infallible, and I know the risks of wrongly accusing someone of using AI to write an assignment. So how can I suspect that many students "zone out" and rely on LLMs to get them through the course? Because many of my students, especially the ones I suspect most to have relied on AI, never can give adequate answers to oral follow-up questions about the subject matter of their written work product. My colleagues have similar experiences.
I not only see less command over subject matter, but less ambition. I allow my students to choose more difficult tranches of questions on their final oral exam, if they want a better grade in the course: "Economy Class" (pass), "Business Class" (up to B+), and "First Class" (A-series). During the pandemic years I had many students try for Business and First Class. Since ChatGPT, even my best students rarely try even for Business Class. Correlation isn't causation, but I suspect a lack of confidence, since no LLM is available during the oral exam.
You claim to be using AI to enhance your education. Let’s assume you really are, and other students are as well. That isn't what I'm seeing, based on the evidence mentioned above. So at best we have a half-empty, half-full situation (though I think 50% enhancing their education through LLMs may turn out to be an overestimate during these early years). As a promoter of LLM use, you emphasize the full. As teachers of students who use LLMs to shut down their thinking, my colleagues and I focus on the empty.
My frustration with this behavior isn’t because I can’t control it. I accept that I can’t. It’s because of the impending tragedy for the world when so many people surrender their thinking to the very non-convivial tools that are LLMs. (A world, BTW, that I might not be too dead to experience.) All the more so because these tools are run by oligarchs who believe in political expedience rather than justice.
We already have seen how a Chinese LLM will refuse to divert from the CCP line regarding certain sensitive topics, if it entertains them at all. And we’ve seen how some US social media platforms have modified their processes to conform to the tilt of the current US regime. Check LinkedIn, and see all the people oohing and aahing in the “likes” about increasingly realistic GenAI deepfake videos; very few comments express any anxiety about misuse for political purposes — even though Elon Musk doing various down-to-earth, likable things is a favorite trope of some of them.
When LLMs obey dictators, are they still as beneficent as you think? This is daily becoming a practical, not a theoretical, question.
Please consider that SOME teachers may know better than students about SOME things, because of longer and more varied life experience. Autocracy may be new to many people in their 30s and younger, but many older people have a better idea of what it means, and how it can persist. It's the desire to avoid an LLM-facilitated sleepwalk into tyranny, and not a desire to control, that explains why at least some of us want students to wake up and think.
I'll keep this brief. The fact that you interpreted my comment about averages as a dunk—rather than a clarification for readers who might not know the nuance (not everyone here is a teacher, let alone a STEM teacher)—says quite a bit. It seems you've assumed bad intent on my part, likely because this topic strikes a personal chord. If so, I’m sorry. That said, I stand by everything I wrote. (And for the record, I’ve had great teachers in my life; I hold no grudge against the profession.) Wishing you the best in your teaching.
I interpreted it as a dunk, because your comment took it for granted that the prof who commented on the lowered average in his class made an elementary mistake about means, and would not have already discounted situations involving outliers or wide dispersion had such been apposite. I’d have given him the benefit of the doubt, that he wouldn’t have reported the reduction in average if the data had been so obviously biased. You also emphasized that all teachers believe they know more than their students.
I don’t know why you assume that the means issue strikes a personal chord with me; that is simply a gratuitous insult.
You're so fixated on the average that you're missing the point. I didn’t write that to dunk on Jain or any other teacher. I’m sure he understands averages and outliers. My comment wasn’t about him, but about the broader issue: there’s far more uncertainty around AI’s impact on education than posts like these suggest. I also used it to introduce the idea of a “catalyst of character.” Also, yes, I think "teachers believe they know more than their students."
"At the time, I was constantly reprimanded for it; didn’t feel like a higher practice at all.) When I was not having fun with my friends in class—also reprimand-worthy behavior—I used to spend the hours looking through the window, contemplating the world beyond the sill, the trees, and the distant buildings, and projecting onto the shape-shifting clouds the thoughts that were otherwise encapsulated, trapped within those four walls and the shrunken minds dwelling inside.
And I turned out fine. I think"
Been there, done that! Glad to know I am not the only one!
We're not alone! And sure enough (something I didn't want to mention in the article), I got straight As throughout my high school years.
I appreciate your column. I had a D average in high school. At 15, I was too busy reading Bertrand Russell & Thomas Kuhn about modern science. First two years of college = 1.9 GPA. Second two years = 3.9. (Learned I loved to learn!)
Got a Ph.D. in Comm Research in 1987.
As a writer, I use AI for research sources, drafting scenes, and notes on speaking styles. For me, it's useful to read its suggestions, but it cannot provide finished material… at least not yet.
Exactly. Such a unique path. Unrepeatable. Unpredictable. Thanks JM!
I already have a Master's degree from 10 years ago. Now I am studying a postgraduate course in another discipline. AI is definitely a positive tool to help me learn and to help me complete the assignments. It's a productivity-enhancing tool, which allows me to finish an assignment in less than half the time it would otherwise have taken. I do wonder what university studies will be like for my daughter when she gets there, and how the tools available in 15 years' time will be different.
Good question. Impossible to answer!! Glad you've found a balanced tradeoff between using AI and learning yourself. That's the correct attitude.
I'm a current student, so I've found your advice particularly profound and inspiring.
AI is, first of all, the "I": Intelligent. Whatever ChatGPT can do, a human can do too. Is there one human who will beat ChatGPT at everything? Absolutely not, but if I wanted someone to teach me about Software Engineering, I could bring up ChatGPT, or I could go to my friend in Software Engineering and he could offer a similar service.
ChatGPT is nothing new. If I wanted someone to do my assignment for me, I could just ask the smart kid to do it. Or I could look up the assignment on the internet.
So the principle of offloading your assignments, your learning, to something else isn't new. But it has become a lot easier. If I looked something up on the internet, I'd know the information might be sketchy, so I'd have to look it over, review it, reflect on it against my own knowledge, and adjust accordingly. With AI, the output is good enough that I don't "really" have to check. (AI hallucinations exist, but at the undergraduate level, most AI is serviceable.) That's where we have a grave failure in the system.
I loved the point you made about allowing failure, about letting someone create their own path. Without failure, without feedback, we're sleepwalking through life.
Using AI for assignments is obviously wrong, at the very least because it's dishonest, but if we've reached a point where AI can consistently do the work, then what's the point of learning the content? Maybe we should adjust the curricula to allow failure even with AI. Perhaps we can ask more ambitious questions? Instead of a single worksheet, we could ask students to think through a larger system: use AI to solve the debugging problems, but make the larger architectural decisions yourself. Maybe we need to ask more of our students, to the point where they do fail, at the frontier of AI's capabilities, and allow an opportunity for students to grow and develop. Some will fail, some will succeed, but like you said, we don't have minute, atomic control over everyone, nor should we want it.
Thank you for these reflections, Wasay!
I appreciate your fairly thoughtful and insightful notes. Thinking about the Berkeley comment at the start gets me to two points:
1- How much of academia is still about the ability to hold and regurgitate information, versus understanding the basic concepts and thinking critically? I'm not sure failing is indicative of people not understanding the basic concepts in the course; it may rather reflect a poor measurement system (grading).
2- Is this pointing to an increasing divergence between the system that trains technical people on knowledge and the needs of industry for those technical people to synthesize and integrate often disparate areas of knowledge?
AI is clearly putting a finger on the scale here, but it is not obvious to me which way it is tipping. Thanks for sharing your thoughts and the post!
The problem isn't the use or abuse of AI, nor that it enables laziness. AI might be useful for some tasks for nerds, and it can be useful for tasks humans can't do.
That being said, for everything else, and for the vast majority of people, the more you use AI, the more you forget how to do things, and how to think, without it; just like GPS maps for driving.
And it's only going to get worse. Given the commercial interests behind it, we're on a one-way highway in an unstoppable carcass of a civilization caught up in its own hubris.
And not to go all woo science fiction, but there's anecdotal evidence of AI weirdness, from hallucinations all the way to the spiritual realm.
Nice write-up, although the rant did go a bit off-topic. But I do agree that being a good student at school or uni doesn't make a person intelligent.
Thanks. I agree. We should avoid over-reliance on AI. Nothing good will come of that. Hope the students who matter realize that.
You expect students to realize something their own teachers don't know, and more importantly, can't do anything about? I want to be optimistic, and I think posts like yours and conversations like ours can help, but it might be too little too late. It's like smartphones and social media and their use by kids and adolescents. The evidence of their destructive effects is all around us, and yet...
I don't think it's like smartphones and social media *at all*. Drawing that analogy does more harm than good. AI is more like the internet, a balanced double-edged sword. Social media has little educational value (although there's some).
Do I expect students to figure out something their teachers don't? Yes. As it has been forever and will forever be. Students are better children of their time than their teachers, who are incapable of keeping up with the times and constrained by incentives that don't necessarily align with the students' (e.g., following the unidimensional road instead of free exploration according to one's natural curiosities).
If by “portal to higher dimensions” we mean “a fast track to life as a thief, a cheat and a liar”…then yes. AI is certainly that portal.
The wretchedness of these pro-ChatGPT arguments always makes me ill. Having a robot do your homework is exactly as good for your mind as having a robot do your workout would be for your body.
Sometimes doing the work yourself. Is. The whole. Fucking. Point. Whether you succeed or fail in your homework tasks is almost irrelevant; what matters is that mental exercise changes your brain. Real learning literally creates new pathways and connections in the brain.
The absence of those new pathways is what those abysmal test scores reveal: an entire classroom full of people had the robot lift the weights for them. And according to recent studies, it is arguably the case that after a semester of using AI in school, these kids are actually, measurably, weaker in every cognitive capacity that can be objectively measured.
I'm not advocating for what you say, but for freedom, curiosity, and independent decision-making on the student's part. AI is not good per se. It's a double-edged sword. There are better and worse ways to use it. Let students be, let them make mistakes and whatnot, instead of imposing your point of view on them. They don't want to be patronized, and you shouldn't patronize them. If students choose the "wrong" path, at least they chose it themselves. If AI is a catalyst of character (which I still think it is), then those students didn't have the character to begin with. Let others be the giants of the future.
E.g., the electronic calculator enhanced great students but made poor students worse.
During the Y2K remediation scramble, I was on a gig at a bank that used AIX for Unix, and being the only dev with any network experience, I was handed a project to provision the servers, install SWIFT, and write a scheduling system. I had some Unix, but I was mostly a Windows and Novell guy, and I was told to just do it.
One of the admins came by to help me with an issue, and he saw that I was using Notepad to edit script files. He was a grizzled, older Unix guy, lots of experience.
"Dude, learn vi!"
"Why - I'm faster on my laptop and I have a deadline."
"Because there will be a time, maybe at 2 am, where you won't have Notepad. And you have to get the job done."
He was right. Over-reliance on a crutch never makes you Bond or Neo; it doesn't even make you cool.
It makes you dependent.
“Wildflowers spring from good and bad intentions alike.”
So, apparently, do paperclips.
"[A]verage is not a perfect metric. The best students could be doing better than ever yet the worst doing so badly that they’d be driving the average down anyway."
You're absolutely right. We profs don't understand this. And we don't know anything about outliers, medians, etc. Esp. those who teach STEM topics. Our bad.
Until you teach, especially undergraduates, you will not have any idea of how much effort a sincere prof puts into their teaching. Nor, speaking in general, how painful it is to just "let [students] be": your job is to help improve their lives and insightfulness in some way. This is not to deny that there are some individual students one wants to see the back of ASAP -- but even with those individuals I think most teachers will make some effort before giving up. Every colleague of mine at my university, including those who teach CS courses about AI, is disturbed by students offloading learning to LLMs.
I understand the disturbance. What I see, from my long experience as a student who has had many teachers (it's not that I don't know what I'm talking about), is that teachers think they know more than students. They always think that. It isn't always true (basically because students won't tell you most of the time, and because you're one for so many at a time). Would you be willing to consider that some people may be using AI in ways that enhance their education? Or is that not a possibility? Well, if that is a possibility - that's what I'm talking about. I'm not saying "sit down as you watch how they destroy their future." I'm saying "let go of your control cravings." It's a hard exercise, but one worth doing. For your own sanity with the times that are coming. Also, like a parent, a teacher should let students make mistakes. You can't learn for them what you're protecting them from learning.
Also, I won’t reply next time if you open with that kind of snarky tone. I’m all for a good debate, but not when emotion overrides reason.
Alright. You appear to have some issues with teachers. You cannot recognize your condescension toward teachers, both in your remarks about lack of understanding of averages, and in your generalization about teachers “always” thinking that they know more than their students. And this one: “I'm saying ‘let go of your control cravings.’ It's a hard exercise but one worth doing”: a little patronizing, maybe? Are these arguments where your emotion has NOT overridden your reason?
You also fail to consider that most teachers have long experience as a student as well. We've experienced both roles in the classroom, which you have not yet done. I grant that many teachers forget their student experience, and/or hate having to teach. Neither of these happens to be true in my case. One of the reasons I went into teaching in my late 50s was specifically to learn from how younger people think — their ideas and processes. And in my undergrad days, I sought shortcuts -- especially in STEM courses -- just as many of my students do now. So I understand that POV.
I don’t have any illusion that I can police LLM use by my students. But I teach small classes of 8-20 students, and I can see individually which students are engaged and which ones are phoning it in, or less. The proportion of the latter has increased since the introduction of LLMs to the public. I don’t rely on "eyeballing" their written work: I know that my eye is not infallible, and I know the risks of wrongly accusing someone of using AI to write an assignment. So how can I suspect that many students "zone out" and rely on LLMs to get them through the course? Because many of my students, especially the ones I suspect most to have relied on AI, can never give adequate answers to oral follow-up questions about the subject matter of their written work product. My colleagues have similar experiences.
I see not only less command over subject matter, but also less ambition. I allow my students to choose more difficult tranches of questions on their final oral exam if they want a better grade in the course: "Economy Class" (pass), "Business Class" (up to B+), and "First Class" (A-series). During the pandemic years I had many students try for Business and First Class. Since ChatGPT, even my best students rarely try even for Business Class. Correlation isn't causation, but I suspect a lack of confidence, since no LLM is available during the oral exam.
You claim to be using AI to enhance your education. Let’s assume you really are, and other students are as well. That isn't what I'm seeing, based on the evidence mentioned above. So at best we have a half-empty, half-full situation (though I think 50% enhancing their education through LLMs may turn out to be an overestimate during these early years). As a promoter of LLM use, you emphasize the full. As teachers of students who use LLMs to shut down their thinking, my colleagues and I focus on the empty.
My frustration with this behavior isn’t because I can’t control it. I accept that I can’t. It’s because of the impending tragedy for the world when so many people surrender their thinking to the very non-convivial tools that are LLMs. (A world, BTW, that I might not be too dead to experience.) All the more so because these tools are run by oligarchs who believe in political expedience rather than justice.
We already have seen how a Chinese LLM will refuse to divert from the CCP line regarding certain sensitive topics, if it entertains them at all. And we’ve seen how some US social media platforms have modified their processes to conform to the tilt of the current US regime. Check LinkedIn, and see all the people oohing and aahing in the “likes” about increasingly realistic GenAI deepfake videos; very few comments express any anxiety about misuse for political purposes — even though Elon Musk doing various down-to-earth, likable things is a favorite trope of some of them.
When LLMs obey dictators, are they still as beneficent as you think? This is daily becoming a practical, not a theoretical, question.
Please consider that SOME teachers may know better than students about SOME things, because of longer and more varied life experience. Autocracy may be new to many people in their 30s and younger, but many older people have a better idea of what it means, and how it can persist. It's the desire to avoid an LLM-facilitated sleepwalk into tyranny, and not a desire to control, that explains why at least some of us want students to wake up and think.
I'll keep this brief. The fact that you interpreted my comment about averages as a dunk—rather than a clarification for readers who might not know the nuance (not everyone here is a teacher, let alone a STEM teacher)—says quite a bit. It seems you've assumed bad intent on my part, likely because this topic strikes a personal chord. If so, I’m sorry. That said, I stand by everything I wrote. (And for the record, I’ve had great teachers in my life; I hold no grudge against the profession.) Wishing you the best in your teaching.
I interpreted it as a dunk, because your comment took it for granted that the prof who commented on the lowered average in his class made an elementary mistake about means, and would not have already discounted situations involving outliers or wide dispersion had such been apposite. I’d have given him the benefit of the doubt, that he wouldn’t have reported the reduction in average if the data had been so obviously biased. You also emphasized that all teachers believe they know more than their students.
I don’t know why you assume that the means issue strikes a personal chord with me; that is simply a gratuitous insult.
You're so fixated on the average that you're missing the point. I didn’t write that to dunk on Jain or any other teacher. I’m sure he understands averages and outliers. My comment wasn’t about him, but about the broader issue: there’s far more uncertainty around AI’s impact on education than posts like these suggest. I also used it to introduce the idea of a “catalyst of character.” Also, yes, I think "teachers believe they know more than their students."