What You May Have Missed #15
ChatGPT from the sociocultural perspective / ChatGPT from the business perspective / Miscellanea (because, believe it or not, there’s life beyond ChatGPT)
I didn’t publish the WYMHM column yesterday because I came back from a trip and was too tired. I finished it today, so here it is!
ChatGPT from the sociocultural perspective
AI to benefit us all
The most thought-provoking news on ChatGPT (and OpenAI) came from TIME’s tech reporter Billy Perrigo:
I’ve read different takes on the economics of this. Some say the salary is low, whereas others say it’s high by Kenyan standards. Although I think it’s an important aspect to discuss, I won’t go into the ethics of paying people such a pittance (even if it’s above the country’s living standard), because I don’t think that’s the strongest argument.
As Perrigo explains on Twitter, workers “said they were mentally scarred by what they read, and described mental health support as both inadequate and difficult to obtain.” That’s harder to argue against, isn’t it? Doesn’t OpenAI see the irony? They’re employing workers who go through that horrible task daily to eventually build AGI that benefits “all of humanity.”
(Timnit Gebru, founder of DAIR, and others wrote a great piece on this topic a few months ago, if you’re interested: “The Exploited Labor Behind Artificial Intelligence.”)
Is ChatGPT a good student?
People are going crazy over ChatGPT’s ability to pass hard exams and tests.
It seems to pass a Wharton MBA exam (or, if you read the paper, just a few of its questions). Professor Christian Terwiesch, who authored the paper, writes that “this has important implications for business school education, including the need for exam policies [and] curriculum design focusing on collaboration between humans and AI...”
Yet, in the same essay, he advised against using ChatGPT mindlessly: “Be mindful of what Chat GPT3 can and cannot do … we should not forget that it made major mistakes in some fairly simple situations.”
Takes like the one below, which get tens of thousands of likes, are how the gap widens between those who know how AI systems work and those who don’t. (This is going to be as problematic as widespread misuses like treating language models as sources of information.) (I still agree with the second sentence, though.)
ChatGPT also seems to pass the United States Medical Licensing Examination (USMLE). The “results suggest that large language models may have the potential to assist with medical education, and potentially, clinical decision-making,” the authors conclude. The same caveats apply.
I agree with Dr. Heidy Khlaaf here:
Time for adaptation
But maybe it’s not so far-fetched to consider overhauling education. That’s how Sam Altman thinks we’ll adapt to these technologies (full interview with Connie Loizos for StrictlyVC):
“We adapted to calculators and changed what we tested for in math class … this is a more extreme version of that, no doubt, but also the benefits of it are more extreme as well.”
Although I understand where he’s coming from, I don’t buy his analogy. Calculator designers know very well how the elements work—individually and together—to form the whole that is a calculator. Also, they can 100% reliably predict the calculator’s behavior from the inputs. Neither is true for ChatGPT, and that’s a big problem—one that may exceed its “more extreme benefits.”
But that’s a conversation about how the world should be. In this world, ChatGPT has gone viral, and universities are starting to revamp how they teach. The NYT’s Kalley Huang reports the story of Antony Aumann, a professor of philosophy at Northern Michigan University, who caught a student using ChatGPT to write what he considered “the best paper in the class.” Huang writes:
“Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students have to explain each revision. Mr. Aumann, who may forgo essays in subsequent semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.”
Professor Terwiesch, who evaluated ChatGPT on the Wharton exam, is also considering alternatives:
“It is now up to us to determine what to do with this increased productivity. In my view, we should return it to the students in the form of extra meetings outside class, personal attention, joint social activities, or the design of new course materials.”
Education may not look anything like today’s in a few years.