26 Comments
Aug 3, 2023 · Liked by Alberto Romero

Fantastic, thanks for another great post, Alberto.

One piece that teachers and administrators NEED before the 2023-2024 academic year is clear guidelines from the style gurus at APA, MLA, Chicago, etc., about how to cite/attribute AI-generated work. There are some initial blog posts on the topic, but we need definitive guidelines so we can approach GenAI work with full transparency: "so you used GenAI, this is how you cite it".

An interesting argument in this arena is what exactly is being cited -- there is no 'person' to attribute a ChatGPT response to, and no way for a reader to go find and check the response even if a citation is given. Attributing text to the LLM seems reasonable, but there is little precedent for attributing original work to a non-human entity.

As a baseline, one compelling idea is to start requiring student essays to include appendices of GenAI prompts used and the text responses received back.

Would love your thoughts on this in a future post, Alberto. And if anyone in the reading community has working attribution guidelines that students could use in essays, please share.

author

Important topic! I see the problems you mention, too. It'd be great to explore the topic more in-depth.

Max,

I have given your style question some thought:

I think we can figure this one out in advance of actual movement by the style guides. Well... we kinda have to, don't we?

Traditionally, we cite text we are quoting, paraphrasing, or relying on in some essential way. If we train our students right, they will not be quoting from ChatGPT. It produces tertiary text, scrubbed of all referentiality to actual sources.

So, the question becomes: Is a works cited entry the proper place to show reliance on assistive technology?

For example, do students cite the use of Grammarly or Spellcheck in their works cited?

No, we reply, but these new tools are different.

I would agree. And I personally think a works cited entry is probably the best place to record assistance of this new variety. But what would we include in such a citation?

I suggest:

(1) An introductory reference to the agent of assistance, i.e., the LLM in question

(2) The company/organization that created and maintains the LLM

(3) The model number of the LLM, or the snapshot date if a previous version was used

(4) A link to a transcript of the interaction with the LLM in relation to the document in question

(5) The last date accessed

I am thinking about maybe one more category, one we would have to help the students understand the value of: Categories of Use (Research, Outline, Sentence Generation, Paragraph Generation, Editing, Revising, Proofreading). This way, students take ownership over their process of engagement. (Perhaps the citation could also be a place where students indicate which paragraphs or sections they targeted most while writing.) The benefit here is that we teachers would get the opportunity to reteach these basic categories, but with the glittery value-added of a new technology as context. The trick--perhaps--would be to limit the size of the citation. But of course, we are doing something totally new, and we don't necessarily need to limit a generative AI citation to the 2-3 lines traditionally allotted to a works cited entry.
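
To make those components concrete, here is a minimal sketch of what such a citation record could look like as structured data. This is purely illustrative--the GenAICitation class, its field names, and the example values are my own invention, not a format any style guide has endorsed:

    # Hypothetical sketch of a GenAI works-cited record -- not an official
    # MLA/APA format (none exists yet as of this writing).
    from dataclasses import dataclass, field

    @dataclass
    class GenAICitation:
        model_name: str        # (1) the LLM in question, e.g., "ChatGPT"
        organization: str      # (2) who created and maintains the LLM
        model_version: str     # (3) model number or snapshot date
        transcript_url: str    # (4) link to the interaction transcript
        last_accessed: str     # (5) last date accessed
        # Proposed extra category: how the LLM was used
        categories_of_use: list[str] = field(default_factory=list)

        def to_works_cited_entry(self) -> str:
            # Render the record as a single works-cited-style line.
            uses = ", ".join(self.categories_of_use) or "unspecified"
            return (f"{self.model_name} ({self.model_version}), "
                    f"{self.organization}. Transcript: {self.transcript_url}. "
                    f"Accessed {self.last_accessed}. Used for: {uses}.")

    # Example with invented values:
    entry = GenAICitation("ChatGPT", "OpenAI", "GPT-4 (May 2023 snapshot)",
                          "https://example.com/transcript-link", "2 Aug. 2023",
                          ["Outline", "Proofreading"])
    print(entry.to_works_cited_entry())

The point of the sketch is just that the five components, plus Categories of Use, are enough to render a readable entry at whatever length we settle on.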

Hope that helps get the conversation going!

Nick at Educating AI

Nick—thanks for your thoughtful reply. I hear and like what you are saying. One benefit of students submitting all their work via the LMS/Google Classroom now is that I don’t worry about print length and paper use. What I am thinking is that we just start requiring appendices on all student essays with the full texts of prompts and GenAI responses.

But surely, APA and MLA must be working on citation format updates for GenAI, no? And the College Board needs to issue a statement for all their AP courses.

- Max

You would think APA, MLA, and AP would have all of this ready by now. But where is it? MLA's 1st Working Paper only indicates they are at the very beginning of thinking through the larger ramifications, but I am hoping there is a smaller committee somewhere working through the citation particulars. We shall see. Until then, we must improvise.

Aug 2, 2023 · Liked by Alberto Romero

One of the most exhausting parts of chatbot-enabled or chatbot-produced papers will be looking for all the hallucinated/delusional claims. I will have to get out the text and find all the quotes that are fabricated, etc., if I want to show the chatbot produced the paper (or even to fairly grade the paper, since a student can just throw in some BS like ‘on page 57 the author writes...’ and it’ll all be fabricated).

Almost every encounter I have with a chatbot to see what it can do with my research area is peppered with complete fabrications. So it’s easy to ‘detect’ but will be exhausting, as I don’t generally fact-check every citation; I read for content. Now I have to be looking at the text and asking, ‘Is that quote in there?’ Usually I will be able to tell, but...

So I am trying to think of a totally different way to teach. It’s not so much that I am obsessed with students cheating or with catching them--but this is going to drive me crazy.

I can’t tell you how depressing it is for me when students plagiarize... it just crushes me somehow. Like, YOU COULD HAVE WRITTEN IT! Why???? This is like offering candy laced with heroin to a certain kind of student. Also, it creates a horrible narrative where the student might think ‘why do I have to learn how to write now? Machines will do this for me’ without realizing that the point of ‘learning to write’ is ‘learning to think’, and machines don’t do THAT for you, one hopes. And if they DO do that for us, then what is the point of us?

I will figure out a way to do it. If the class is one of those collaborative classes and the vibe is right, I will get them to help me figure it out, and discuss my strategy for chatbot avoidance with them.

Obviously, students are young with decades ahead of them, so it makes sense to look not only at what's happening now, but to try to look ahead too.

This is only the beginning of these chatbots, and it seems safe to presume they will keep getting better over time. At some point the question of why we need to learn how to write will become more and more valid.

The white collar world is being automated, just like the blue collar factory world was. It probably makes more sense for students to become prompt experts than it does for them to be constructing sentences by hand. You know, in the factory it's those who can operate the robots who still have jobs, not those who work with wrenches.

The white collar world assumed automation was something that happens to somebody else, and so there will likely be a tidal wave of shock for a period of time as white collar workers adjust to the new realities.

At some point, most professors will be replaced too. After all, a great deal of teaching in the universities is already being done by grad students, because they are cheaper. Bit by bit AI will replace both professors and grad students, for the same reason. Universities may be replaced too. Why not?

The upside could be that, over the longer run, this is how college will be made affordable for the masses, because in the future only the college educated will survive.

You are essentially envisioning a world in which we have a tool for which there is no end user.

If the professors are replaced and the students do not need to learn because the AI will do most of their jobs, then there is no end user for the chatbot. University education is a system whereby understanding is passed between people—professors, research assistants, grad students. It is not a library or a static holder of knowledge. It is a place where knowledge is produced.

A chatbot is like a library where the information is combined, dictated to you, etc. Yes, some of this is novel, but most of it cannot be, because understanding whether it is worthwhile or novel is far beyond the capacity of a chatbot now, and probably forever. This is because knowledge is produced for human needs. So a human is required to assess whether the knowledge matters. A chatbot cannot do this assessment because a chatbot has no human needs.

Chatbots convey knowledge statically. They cannot create knowledge. Universities involve the creation of new understanding, not merely the preservation of old understanding.

Writing is part of thinking. Writing is also the other half of literacy. The chatbot is producing words and ideas. You are envisioning a world in which people are not directed towards creating words and ideas; something else produces the words and ideas for them. So the training in words and ideas becomes irrelevant. But then why produce words and ideas at all?

I suppose it could happen. People could become uninterested in producing their own communication. Communication could be automated. Then communication skills would mostly die. Then the chatbot would hardly be necessary; no one would be interested in learning things at that point. The chatbot would mostly do instruction manuals for the different tools that we use to live our totally passive existence.

However, I do not think the chatbot will ever produce the kind of communication that humans produce because most of human communication is based on, about and dependent upon human experiences. For example, I do not see how a chatbot could do a news story adequately. News stories require reporting on experiences. Someone has to be there. You cannot draw on pre-existing online information for a news story, because it is something that is happening that must be witnessed and interpreted. You have to actually go and write the news story. Similar things will be true about communication with a business. The people in business communicate their experiences—ones that they have just had. This communication requires an experience and an interpretation of that experience. The chatbot can communicate data but cannot interpret actual events (unless someone feeds all the descriptions of those events into the chatbot, in which case they are doing the writing).

A huge amount of writing is like this, in fact. It is communication from one human being to another about what happened to them—what they saw, what they thought, their interpretation of what happened, the cause-and-effect relationships they perceived in the world.

One has to be an on-the-ground experiencer to communicate in this way. When we convey information, we are not merely combining past information that already exists. We are creating new information. A chatbot cannot create new information about many different kinds of facts and events—it has to be fed the information by the humans. Then the humans have to interpret it.

There is a way that chatbots could be forced on people and then impoverish their activities significantly. However, they would not *replace* the activities in the form that we need them.

Great post, thank you. You make all kinds of great points that I cannot refute, because like everyone else, I don't really know what is coming; I am just guessing as best I can.

Probably a lot of this depends on how far out we are attempting to look. The next few years is one thing, the next few decades another, the rest of the century something else again. Sometimes I like to think back a century to 1923, and remind myself that the people of that era often could not have speculated usefully about what was to come in the next 100 years. That's probably where we are too, even more so.

You may be confusing today's chatbots, which are still quite primitive, with what is likely coming in some form. As an example, I don't really see why an AI-powered robot couldn't at some point be an on-the-ground experiencer. You know, today we have drones; tomorrow, who knows?

It's also not entirely clear that the future will be all about us humans and what we want and need, etc. That may be looking to the past instead of the future. As an example, have you been following the congressional hearings into UFOs? AI and UFO pilots are two credible contenders for replacing our dominance on this planet.

One thing that seems clear is that those who have invested the most in today's status quo will likely fight tooth and nail to hang on to their status positions. You know, the factory workers tried to fight the robots, but in the end the robots largely won, because they can do the job more efficiently. The Catholic Church, which dominated the Middle Ages, tried to hold back the Enlightenment, but in the end the Enlightenment won because it could provide more of what people want.

I dunno. I'm 71. I'm not even entirely sure why I'm interested in all of this, as whatever happens next is not going to include me.

It definitely depends on the development of the tech. If we get seeing robots and AGI--yes, they could go out and interview people for a news story.

We’re not very close to that right now. If the future is like the past, it will develop...eventually. Lots of hurdles to overcome first though.

LLMs are going to automate a very significant number of businesses, ones we cannot predict. The workers will *have* to step aside.

But human-to-human communication is fundamental to our species. It’s basically our superpower and how we overtook the planet--by planning things with other humans.

I don’t think we have any reason to expect a world in our lifetime or even in our children’s lifetimes where chatbots acquire our full capacities or are the USER of themselves, and humans become irrelevant. We simply don’t have the tech.

We’re going to get pushed into a lot of crap. But humans are mainly interested in other humans and need to communicate with other humans. That won’t change.

Just for fun, I'd like to push back against this a bit...

"But humans are mainly interested in other humans and need to communicate with other humans. That won’t change."

I would propose that we aren't interested in other humans so much as we are in what we obtain from other humans. I don't just mean goods and services etc., but the psychological benefits too.

Human to human interaction requires a lot of compromise and negotiation, because our fellow humans have needs they want met too. AI doesn't have needs, so we don't need to compromise and negotiate with AI. This is a huge competitive advantage for AI.

We're already seeing sites which offer AI "friends" and the like. Before AI, huge numbers of people flocked to the Internet because of the level of control it offered them in their social relationships. Other than my happy marriage, I have largely abandoned real-world social interaction for online social interaction over the last 30 years, because here I can do nerd talk all day long, a feature unavailable in the real world.

How much difference is there really between talking with a human online whom we know nothing about and will never meet in person, and talking to a bot? We see a big difference today because we're creatures of the past, but will those born into a world filled with talking bots still see a big difference?

I suspect who we really want to communicate with is whoever, or whatever, will give us the most of whatever it is we want at the lowest possible cost.

Good conversation, thanks!

Yes, that's possible. It will be interesting to see how it shakes out. I enjoyed talking to you as well!

Great job, Alberto. You are a fantastic writer. I love how multi-perspectival your prose is--and yet at the same time--so clear and easy to read!

I had several breakthrough moments while reading this series.

As you indicate throughout, the key is to shift the practice. And so many of the changes are relatively easy to make. Why should we cling to the long-form work-at-home essay as if it were the only way to assess writing skills? When did we get locked into this pedagogical approach, and why? It seems like the time to do some deep study into the historical, cultural, ideological, and technological conditions and narratives that motivated this method of production and assessment. If only Foucault were around to do one of his deep archeologies... I guess we will have to attempt one in his absence (RIP). But that will probably have to wait for another day or another Substack. It seems to me that perhaps LLMs are the necessary "kick-in-the-pants" for educators to reappraise the "fit" of the "long-form essay" assessment model for the contemporary world.

I also like how you mention grading. To me, that is the other "kick-in-the-pants" we educators need. We grade our students to death. And we do so inconsistently, inequitably, inaccurately, etc. Our students are primarily motivated by points, and so when a tool comes along that saves time and energy, are we surprised when they run blindly in its direction? Our grading system has crushed the spirit of learning. Kids running to use ChatGPT is just a symptom of a much larger and older problem.

This year, I am shifting to a standards-based 0-5 grading scale and plan to tailor my approach to generative AI to fit inside this more equitable approach. Students will write rough drafts in class and by hand (unless IEPs direct otherwise) to achieve level 2. Students will type up their rough drafts and do initial edits in a single class (I am blessed with 85-minute block periods) in order to achieve level 3. For levels 4/5, students will be encouraged to use all the AI tools at their disposal. By that stage in the game, I will have had sufficient time with their writing in a more nascent, unassisted form. I personally believe that seeing student writing--unassisted--will continue to be a crucial part of writing pedagogy, particularly in primary and secondary school. That said, I will also have time to work with them in class on how to most productively engage with new technologies, as they most certainly will be expected to wherever they land professionally.

I am beginning to work on a larger model for AI-enabled language curriculum on my own fledgling Substack, if anyone is interested. I would love some help from other folks who find themselves stuck in the middle of this predicament. I do believe--with Alberto--that we can work together to find safe and non-invasive solutions to the introduction of generative AI into our classrooms and our world more generally.

https://open.substack.com/pub/nickpotkalitsky/p/educating-ai?r=2l25hp&utm_campaign=post&utm_medium=web

Be well,

Nick Potkalitsky, Ph.D.

author

Hey Nick, loving your approach. What do you teach and how old are your students if you don't mind me asking?

The only thing I'd add is that I see grading as useful--it's a more objective metric than most others, if not all--but it shouldn't be an end in itself, only a means for learning. Achieving that ideal is hard because students will unintentionally follow Goodhart's law and optimize the metric as the goal.

I have taught middle school, high school, and college. Right now, I am teaching 9th grade English, primarily. This is a good question. I think we all should preface our proposals for particular strategies by specifying target age groups or indicating that we are speaking to a more holistic spectrum. A good implicit reminder. Goodhart's law is a very real phenomenon and is lurking around every corner. Be well. Loving your research and writing.

Great write-up as usual.

I'm a CS professor, which, admittedly, is probably the one field where gen AI is almost certainly a net win out of the box, so I understand my experience won't transfer easily to the humanities or social sciences, but I'll share it anyway.

I teach the intro to programming course, and we've always had to deal with cheating, because tools that make it easier to write large chunks of code--unlike tools for the same purpose in general prose--have been around for decades. So we use a hybrid evaluation system with in-person exams and longer-term take-home projects.

The in-person exam is where you test for individual low-level hard skills like actually writing code--i.e., knowing the syntax--and mastering data structures and algorithms, and perhaps for somewhat more abstract skills related to problem-solving, but in very narrow setups.

The long-term projects test for planning skills, communication, documentation, and the capacity to pivot and adapt to changing requirements.

Now here's the kicker: students can cheat in person, but it's much easier to cheat on projects. So we make them present their projects and go very deep into explaining all their design decisions.

I don't really care what they use, as long as they can explain their whole process. Code generation tools don't really change anything in this picture for me; they're just another hammer in their toolset. As long as they can stand behind all their design decisions and explain what each piece of code, down to a variable assignment, is doing there, I'm fine. My experience is that cheaters are super easy to discover if you ask deep enough in the exposition.

This deep face-to-face evaluation does require a significantly bigger effort from evaluators, but we've already been doing that in CS for years. I understand other disciplines that have relied more on asynchronous evaluation have some reckoning to do.

author

"So we make them present their projects and go very deep into explaining all their design decisions ... My experience is that cheaters are super easy to discover if you ask deep enough in the exposition."

Loving that. It makes sense and matches my intuition for how it would work. I don't think there's any fundamental difference between what you do and applying it to the humanities.

Alberto, I am going to use a very played-out phrase here:

This is the way.

Well done.

Aug 2, 2023 · Liked by Alberto Romero

An adaptation strategy, in the university and beyond, assumes that humans can adapt to changing conditions as fast as, or faster than, those conditions change. This may be an outdated assumption, one that arises from the past, when humans were adapting to gradual evolutionary changes in the natural environment and to more slowly changing social conditions in earlier historical eras.

1) If it's true that knowledge development feeds back upon itself, leading to an ever-accelerating pace of additional knowledge development....

2) And if it's true that human beings have not fundamentally changed in thousands of years, then...

3) The ultimate bankruptcy of adaptation strategies in the 21st century is revealed.

author

Hey Phil! You say that "human beings have not fundamentally changed in thousands of years." This is true from an evolutionary perspective but not from a cultural, societal, or learning perspective. We adapt quite fast to big societal changes--for now, much faster than those changes occur. I don't know about the 21st century, though. I don't think we've managed to adapt to social media, for instance. But I'm seeing pushback I didn't see just a few years ago. I'm hopeful about generative AI, too.

Hey Alberto,

Well yes, it does depend on what particular angle one is examining. On the surface cultural level, I agree, we're adapting all the time. People no longer yell at me for having long hair, etc. :-) It does depend on what meaning one assigns to "fundamental change".

I mean things like this: chimps are obsessed with defending and expanding their tribe and their territory, and routinely get into conflicts with other tribes of chimps regarding the boundaries of territories (see Chimp Empire on Netflix). We're still doing the same thing today in Ukraine and many other places. Behaviors that have been with us since before we were human are unlikely to be edited out.

If true, then as AI further emerges it seems pretty much certain that it will be put in service of these ancient conflict agendas, which all evidence suggests are beyond our control.

If it's true these ancient conflict agendas are beyond our control, then why am I still writing about them???

Ah, wait, I've got it! To create more conflict!!! :-)

Most teachers would know what to do. The prerequisite is that teachers have the flexibility and autonomy within the system to change. This is not the case. The system has tremendous inertia due to centralisation. It is a matter of policy changes, which most teachers can't make, that would free them up to act. Teaching needs to be decentralised. Faculty need to be trusted. Currently most institutes are sitting ducks when it comes to AI's impact, as faculty's and teachers' hands are tied when it comes to the radical and flexible change that is required. The COVID pandemic saw most institutes simply force through business as usual. Students are still recovering today. AI is far less threatening. I expect very little change in the short term.

author

Hi Michel.

"The prerequisite is that teachers have the flexibility and autonomy in the system to change." I agree with you that this is key. I didn't touch on that because I was focusing on individual actions teachers can take. I believe that although people often know what they should do (the recipe for something) they don't do it for other reasons. It's those reasons I was trying to go into here (e.g., it takes more effort, fear of--and resistance to--change, pessimism, lack of understanding of the technology, etc.).

I deeply agree with you anyway. I think the largest improvement can only come about as you say: through collective action and systemic changes to the educational system. I don't believe I'm the person to write about that, though, and readers wouldn't take away much practical value from it!

I will share my thoughts on academia in the ShadowLands on Substack one day. The current topic is the key to a great education (the infinity mirror). But to implement it, other ingredients are needed.

Aug 3, 2023 · Liked by Alberto Romero

Hi Alberto. Excellent newsletter. Regardless of whether teachers want AI or not, these tools are here to stay and will tempt students to use them, with or without authorization. Here, educational institutions and teachers must develop the best possible strategy, as you explain in this newsletter, incorporating AI in one way or another. I don't know if you drew on Ethan Mollick in developing this newsletter; thanks to you I have been reading the professor for a while now, and I personally think he is on the right track in showing us that it is possible to make positive progress in education, AI included. Thank you.

author

Hi! I consider Mollick a good example of a pioneer in adapting the way we educate to generative AI, without a doubt. I don't always agree with him, since he shows outsized optimism, but I think it is valuable to know the kinds of profiles who embrace new technologies without reservations. Then each of us can draw our own conclusions!
