64 Comments
Feb 3, 2023 · Liked by Alberto Romero

Are ChatGPT and similar systems a psychological and emotional help to lonely people? And if so, will more interaction between lonely people and ChatGPT lead to despair?

author

I wrote my thoughts on that exact topic here: https://thealgorithmicbridge.substack.com/p/how-ai-can-help-with-the-loneliness

To summarize, I think there are both benefits and downsides but, for now, the downsides outweigh the benefits. ChatGPT is an interesting research tool but nowhere near capable enough to solve such a complex problem. Also, as I argue in the essay, I think social problems require--at least partially--social solutions. Technology may help but won't be the primary means of solving them.

Feb 4, 2023 · Liked by Alberto Romero

One way to approach this question could be to ask...

Is our online engagement with a random selection of total strangers on the Internet leading to despair? If these strangers turned out to be bots, would we really care so long as we were having the experiences we seek?

author

Yep, but this assumes current chatbots can *perfectly* pretend to be a person. Although we're quite bad at telling humans and bots apart, their flaws give them away sooner rather than later.

But if the technology improved to the point that a chatbot could simulate a human, would that be good or bad? Would it increase people's wellbeing? Great question.


My guess is that chatbots won't have to perfectly simulate a human. It may be more important that they excel at giving us whatever it is we want. If they do that well enough, we may meet them halfway and not be as fussy about the simulating-a-human part. As an example...

The Internet is full of us imperfect humans. And so wherever we go, and whatever we wish to discuss, it's going to be an imperfect conversation. To the degree a chatbot is willing and able to address our particular interests, and do so in a manner more interesting and knowledgeable than humans can, the humans may not be able to successfully compete for our attention.

My guess is that people in today's world would find ongoing chat with a bot to be weird. But people born into the coming world, where highly personalized bots are everywhere, would probably consider chatting with bots completely normal.

To see where we're going, we may need only look at what we're doing today. Many of us have traded real-world friendships for online connections because we have a great deal more control over the online connections. You know, if I want to talk AI, all I need to do is click into this blog. In the real world, talking AI is a lot more complicated and a lot less convenient.

We might ask, what is the difference, really, between a bot and a total stranger whom we'll never meet?

author

I see your point and it works if what you want from the chatbot is utility. It wouldn't work to tackle loneliness, for instance, as Tatiana said. But I agree with you, Phil, that bots don't need to be imperfect the way humans are, and in that sense, they'll eventually be much more useful for many tasks.


If connecting with random human strangers on the Internet works at combating loneliness today, why wouldn't many people willingly connect with bots tomorrow? Probably not us today, but the coming generations born into a world where talking with bots is considered normal.


There is no one-size-fits-all answer to this question. Every one of us sits on a spectrum of -- what shall I call it -- machine tolerance. On one end are people who have happily participated for years in a remote relationship they got a lot of benefit from, but who would drop it instantly if they found out it was a program; on the other end are people who prefer machines because, for one thing, you never have to worry about what a machine thinks about you. And we all move back and forth on this spectrum over our lives. (According to Fast Company, half the population says: "I would be hurt or offended if I found out my Valentine's message was written by a machine / artificial intelligence." So: half felt one way, half the other way.)

Bottom line: we should never close any doors. And we should avoid making generalizations that purport to encompass everybody all the time.

The only common element is that wherever you are on the spectrum, you need to be told right up front who or what you are dealing with. Both ends would want that.


Will European countries drop out of the AI technology improvement process while they search for a way to apply some sort of regulation to their local industries looking to work with AI technologies? How able do you expect them to be to offer competitive products and services while they are still deciding how they want to regulate AI in Europe?

Feb 4, 2023 · Liked by Alberto Romero

That's a great question. Alberto, what would be your advice to European policy makers to support the development of AI in a timely manner and at the same time protect people from possible changes, for example in the labor market? Do you think these are generally the right priorities, or would it be more appropriate to focus on other aspects of AI?

Thank you very much for your articles.

Feb 4, 2023 · Liked by Alberto Romero

Protecting people from AI is a fool's errand, because AI, the Internet, and the human condition are global phenomena beyond the control of any government or panel of experts. Evidence: we can't even control reckless driving.


There may be an important cultural difference between US views and European views. In the US, we expect people to take a reasonable level of responsibility for their actions and we welcome individual creativity and experimentation. We know that there will be issues and expect them to be fixed with incremental improvements. In Europe, there seems to be more of a culture where people expect or demand that governments protect them from bad outcomes. The community looks for "near perfect" solutions and is less accepting of risk--of things breaking and then being fixed with incremental improvements. That need for things to be all worked out is fine, but it means Europe will not be able to keep up with rapid innovations coming from more "experimental" cultures like the US.

author

Also, this idea of the cultural differences between the US and Europe is a very interesting topic to explore in a future essay. I'll keep it in mind!

author

Hi John, great question and great response to your question! I agree pretty much with everything you said. I'm European but in close contact with the US way of life and I recognize the cultural contrast you mention.

Simplifying, I'd say Europe prioritizes the general well-being of the people whereas the US prioritizes progress. When those two things don't go hand in hand (which happens a lot with emerging technologies), the contrast becomes much more evident. AI is possibly the most salient example right now.

Answering your question, I don't think Europe will be able to compete against the US or China if it establishes strict regulations--and, even so, I still think that's the better approach. It's simply not the most competitive one given the current global socioeconomic framework.


I agree on the cultural differences between Europe and America, but I'm less sure that it matters in this case, because AI will be global, like the Net. As an example, it's not going to be that easy to keep AI bots coded in North Korea from having influence in both Europe and America, and vice versa.

author

AI will be global but governed by different sets of regulations. If Europe restricts the use (and research and deployment) of AI more than the US does, we may fall behind (we already are, by a lot).


But in order to regulate AI we would have to have leverage over those creating AI. On the global stage that's not possible. Who in the US or Europe is going to regulate the Chinese Communist Party, leaders of the world's largest nation?

Even within national boundaries effective regulation of anything is not at all reliable. Law breaking is rampant at every level of society. While it's always been that way, as the scale of available technologies grows, so does the threat from those unwilling to submit to regulation.

author

Thanks Nick, appreciate it! See my response to John's questions below.

Feb 3, 2023 · Liked by Alberto Romero

If you could create a robot/AI for any task, small or large (answering your phone, creating calendar events, clothes-shopping accurately for you), what task would you automate?

author

Great question, Anne. Despite my involvement with AI, I've never given this possibility much thought. Maybe I know too well where and how it fails, so I wouldn't trust it for anything too important or too complex. The Roomba is probably one of the best applications: safe, useful, simple...

If I could somehow be sure the system showed an acceptable degree of reliability and skill, a virtual assistant like Iron Man's Jarvis would be super awesome.

What about you?


Hi, my name is Claire. I own a creative small business. My mission this year is to help creatives stay informed about the advancement of generative AI, including GPT-3 and ChatGPT, as well as generative image, audio, and video tools, etc. My concern is that right now most of the people actually learning and adopting these tools are from the tech side of the bubble. I've been feeling a little pushback from my creative community, and I feel kind of alone in my mission: I can't find another person from the creative community who's passionate about keeping creatives up to date so we can stay competitive for the future of the creative industry. How should I approach this, and what kind of content should I be sharing and creating to help me on this journey?

I already started posting on my Instagram and TikTok pages, and I put together a list of resources, people, and tools to follow on Notion called The AI Stuff Creators and Artists Should Know.

What is the most effective way to help spread the word?

author

Hi Claire, I think Instagram and TikTok are pretty good for distributing content. I'm not on those apps so I can't really help you there, but as long as you keep at it consistently, it'll pay off. The bubble will pass, so it's good that creators like you share content that's honest and not solely intended for clicks. That's what I try to do, and I think it's the best approach if you want to give long-lasting value to your audience--instead of using them to ride the current wave only to leave them once it fades.

Mar 8, 2023 · edited Mar 8, 2023 · Liked by Alberto Romero

Many technologies, possibly most, define themselves around one core application. Generative AI has certainly avoided this temptation. Perhaps it will continue to do so. If it does not, I have a suggestion. I think in ten years we will see that the dominant application of the technology will be in allowing humans to have robust, fully interactive conversations with machines. Machines of all sorts and types and functions. To take an example, perhaps an extreme one, imagine being able to talk with your house about your needs, and imagine that it had useful reactions. It will be like having one of those old family servants known to us from the writings of Dickens.

author

Yes, I agree that's one of the most notable applications, if not the most notable, that language models can offer. That's why ChatGPT became the undisputed winner of generative AI so far in terms of interest, attention, and usage.

Feb 4, 2023 · Liked by Alberto Romero

I am currently writing a series of articles about AI for a Danish healthcare research outlet. One of them digs into the hype vs. reality question, since there have been a lot of predictions about the stunning things AI can do for the healthcare sector. In practice, however, the field has a hard time living up to these promises. Research studies showing great results rarely work all that well in reality. Is this a general problem for AI, or is it especially pronounced within healthcare research?

author

Hi Thomas, great question. It's definitely a general problem for AI. What makes healthcare a special case is that it's a high-stakes category. You can't simply release a medical ChatGPT and let users figure out the flaws of the system while it indirectly kills people with bad advice.

The fact that ChatGPT has blurred the line between what's hype and what's real in healthcare, too, is super worrisome (maybe you saw that it recently passed the USMLE and served as inspiration for a new feature of Glass Health, GlassAI, which can supposedly generate differential diagnoses).

Feb 5, 2023 · Liked by Alberto Romero

Really good and simple answer; I might quote you on that if that's OK - thanks! I saw the story about the USMLE and found it mildly worrisome. You probably read about Google/DeepMind's Med-PaLM too.

author

That's ok! If you need more, email me at alber.romgar at gmail dot com

Feb 4, 2023 · Liked by Alberto Romero

Congrats on the 8,000 subs Alberto! Way to go. Jealous!!

Here's a menu of questions to give you some choices. Answer any that interest you.

QUESTION 1: What are the compelling benefits of AI which justify introducing what could be another existential risk? Or, if you prefer, what are the compelling benefits of AI which justify providing psychopaths like Putin with even more power?

QUESTION 2: Is the marriage between violent men and an accelerating knowledge explosion sustainable?

QUESTION 3: Why do I have literally 1,000 times fewer subscribers than you? :-)

author

Thanks Phil!! I started with 10x fewer subs less than half a year ago, so it's definitely possible to achieve!

Your question 1 is the central theme of (potentially) one of my next essays. Very interesting question without a satisfactory answer: Why are we working on AI (arguably toward AGI) if it entails an existential risk? Why not stop all efforts right away?

Your second one is also very interesting. Is violence compatible with superpowerful tech? I think we already know the answer to this. AI may take it to the next level.

For your third question, I'd say consistency and luck are the main factors. And the first prepares you to catch the second when it passes!

Feb 5, 2023 · Liked by Alberto Romero

Thanks Alberto, consistency and luck, sounds right. I'm only 2 months in, so I need way more consistency.

One of the obstacles I see with #1 is that we look to experts for answers to such questions, and the experts have made a huge personal investment in becoming AI experts. And so, they're no longer fully objective observers of AI. You know, what expert in any field is going to argue that their field should be abolished?

As to questions #1 and #2 (especially #2), I'm working up one possible answer on my blog. If interested, see the world peace section. This may also answer #3 to a degree, as I, um, seem to specialize in unpopular topics.

author

"the experts have made a huge personal investment in becoming AI experts. And so, they're no longer fully objective observers of AI" this idea is worth exploring further. One of the most notable examples of this for me is Yann LeCun. He has incentives beyond looking for the truth to keep going a given direction.

But I think that's just one part of the problem. Some people are invested in a way that's even harder to change. They've tied their identities to being AI insiders. They want to be the ones who see it coming first. It's a mix of ambition and curiosity. I'll reflect more on this one...

Choosing popular topics is also part of the equation, for sure!!


Hi Alberto, thanks for your reply.

It's not just AI experts who suffer from this bias, but the entire intellectual elite class. When safe, conventional thinking can solve a problem, experts are useful. But when safe, conventional thinking has proven to be a longstanding failure (as with nukes), the experts seem unable to explore out-of-the-box, unconventional ideas.

Yes, you've been wise in choosing a topic which is currently very hot. I applaud your business skill here, and expect you to be rewarded for it.

I get the concept, but my mind seems determined to write what readers don't want to hear, because so often what they do wish to hear doesn't work. As an example, I'm currently working on an article series about a path to world peace. I can make a rational case for my plan like a good attorney, but, um, this plan is going to be really unpopular. Such intellectual challenges fascinate me, but they aren't a path to business success.

Feb 4, 2023 · Liked by Alberto Romero

I am personally quite unimpressed with ChatGPT when its results are compared to what a human being can produce. However, it is understandable that educational institutions are worried about its possible misuses, particularly in the context of high-school assignments, which ChatGPT can easily reproduce (since all they require is the summarizing and regurgitating of widely and unambiguously accepted knowledge).

I have a few questions regarding these issues; they are all quite general:

1) Could you comment on strategies which could employ tools such as ChatGPT for enhancing education, rather than seeing them as a danger to education?

2) Could you comment on issues of style (e.g., the fact that in academia a particular kind of "disengaged" style is promoted, while in fact what will ultimately distinguish ChatGPT products from human products is precisely a question of 'unique' style)?

3) It is likely that, because of the willing adoption of tools such as ChatGPT, the production of text will drastically increase (making a lot of journalistic/editorial/etc. work obsolete), but because of this it is also likely that the need for curatorial and summarizing services will increase as well. What this means, abstractly, is that there will be massive amounts of texts being produced from "basic units" of info (e.g. "write a friendly email to my coworkers that I will not make the deadline"), which at the same time, on the other end, become summarized back into these "basic units" because people don't have time to parse through vast seas of "useless" grammatization (e.g.: "write me a summary of all the incoming email regarding the project deadline"). This leaves me wondering, not whether there is any objective reality to the "basic units" (which I don't believe there is), but where the belief in an objectively summarizable (and, at the same time, expandable yet expendable) reality comes from. Perhaps the question is even this one: where does the biased focus on language as a tool come from? (Perhaps it is a vestige of capitalist product-oriented obsessions?) Why, when we are impressed by the poetic abilities of ChatGPT, can we not also appreciate language in general as the basic social fabric that holds our relational dynamics together? The "bias" discussion on the partiality of systems seems not to have taken actual critical hold, because we continue to promote systems which supposedly parse reality in "neutral" and unbiased ways. This last comment/question is quite abstract, but I hope you're able to comment on it.
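
(To make that loop concrete, here is a toy sketch of the round trip I have in mind. The model choice is arbitrary, and `ask_llm` is just a generic single-turn helper, not any particular product's API:)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_llm(prompt: str) -> str:
    # Generic single-turn call; any chat model would do here.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# One "basic unit" of information...
unit = "I will not make the deadline"
# ...expanded into polite grammatization on the sender's end...
email = ask_llm(f"Write a friendly email to my coworkers saying: {unit}")
# ...and compressed back into a basic unit on the receiver's end.
summary = ask_llm(f"Summarize this email in one short line: {email}")
# If the loop works, summary ~ unit: all the generated prose was scaffolding.
```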

Thanks!

author

I agree with your initial remark. I also understand that most people who have tried ChatGPT did so without any previous contact with modern AI. That has to be impactful.

1. I've thought about this since ChatGPT came out. I'd say we have to do two things. First, learn and teach what ChatGPT is and what it can (and can't) do; it's important not to assume it can handle every text-related task at the same level of performance. Second, within the boundaries of what it can do, it should be integrated into the normal workflow only after the student has learned to do the things ChatGPT will do for them (pretty much like we do with calculators today). These aren't super concrete measures, more like general guidelines, but it's a starting point.

2. One possible (hopefully far-away) outcome is that "human styles" will become a kind of premium luxury service. AIs would take the role that industrially manufactured goods took a hundred years ago: they'll cover most of the demand, and humans will work for the highest class--those who can pay the cost. We will adapt to this. In the case of academia, maybe papers will become more readable (more "stylish"), maybe AI will play a big part in writing them (once it's reliable), or something else.

3. I think I get your question, but I'd love it if you could rephrase it! I think it all comes down to the purposes of language. One interesting consequence of ChatGPT is that we're thinking much more about language, how it works, and how we use it.

Feb 4, 2023 · Liked by Alberto Romero

Hello Alberto

1) Have you heard of Rain Neuromorphics?

Recently, I have become interested in neuromorphic chips. If systems like ChatGPT have caused so much horror and admiration, then what will happen when neural networks on neuromorphic chips appear? I'd like to know what you think about this topic.

2) What do you think about the "forward-forward" algorithm?

3) What do you think about the possible appearance of a "useless class"? People will always be needed, but in the near future probably only certain ones--which ones, do you think?

author

Hey Tim, yes, I've heard of them--an interesting approach to AI! I think that, for now, neuromorphic chips will remain a niche area of research. They're the middle point between current silicon and the brain, and although promising in some aspects, many people in AI dismiss the idea that we should get closer to how the brain works or how it's been "designed" by evolution.

2. I've skimmed Hinton's paper but don't know enough about the algorithm itself to comment on it. But just because it's Hinton's idea, I think it's worth considering! The DL pioneers seem to be thinking very much out of the box lately--LeCun with his proposal for how to build HLAI, and Bengio with System 1/System 2...

3. I hadn't heard of the concept before... If you're referring to people whose work will be completely replaced by AI/robots, I think UBI solves the financial aspect. We'll have to find ways to solve the "meaning" aspect beyond working for a salary. The ones I'd say won't be replaced are those whose work necessarily requires a human component, e.g., politicians, leaders, nurses and doctors, therapists, teachers, educators, and caretakers... Robots may play a role but not replace them. I don't see a hard physical limit that could prevent robots from dominating, given enough time, all the other professions.


I also think the solution is UBI, because I can't think of another one. But I have two questions:

1) Let's say that in all countries the unemployment rate is at least 20%. Will the state really give all those people UBI, knowing that they will most likely never work again and will instead remain eternal parasites? Let's say a rich man owns a fully automated factory, but most of its output goes to people on UBI. In your opinion, how would an economy work in which most production is automated and consumers have no work, and therefore no savings, to make purchases with?

2) Is it possible that, with the further development of AI, the world will become more socialist, with your life completely dependent on the state?

I'm sorry if it's unclear, I just couldn't figure out how to formulate it correctly.


When will ChatGPT be able to do my laundry?

author

Lol, I'd use it gladly!


Just kidding. I’m curious to know if you read science fiction or fiction in general and what your favorite books are—fiction or non-fiction?

Maybe you’ve covered this in prior posts before I joined, not sure, but always curious about what books writers are reading and loving.

author

Great question!

My favorite sci-fi is Asimov's Foundation trilogy. And I want to start with the Culture series! (I'm reading Sanderson right now). I'd also like to go more in-depth with the classics.

Non-fiction, I love to read psychology/sociology/anthropology and philosophy. I'm currently reading like 10 books at the same time...which may not be the best approach lol

On the AI side, I recommend Gary Marcus's Rebooting AI, which I read recently (although I've covered the ideas in the book a lot in my articles).

Any recommendations are welcome!


Awesome! I actually wrote about one of the Culture Series books, Surface Detail, here — https://open.substack.com/pub/charlottedune/p/meta-hell-and-vr-underworlds?r=8gb2e&utm_medium=ios&utm_campaign=post

I personally love the Three-Body Problem series and also everything by Philip K. Dick or Ursula K. Le Guin.

But I’m currently reading Demons by Dostoevsky, which is disturbingly relevant to present day.

Plus I’ve been reading the Bible all the way through for like a year, but just for 10 min a day— as a literary pursuit.

Also reading Stolen Focus by Johann Hari, and Rick Rubin’s new book. (Stolen Focus I highly recommend; it has a bit on AI.)

I usually have three books on the go— one audio, one non-fiction, and one novel.

Feb 3, 2023 · Liked by Alberto Romero

What is the biggest misconception about AI?

author

Interesting question! I'd say there are *a lot*.

If you ask about AI in general, I'd say the very concept of "AI" is the biggest. Most people associate it with pop-culture references like The Terminator or 2001: A Space Odyssey, but it's nothing of the sort. The term AI was chosen back in 1956 partly as a means to distance the field from the influence of Norbert Wiener, the father of cybernetics. It was more a political move than a scientific one.

Current "AI" systems aren't AI (as the concept is commonly understood) and are nowhere close to being. That's why people have invented other terms like AGI, HLAI, strong AI, true AI...


What are some mind-blowing capabilities you think a multimodal GPT-4 will open up? How quickly/competently do you see Google responding to OpenAI/Microsoft with their own products?

author

Imagine if you could do with any modality what ChatGPT allows you to do with text. The possibilities are vast. I want to explore the multimodality branch of AI research further and write a longer post on it. An example: prompt the model, ask it to come up with a story, make it draw the characters and the setting, and then animate it into a short.

I think Google will react (is reacting) fast and will reconsider its values and principles. Survival is a priority for the company. Whether they'll succeed is yet to be seen. But I don't think OpenAI can really be a match for Google if the search giant doesn't want it to be. Google's recent partnership with Anthropic is a hint of what's coming.

Feb 3, 2023 · edited Feb 4, 2023 · Liked by Alberto Romero

Hi Alberto,

Thanks for your work. It's really helpful because it's well informed, well sourced and nuanced. I also follow G. Marcus on Substack.

I'm a French law librarian and researcher, a lawyer by education, and I monitor developments in the use of AI (machine learning, LLMs, expert systems) for law.

I wonder if you have feedback and info from the legal community on ChatGPT, GPT, and the like.

My two questions are:

- Do you think generative AI's hallucinations can be reduced to a point where they won't matter to professionals? In law, that's a key question.

- Are there any signs that LLMs will be able to reason? Legal reasoning is done with words, I know, so in theory do you think it could be done?

My (long) post (in French) on the subject is here (and on the ChatGPT matter it owes you some): https://www.precisement.org/blog/Intelligence-artificielle-en-droit-derriere-la-hype-la-realite.html

author

Appreciate it a lot :)

I don't have that feedback, so I can only answer from what I think!

1. Important question. As they stand now, language models won't stop hallucinating. That would require a radical change in how they're built and trained. You can try to reduce unreliability with patches like RLHF, but it won't get the models to a point that's safe enough from the perspective of areas like law or healthcare. These systems hallucinate because they're trained to generate bullshit in the sense that they *don't care* about truth. Truthfulness is simply not a variable that ChatGPT considers (RLHF helps but isn't enough). Depending on the purpose of a communication, truthfulness may play no role at all--and AI systems simply have no purpose for what they output, so truthfulness would have to be artificially added (it isn't). The sketch at the end of this comment makes this concrete.

2. Making AI systems that reason is hard. Harder, I'd say, than making them truthful. People are working on it, but it's not going to happen soon.
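
To illustrate what I mean by "truthfulness is not a variable," here's a minimal, illustrative sketch of a standard language-model training step (PyTorch-style; the model, data, and optimizer are generic stand-ins, not any specific system's code):

```python
import torch.nn.functional as F

def lm_training_step(model, batch, optimizer):
    """One gradient step of next-token prediction.

    batch: tensor of token ids, shape (batch_size, seq_len).
    """
    inputs, targets = batch[:, :-1], batch[:, 1:]
    logits = model(inputs)  # (batch_size, seq_len - 1, vocab_size)
    # The loss only measures how well the model predicts the next token
    # that appeared in the training text.
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
    # Note what's absent: no term anywhere compares the output against
    # facts. Truth never enters the objective; RLHF later adds a human
    # preference signal on top, but the base loss stays purely statistical.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```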


Hello Alberto. I'm testing an AI interface with the GPT-4 API and a quality professional database behind it, and I can assure you that by retraining and restricting the thing on this database, it does not hallucinate. It seems the system checks its answers itself before displaying them.
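
Roughly, the pattern looks like this (a simplified sketch, not our actual code; the toy document list and naive keyword search stand in for the real professional database and retrieval layer):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy stand-in for the professional database; a real system would use a
# proper keyword or vector index over the vetted documents.
DOCUMENTS = [
    "Article 12: notice periods for commercial leases are three months.",
    "Article 7: consumer contracts may be cancelled within 14 days.",
]

def search_documents(query: str, top_k: int = 5) -> list[str]:
    # Naive keyword overlap, just to keep the sketch self-contained.
    words = set(query.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:top_k]

def answer_from_database(question: str) -> str:
    # Ground the model on retrieved passages instead of its free recall.
    context = "\n\n".join(search_documents(question))
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # minimize creative drift
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the provided context. "
                        "If the context is insufficient, say you don't know."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```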


Hi. Sorry for being late to this AMA, but I have a few questions.

1. With regards to building a foundational understanding of AI, how crucial is it to have a good knowledge of statistics?

2. With regards to the many generative AI applications: do you think these teams train their models from scratch, or do they start with some pre-trained model?

3. Lastly, with regards to combining ChatGPT (or LLMs in general) with search algorithms: what approach do you think Google and Microsoft are taking? Do they create new models that incorporate both LLMs and search algorithms? And in general, does mixing the two affect the accuracy and relevancy of the search results?


Greg, as I understand it, nothing stops you from using online sources, but there are likely guidelines on how you give credit to the sources you used. There's also a really important rule: you're always responsible for what is said and for the quality/accuracy of what you write under your name as author. So keep that in mind with any sources you use, to make sure you are as accurate as possible! Don't ever count on a single source to always be correct.


A.I. can help find truth. That saves energy. Test internal reality against A.I. statements of fact. Great new future for machine truth.


What are the practical implications of the difference between using ChatGPT as tool vs playing with it as a partner?

author

Hey George, what do you mean by "playing with it as a partner"?

Feb 6, 2023 · edited Feb 6, 2023

Thank you for asking. A tool is an inanimate object. I just asked ChatGPT what it "thinks" about being a partner.

Me:

What can AI be as a partner, as opposed to as a mere tool, in a collaborative sensing, meaning-making and choice-making process?

ChatGPT:

AI can be a partner in a collaborative sensing, meaning-making and choice-making process by incorporating human values, emotions, and context into its decision-making process. It can also provide insights and options to humans based on its vast data analysis and pattern recognition capabilities. However, it's important to note that AI should not replace human decision-making, but rather enhance and support it, while being transparent and accountable in its actions. Ultimately, the goal should be a balanced partnership where AI and humans work together to achieve a common goal.


What sort of AI model is best suited to track and predict human behaviour?

author

This question is too general, really! I don't think any AI model (or any other kind of model) can predict human behavior. I'd say human behavior is one of those processes that are computationally irreducible, i.e., you can't predict it with a model simpler than the thing itself. The toy sketch below illustrates the idea.
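
For intuition, here's a minimal sketch of Wolfram's rule 30 cellular automaton, the textbook example of (conjectured) computational irreducibility: as far as anyone knows, the only way to learn the state at step n is to simulate all n steps.

```python
def rule30_step(cells: list[int]) -> list[int]:
    """One update of the rule 30 automaton (wrap-around edges)."""
    n = len(cells)
    # Rule 30: new cell = left XOR (center OR right)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

state = [0] * 31
state[15] = 1  # single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in state))
    state = rule30_step(state)  # no shortcut: simulate step by step
```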


How do (or will) ChatGPT and its like affect machine translation? Any hope for them as editors?


How will universities combat the use of ChatGPT?


Greg - Another consideration is that AI technology with chat interfaces will become widespread. Google and Microsoft are likely to be adding these technologies to their consumer-facing services and products very soon. So all education efforts will need to adapt to these technologies being widely available.


Yes, but if I'm a student, what stops me from having ChatGPT write my papers?


Hello Greg. Isn't the real question rather: how will they adapt?
