4 Comments

Very interesting, thanks; I appreciate the education on this topic. It's all news to me. I tried to sign up for the demo, but when they demanded my phone number I backed out. Still curious, though, so I may give in.

You write...

"But, what will happen if, despite numerous attempts to make an AI model break character, beat its filters, and make it drop its facade of reasoning, people fail to do so?"

Shouldn't we replace the word "if" with the word "when"? That quibble aside, this is the kind of question that interests me.

1) Where is all this going?

2) Do we want to go there?

3) If not, what can we do about it?

There's a lot of speculation about where this is going, but so far I don't see much discussion of #2 and #3. What do you see?

---------

Here's a related topic that perhaps you could comment on.

I have some software (CrazyTalk) which allows you to animate a face photo with an audio file. The end product is a video of a face photo that talks. It's not interactive; you can't have a conversation, only create a video of a face that speaks your script.

Is anybody combining GPT with animated face photos? You know, so instead of a purely text conversation, you could interactively converse with an animated photo of a human?

If such a service were available, it seems it would considerably deepen the illusion of consciousness, at least for those of us in the general public.
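In case it helps, here is a rough sketch of how the plumbing might look. The helpers (get_chat_reply, synthesize_speech, animate_face) are hypothetical placeholders, not real APIs; they stand in for whatever chat model, text-to-speech engine, and face-animation tool (such as CrazyTalk) one would actually wire together.

```python
# Hypothetical sketch of a conversational talking-head pipeline.
# Each helper stands in for a real service: a chat model, a
# text-to-speech engine, and a face-animation tool.

def get_chat_reply(history, user_message):
    """Placeholder: send the conversation to a chat model and return its reply text."""
    raise NotImplementedError

def synthesize_speech(reply_text):
    """Placeholder: turn the reply text into an audio file and return its path."""
    raise NotImplementedError

def animate_face(photo_path, audio_path):
    """Placeholder: lip-sync the face photo to the audio and return a video path."""
    raise NotImplementedError

def converse(photo_path):
    """Loop: user types a message, the photo 'speaks' the model's reply."""
    history = []
    while True:
        user_message = input("You: ")
        reply = get_chat_reply(history, user_message)
        history.append((user_message, reply))
        video = animate_face(photo_path, synthesize_speech(reply))
        print(f"Play {video} to watch the photo speak the reply.")
```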


One of the best article titles I've seen in 2023.


I can't see ChatGPT killing off Google Search, but it should wake Google out of its decades-long slumber. (Evidently it has, as it has been given some sort of "Code Red" status within Google.) I've always felt that Google could make search much better but chooses not to, either because it doesn't want search to consume more computing resources or because it doesn't want people to get their answers too quickly, before they've had a chance to look at some ads. If Google is scared by the thought of some other company doing better, it may have to adjust the balance point between performance and profit.

Using ChatGPT, as currently configured, as a search engine isn't compelling. I don't want my search results in the form of a story. If I want an essay on some subject, I'll go to Wikipedia or some other appropriate source. ChatGPT makes way too many mistakes for me to trust it on facts.

The test for me is whether I can search for a restaurant's website using it. Sometimes I am not interested in its Yelp page or its appearance in someone's scathing review. It seems to me that this is easily within Google's grasp, but they just don't want to make things this easy. AI could help with this kind of thing, and it doesn't even have to always be right. Such features would have very little to do with ChatGPT, though.


Great article, thanks! I was surprised insofar as I had read quite a number of articles on ChatGPT, but all of them up to now failed to see the challenge that large, foundational language models pose to Google.

But this translates into another challenge for these models that I have seen addressed frighteningly little so far: advertising!

How did Google become big? (You just mentioned that "they have the budget", but not why.) Not by serving the best search results, but by understanding how to sell advertising injected into search results and monetize it really well.

I am sure that, behind closed doors, there is more than one big internet or advertising company (or both) trying to figure out how to inject advertising into model responses in a way that makes some sense in the context of the given prompt.

Whoever gets there first has a good chance of creating the next de facto monopoly, with the snowballing effect of incoming advertising cash fueling quicker scaling and better results (just as Google snowballed into becoming the internet search standard).

And that gives me the creeps more than fears of GAI do for the moment. As so many examples have shown, it will become increasingly difficult to distinguish between "true" reality and "model" reality, and on top of that there will be the question of how much of the "model" reality is pure, disinterested model output and how much serves the model builder's self-interest in monetizing my need to get answers to questions.

There will certainly be no clear distinction built in from the start (like the ability to distinguish between paid and organic search results in Google), because it goes against the very interest of those building the technology. And, truth be told, "lazy" consumer behaviour will also play a role: a "differentiated" output may be less comfortable to interpret than a seemingly single, consistent answer that does not challenge the illusion of trustworthiness by creating cognitive noise or hinting at the kind of manipulation that has intervened in the background to produce it.
