Great article, thanks! I was surprised because I had read quite a number of articles on ChatGPT, but all of them so far failed to see the challenge that large, foundational language models pose to Google.
But this points to another challenge for these models that I have seen addressed frighteningly little so far: advertising!
How did Google become big? (You just mentioned that "they have the budget," but not why.) Not by serving the best search results, but by understanding how to sell advertising injected into search results and monetize it really well.
I am sure that, behind closed doors, more than one big internet or advertising company (or both) is trying to figure out how to inject advertising into model responses in a way that makes some sense in the context of the given prompt.
Whoever gets there first has a good chance of creating the next de facto monopoly, with the snowball effect of incoming advertising cash fueling quicker scaling and better results (just as Google snowballed into becoming the internet search standard).
And that gives me the creeps more than fears of AGI do for the moment. As so many examples have shown, it will become increasingly difficult not only to distinguish between "true" reality and "model" reality, but there will be the additional question of how much of the "model" reality is pure, disinterested model output, and how much serves the self-serving interest of the model builder to monetize my need for answers to my questions.
There will certainly be no clear distinction built in from the start (like the ability to distinguish between paid and organic search results in Google), because it is against the very interest of those building the technology. And truth be told, "lazy" consumer behaviour will also play a role: a "differentiated" output may be less comfortable to interpret than a seemingly single, consistent answer, one that does not challenge the illusion of trustworthiness by creating cognitive noise or hinting at the kind of manipulation that has intervened in the background to produce it.