This is like watching an implosion occur in real time.
The fact that the average person can't see its use or value is exactly what was expected all along. It's there to replace average. It's the same as when mass production replaced the artisans, or when assembly jobs were shipped overseas and the service industry replaced the manual jobs.
You have to be a creative to realize its value. You have to be an ADHDer, a big-picture creative. Then it's the greatest gift you have ever been given.
Agreed. You need to know your craft first, just like with other tools, to know how to get the most of it. It's common sense.
Generative AI is like Donald Trump - a great attention-getter, even though it hallucinates.
ChatGPT says it cannot cite any of the data on which it was trained. Apparently, its developers didn't feel the need to credit the original sources of information or otherwise prove what it is saying is true. Since citations of purloined source material could be used in copyright infringement cases, I suspect its inability is a feature, not a bug.
ChatGPT wasn't coded like a traditional computer program, so it's not easy for the devs to make it cite sources appropriately. That doesn't change the fact that it should, of course.
It's no surprise! We have seen the AI hype story several times in the last 50 years, and it is no different this time.
Overall, the AI hype got ahead of reality, and a half-baked product was sold as the best thing since sliced bread.
This is something I keep trying to tell people: "Writers and amateurs find ChatGPT mediocre at best (they remain hopeful of future versions, but I have my doubts; averageness is imbued into GPT’s design)." It's not generative, it's derivative.
The reality is starting to settle: generative models are one set of tools among many, capable of accomplishing specific tasks within specific constraints.
That said, my experience is that the infrastructure needed to get the most out of genAI is not yet ready at most organizations. For now, we can only focus on the low-hanging fruit. In a few years, we might expect more fundamental changes (though probably nothing magical). Change will come slowly but surely.
Agreed. AI only needs us to temper our outsized expectations. I believe we're on the cusp of the field's golden age. I'm writing a long post on this, taking a more optimistic stance.
Btw, love your work.
Thanks. Likewise, great work you're doing with TAB!
Thanks for a rational perspective.
The Pew Research survey is illuminating indeed. Thanks for sharing. And we should keep in mind that responses to the question are from the early adopters who actually paid (presumably) to use the most advanced models. In reference to the technology/innovation adoption model, if the innovators/early adopters are themselves feeling somewhat underwhelmed, it makes me wonder if later adopters (early majority) will be enticed to bother. Time will tell.
Aside from dealing with hallucinations, I feel like AI is currently in a stagnant phase, where companies are pushing as many AI products as possible to the market. It’s no longer making an impact, and the ultimate goal is becoming harder to grasp. I hope we’ll soon be able to explore AI’s true potential through AGI, though admittedly, that might be a bit too optimistic on my part.
Is GenAI much different from some of our own friends, family, and colleagues? Thinking back to when I was first a people manager working with a junior team, I would always have to check over their work. Was it accurate? Was the grammar OK? What about the tone? I do the same now with any output from a GPT. The advantage now, though, is that I can go back and get it to iterate almost instantly. I can take its work, tweak it, correct it, soften it, make it more persuasive without potentially upsetting someone or explaining to them why I've made edits. I also know some people who tend to exaggerate, or go along with an answer because it's easier, or spout out statements with zero fact-checking.
It seems we're holding GenAI to much higher standards than we do humans, and that's arguably correct given the scale and the potential implications, but as long as we continue to treat it as a junior colleague or a carefree friend, we'll be in a better place.
Outdated (and tired) argument. See this: https://open.substack.com/pub/aiguide/p/can-large-language-models-reason or this: https://www.thealgorithmicbridge.com/p/you-are-killing-your-greatest-ideas-5ff
Thanks. My point isn't an argument so much as agreement with you: AI isn't a magic wand, and it's quite far from being one. But is it still useful? I'd say yes, with caveats. It's a tool, and people need to know its pros and cons. The challenge I see in enterprise business is that there's pressure to use it, but lots of governance still to get in place, too few real use cases, and a general lack of vision as to how it fits into existing workflows (at scale).
But you're comparing ChatGPT with people. That's what those links are for. On the rest, we agree, yes.