It’s a mystery to me why people wouldn’t want to embrace very helpful AI tools. It’s like adding a second brain with enhanced capacities. While it’s true that these second brains, in whatever products you’re trying to embrace, can on occasion seem a little like an idiot savant, the capacity to harness these tools and incorporate them into your skill sets and refined thinking methods truly amplifies the individual’s power and capabilities. Why people wouldn’t want that is quite simply beyond me.
It’s a bit like dissing eyeglasses. They’ll never catch on. They’re a fad.
For example, just this morning I got an offer from my credit card company urging me to take advantage of some “fantastic offer”. It was a classic case of “the BIG print giveth, and the small print taketh away”. The front page was all ballyhoo about the offer, while the back page was filled to the brim with teeny tiny fine print. Lacking the motivation (or competence) to wade through it, all I had to do was take a picture of the front and back of the offer and feed it to GPT-4o along with the question: what are the pros and cons of this offer, and please analyze the small print in detail and tell me about any overlooked consequences that might be detrimental. I didn’t even have to type that; I just spoke it, like I’m speaking now. I got my answer in about 2 1/2 seconds, and it looked like a pretty raw deal. So into the trash it went.
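(The workflow is simple enough that you could even script it. Below is a minimal sketch using the OpenAI Python SDK; the file names and prompt wording are assumptions for illustration, since the commenter used the ChatGPT app rather than the API, and the exact message shape may vary across SDK versions.)

```python
# Sketch: send two photos of a mailer plus a question to a vision model.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Read a local photo and return it as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()


question = (
    "What are the pros and cons of this offer? Analyze the small print "
    "in detail and flag any overlooked consequences that might be detrimental."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            # "offer_front.jpg" / "offer_back.jpg" are hypothetical file names.
            {"type": "image_url", "image_url": {"url": encode_image("offer_front.jpg")}},
            {"type": "image_url", "image_url": {"url": encode_image("offer_back.jpg")}},
        ],
    }],
)
print(response.choices[0].message.content)
```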
In this respect, this one small inquiry saved me a tiny financial paper cut. The whole thing took about three minutes and it was even kind of fun (correction: it was entirely fun). It empowered me and, just as importantly, it disempowered the credit card company. So again, who wouldn’t want to use tools like this to stop the bleeding from all these little paper cuts? I probably use AI 10 times a day for any number of tasks, and it just makes life hands-down better! People have to start to understand that. They need to do more basic and grounded thinking, and less irrational emoting over “technology”.
Otherwise it’s like hating the airplane because of a few crashes, or because some have used one as a weapon. If we abolished the airplane tomorrow out of a fixation on negative emotions, there would justifiably be more than a tremendous outcry. Turns out a lot of good comes from the airplane after all. So yeah, time to shift focus on AI now. Turns out the glass is actually pretty full after all.
AI is going to have to become as accepted as eyeglasses, for very pragmatic reasons. Perhaps eyeglasses were once regarded as supernatural tools of the devil. I’d call that a transitory phase. Nothing gets people to understand eyeglasses quite like trying them on.
I think the big problem is that for every small convenience like this, AI presents a heap of downsides. Cool, you parsed fine print faster. But we just passed the point where more than half the content on the internet is AI generated, making the internet less useful. An AI-generated recipe led me to undercook chicken, but hey, at least I didn’t have to use a cookbook. It offers minor conveniences in exchange for massive ethical problems: undetectable fake videos, people forming unhealthy relationships with chatbots, the ability to generate spam messages in seconds, and intelligence operations weaponizing it to control the discourse online. The invention of glasses didn’t come with these kinds of downsides. None of this seems worth the very limited benefits of text and image generation, which typically take so much tweaking I might as well have done it myself.
I think there are certainly some applications like medical research where it has its benefits but the public rollout as a free tool seems to be leading to a lot of risk with little reward.
..."the capacity to Harness these tools and to incorporate them into your skill sets and refined thinking methods truly amplifies the individuals power and capabilities." I think this is one of the key problems with "AI". By turning our thinking over to it, we diminish ourselves. We lose the ability to read carefully and closely. This is an especially big problem for children. If they never have to read carefully and think hard with artificial help, they will not develop. Using your example, people need to be able to think for themselves and identify biased messages or scams. Add all this to the fact that "AI" results are often biased or incorrect, I think we all need to slow down a bit and not accept AI so readily. Hopefully this doesn't sound too aggressive - just stating my case.
I'm a marketer and I freaking love this story. Fantastic use of the tech. So many people in my profession are charlatans. It makes it hard for those of us who care about getting it right to be successful. I'm so ready for lazy and dishonest "growth" people to be shown the door.
Well George, I think you got that significantly wrong. Something very much unlike parsing junk mail is passing a rather hard technical exam in 30 minutes - an exam that usually takes six hours to complete. I created a GPT in about 40 minutes to do just that for a corporate exam. Then it took me only about 30 minutes to finish an exam that, in my past experience, normally takes six hours. The custom GPT I created passed the exam with a score of 100%. It was nice to have about 5 extra hours on my hands as a professional pilot. I’m pretty sure it’s not as trivial as you’re making out. Here are a few basic examples:
EXAM Questions and Answers for OPSPEC B029 Pilot Training (CBT 24.3)
All questions answered correctly by OPSPEC B029 Pilot Training Custom GPT

https://chatgpt.com/g/g-39mH9plsD-opspec-b029-pilot-training

Question:
Method 2 drift down allows the airplane to continue flight from cruising altitude to an airport where landing can be made under 121.197 (landing limitations: alternate airports), the airplane must clear all terrain and obstructions within five statute miles of the intended track by at least ______ feet vertically and with a positive slope at ________ feet above the airport, where the airplane lands after an engine fails. Please fill in the blanks.
Answer:
Under Method 2 drift down, the airplane must clear all terrain and obstructions within five statute miles of the intended track by at least 2,000 feet vertically and with a positive slope at 1,500 feet above the airport where the airplane lands after an engine fails.
Question:
What is the distance from the beginning of the leg (JAGIT) in nautical miles, and at which point is the escape route below safe to use? JAGIT/YXY MORA=189 DRIFTDOWN OFFSET 050NM NCA13 YXY Q969 GOROV GOROV2 CYXY B/O 32103 DIST 329NM TIME 01:10
Answer:
The distance from the beginning of the leg (JAGIT) in nautical miles, at which point the escape route is safe to use, is 50 nautical miles. This is indicated by the "DRIFTDOWN OFFSET 050NM" in the provided route details.
Anyway, you get the idea. Or perhaps you don’t. These are tools that make a professional’s life easier. You’d best start getting used to the idea that they are only going to spread and get better. They’re already easier and more practical to use. Otherwise it’d be like trying to argue that snail mail is every bit as good as email. That’d be a hard sell. But I’m just the messenger here. So how does that expression go about messengers? Yeah, don’t shoot.

Wait, you cheated on your pilot exam? Please let me know what airline and I’ll never use them again.

See the reply below for clarity. Rest easy. It’s not what you think.

Thanks, I'll still pass.
Is this a kind of satire? Using ChatGPT to pass a pilot's exam is a net good to society because of the five hours the pilot can spend doing something else?
You cheating on your pilot exam does not make me feel safe. What if a heart surgeon cheated all the way through school? Maybe you think it's just one test, so no big deal. I would think you ought to know everything.
I understand your point of view, George, and I do take your point. I think it really depends on the nature of the “exam” in question and what its purpose is. I’m an older guy, and I can remember a time as a kid when the use of calculators during an exam was looked at as cheating. After all, aren’t you gonna have to be able to scratch that math out on paper if your calculator breaks? That became a quaint argument, because while a student certainly should have learned his multiplication and division, the real point was usually to get through the cognitive chaff so you could demonstrate a deeper understanding of the subject, not the brute force of solving it. You can use a pick and shovel if you want to, but a backhoe is much more practical for moving through the labor.

Of course anyone, whether an academic or a professional, needs to build their foundational knowledge by immersing themselves in content. In the pilot world, at an airline, that can often mean familiarizing yourself with about 20,000 pages of stuff. Yet there are times when we need to focus more on the nature of the subject, whether with a human or an AI, so we can suss out the nuances of an issue. And in the professional world, I think there’s a lot to be said for being able to save time and get to the point. I expect this is also true in a variety of fields, such as getting ready for the bar exam. I’m pretty sure you’re not allowed to use AI to take the bar exam (although it’s been demonstrated to pass it on its own). Where AI would shine, however, is in preparing for the bar exam. That would give a person tremendous advantages in understanding the subject, nitpicking ideas apart to a high degree until you have a full understanding.

Whenever you go to a law office and see those vast bookshelves of mind-numbing volumes behind an attorney’s desk, you can be pretty sure that guy or gal has not read all those books. But he does need to know where to go to find the stuff that’s relevant to his legal position. In fact, there can be a lot of different vantage points on a single subject scattered across those volumes. So the real test of these tools is whether they can bring that all together. Humans would certainly struggle to find all the disparate concepts that have relevance, and very often there is no one right answer. Yet this is actually another strength of the AI: in a matter of seconds it can discern and present multiple directions and considerations about a particular issue. A highly brilliant legal mind that’s been at work for 40 years might be able to synthesize some of that, but the real advantage is that essentially anyone can collect all those vantage points and size them up on their own merits. Remember, this is a tool that people use to discern the best approach. So for professionals, it’s really not about passing some exam for a score. Even in the OPSPEC exam I cited, it was more about sussing out the approach to the information so it can be discussed.
It’s true there are often definitive right or wrong answers, and any professional has to be able to demonstrate why an answer is exactly right or exactly wrong. Yet another edge of the AI is that it can rapidly explain how it got a correct answer and precisely what the source material was. The nice thing about that is, if you want more detail, you can go even deeper into the onion: simply ask more probing questions and it will take you on a journey of exploring those answers in greater detail. I do take your point about rote memory items and being restricted in certain settings. But in a larger arc, this all allows people to build competence and understanding around any subject or question, because the level of clarity is also quite good. If something isn’t really clear, you just ask for more clarification.
We’ve all had lectures from really bad teachers that were not only poorly delivered but entirely confusing, in which the presentation actually created negative learning (and bred apathy). They didn’t really help us understand or think through an idea much at all. Using the AI gives us the new ability not just to get the right answer (although at 3 o’clock in the morning, for a pilot, that’s a huge plus) but also to sort out anything we don’t quite understand.
I also want to anticipate another thought you might be having, which is about hallucinations giving us bad info. Yes, that capacity certainly does exist, and it’s a bit thorny. However, when you’re dealing with professional subject matter, there is a really good remedy: absolutely restrict the AI (or, in many cases, a customized GPT) to a knowledge base which encompasses only the professional material involved and nothing else. That way, the AI is only swimming in your personal data lake, as opposed to an entire ocean of content which might produce more irrelevant and error-prone answers. If you constrain the AI to examining only those, say, 20,000 pages of company content, then it distills all of its reasoning from that knowledge base alone. That’s what I did with the OPSPEC GPT. It was restricted to about 3,000 pages of content and didn’t go outside that subject. So you can design tools to give the AI tremendous focus, and that way it doesn’t lead you astray with hallucinations about other crap.
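(For what it's worth, the restriction pattern described here - retrieve passages only from a fixed document set, then tell the model to answer only from what was retrieved - can be sketched in a few lines of Python. Everything below is illustrative, not how custom GPTs are actually implemented internally: `call_llm` is a hypothetical placeholder for whatever model API you use, and the lexical scoring is a crude stand-in for real embedding search.)

```python
# Minimal sketch of a knowledge-base-restricted assistant.
import math
import re
from collections import Counter


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-model API call.
    Swap in your provider's client here."""
    raise NotImplementedError


def chunk(text: str, size: int = 200) -> list[str]:
    """Split the source material into overlapping word-level chunks."""
    words = text.split()
    step = size // 2
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words), 1), step)]


def score(query: str, passage: str) -> float:
    """Crude lexical-overlap score; a stand-in for embedding similarity."""
    q = Counter(re.findall(r"\w+", query.lower()))
    p = Counter(re.findall(r"\w+", passage.lower()))
    overlap = sum(min(q[w], p[w]) for w in q)
    return overlap / math.sqrt(len(passage.split()) + 1)


def answer_from_manuals(question: str, manuals: str, top_k: int = 3) -> str:
    """Answer a question using ONLY the supplied manuals as context."""
    passages = chunk(manuals)
    best = sorted(passages, key=lambda p: score(question, p), reverse=True)[:top_k]
    prompt = (
        "Answer ONLY from the excerpts below. If they do not contain the "
        "answer, reply 'Not covered in the provided material.'\n\n"
        + "\n---\n".join(best)
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The key design choice is the instruction in the prompt: the model is told to refuse when the retrieved excerpts don't cover the question, which is what keeps it swimming in the "personal data lake" rather than the open ocean.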
In fact I’ve created a whole business around this:

https://www.customgptsolutions.ai/

It’s always nice chatting with you George. Have a great day.
"That would give a person tremendous advantages in understanding the subject when you nitpick ideas apart to a high degree until you have a full understanding." I think the danger is that people are relying on "AI" to interpret and understand material. Reading complex texts is hard, but it's good to figure things out.
I just retired as an English teacher. In the past two years, I never allowed my students to use AI and made them write by hand. Some may call me a luddite, but so what? Plus, all the AP exams high school students take have always had hand-written essays.

I think you’ll find this demonstration really interesting. I’d love to hear your thoughts on it, George:

OpenAI’s New AI: Being Smart Is Overrated!

https://youtu.be/qt-B2cg0pCM?si=d-4OgIJFuUBfMrGu
There are a large number of statements you make in defense of tech that would need much better support to be asserted as axiomatic, as you have done.
That said, the fundamental error is equating technology with progress. Technology is value-neutral - it can enable healers and tyrants alike, and it does not, in itself, constitute improvement.
I am also a tech person, though I studied CS and English Literature at University. I used to be 100% pro-technology until I realized that technology was value neutral, and the more advanced the tech, the more advanced the civilization needed to be to handle it.
Yes, we feed more people, but we also irredeemably deplete soil and food stocks at a faster rate. Our political system cannot accommodate the thinking required for sound judgment, so the technology will likely end poorly.
Perhaps you will grow this way over time. Or perhaps I am wrong.
Thanks for the respectful counter. I agree that part would require a better defense, but I never expected that part to be the debatable one. I agree that as tech becomes more advanced, we'd need to become more advanced as well (and we aren't). I disagree that we deplete soil and food stocks at a faster rate *because* of technology. That makes no sense to me. Our techniques for growing food become much more efficient thanks to technology. Just as we will soon be able to stop using fossil fuels entirely thanks to renewable energy. Medicine is an obvious example. I agree not all technology is good. I've written extensively about that. But as an activity that humanity does, it's hard to argue against it, really.

Great response, and thank you.
We have to challenge our assumptions. We think we are feeding people better because we can produce nitrogen at scale instead of scraping bird shit off of Pacific reefs. But we fail to understand that organic life is complex and interdependent, and yields poorly to engineering.
Yes we can juice yields, but we kill adjacent systems, deplete water systems, and rob the soil of other minerals and nutrients, resulting in lower quality food and sterilized land.
We need to understand that generative AI is a compression algorithm, and by definition a lossy one. It cannot contain the world, because the only lossless map of the world is the world itself. A compression of a compression of a compression will always enshittify, especially when we forget we are compressing.
This is the same reason grand political schemes fail. They compress a complex system into simple armatures and algorithms, then forget they are a compression. Over time the system is so divorced from the world that it is held together by habit and force, not utility.
In any case, this is a super fun topic and I thank you for the opportunity to discuss!!!
Yeah, I certainly agree that any worthy debate about technology as a whole is way more complex than what I showed here. I recently read Pinker's Enlightenment Now, and it influenced my thinking on this a lot. But I've had other influences that sent me in quite different directions, so again, I agree it's a complex topic.
I just saw this. I would strongly recommend that you read "Seeing Like a State" by James Scott.
Pinker is very smart, but extremely reductive. He does not see the world as fundamentally complex, preferring simple reduction to dependent variables. There is value in that, but it is quite limited.

Yes, he seems disconnected from humanity.
The only thing I'd add to this is that many people see GenAI as a tool that's taking jobs from humans, making more work for the people who have to use it (not less, no matter what their managers may think), and doing an inferior job in almost all cases. That's good reason to be deeply resentful.
Marc Watkins just wrote a post about the "Friend" device and what it might say about how deeply pathological our relationship to these technologies is. GenAI definitely has a public image problem if they think this teaser would appeal to literally anyone with a shred of humanity left: https://www.youtube.com/watch?v=O_Q1hoEhfk4
Yeah, I linked to the "AI friend" in the sentence "they don't know how to read the room." How does anyone think it's a good idea to publish that video as a *non-skit* teaser for a new AI device? They're so deep in their SFO bubble, it's ridiculous.
Alberto, consider that you're not the target market for these products. As far as business goes, companies like Character.ai are killing it. They get about 200-300 million visitors per month, and the average session time is a crazy 33 minutes. In other words, it makes financial sense, so someone will do it. There are enough lonely people to make this a viable product.
I believe the real questions go deeper than a knee-jerk reaction. AI is here, and over the next year GPUs will get 100x better, meaning that AI will become fast, very cheap, and much higher quality.
It will become the ultimate tool. And here's the rub: tools aren't value-neutral. They transform the way we perceive and interact with the world. They change our relationships. Just consider social media's impact on our culture - AI will make that look like child's play.
Knowing that it's here, and that there are huge incentives to be a winner in AI, tech companies and countries are willing to risk it all. Safety, morality, and ethics are used to justify their positions.
Now, seeing the writing on the wall, I believe the best choice is to explore use cases that are good for the individual and for society - to create AI systems that offer a positive choice and bring out the best in us.
I mean: https://blog.character.ai/our-next-phase-of-growth/. Character.ai founder (and genius mind behind the project) Noam Shazeer is back at Google. So perhaps they're not "killing it". People (teens) are using it? Yeah. But it costs a fuckton of money to run. It's just too expensive, even with the advances in postprocessing and quantization and whatever. The only way to make this a viable product is if there are enough customers *and the company is profitable.* Generative AI is mostly not profitable and, for all we know, may never be. The links to the data are in the article. GPUs have a physical limit to how much better they can get. If Nvidia keeps showing off those amazing numbers, it's because they're trading off other things. For instance, the new chips are much less precise (which doesn't matter for AI, okay), but still - it makes the improvement look much larger than it is. Also: "It will become the ultimate tool"? I know where you're coming from, but that kind of assertion really needs much stronger evidence than you could find anywhere online. It's yet to be proven that it can fulfill that promise. Anyway, I agree with the sentiment in your last paragraph - just thought these clarifications were in order.
Let's not miss the forest for the trees. Character.ai is an example of a trend. Recently, Meta announced that you can now create your own characters on FB & IG, and even a digital extension of yourself.
Whether Meta, Character.ai, Replika, or some other company eventually figures it out is immaterial. What matters is that we are trending toward a world that will contain virtual AI agents, digital humans, etc. This of course should not be at all surprising, given that this tech simply mimics our actual world.
Regarding the economics, this argument that it's too expensive and not profitable enough is very common with new technology. If you recall, professional investors made the same argument about Facebook, Tesla, and other tech companies: that they were overvalued, burning through cash, and unprofitable. But they figured it out. FB was able to turn attention into cash and then brought its costs down.
When you look at the market leaders today, they are tech companies.
The moral of the story here is that most will fail, very few will win, and the winners will become the next set of trillion-dollar companies.

The real question is 'at what cost?'.
Again: https://www.theinformation.com/articles/meta-scraps-celebrity-ai-chatbots-that-fell-flat-with-users. Really, it's not the story you think it is. The argument of it being expensive isn't that common. The internet was a solution that allowed many things to be done more cheaply than they were done before. Generative AI has created search engines that cost 100x what Google search costs. It's a tech in search of its value proposition, without any clear sign that it will actually be cost-effective compared to all the other tech it intends to replace. I mean, I understand what you mean, but the details tell a more nuanced story than "it will all get sorted out eventually." Nah, some technologies don't quite work out. I think AI will. I think genAI won't, for the most part. I think what comes next is a mix of bubble bursting, AI winter, and a new shape of AI that's yet to be revealed (I've written before about what I think it is). Thinking in terms of generative AI is the incorrect framing anyway.
Totally agree. And I expect it to get much worse and much more vocally vehement as soon as the average Joe or Jane feels personally threatened enough, which I expect is probably within a year depending on next gen frontier model releases and potentially warning shot controversies and/or catastrophes. Then it might feel all Butlerian up in here.
I would say---do not write off generative AI so quickly.
As someone who is an early adopter of tools, I use ChatGPT every day (and have built a few very basic GPT assistants to help me). Most of my colleagues have a life and don't want to be bothered with experimenting. However, it is pretty clear that generative AI will just become part of the landscape when it comes to writing and the creation of illustrations. The hatred is real---I get that---but why wouldn't I use a tool that saves me so much time?
We non-techies take a while to catch on to tools. Let's revisit this conversation in 5 years when in-house training and widespread use has caught on.
Yeah, I use Claude every day, too. This isn't about using the technology or not. That part of the conversation isn't even mentioned in the post. This is about the sentiment - it has drastically changed in the past six months. And it goes beyond perception: the money, the productivity, the economic growth are not coming. Hype is a double-edged sword. The first edge hit hard for two years. The other edge has just cut through the entire story. We'll see the consequences.
Understood. In higher education, where I work, there is real pushback when it comes to AI---at least on the East Coast. (When I have talked with faculty from other parts of the country, I sense a more positive attitude about AI.) I am seeing---or perhaps only anecdotally intuiting---that there is a real divide developing between people teaching in the humanities and those in STEM. But there is more to it than just that. Many faculty are on board with programs like ChatGPT and Claude, seeing them as equalizers for students who find it difficult to compose thoughts in writing, while others are so rabidly anti-AI that they have moved back to in-class exams and Blue Books. To me, there is beginning to be an "elitist" feel to a lot of these conversations about AI in the classroom. Very little direction is being given from admins at the top. I assume they are terrified.
Yeah, in the trenches of everyday life the feelings vary a lot. I am very much pro-AI at the personal use level. It's a tool and, if you learn to use it, quite a powerful one. But I believe for now it stays there, as a personal tool for private use, whether for learning, work, or entertainment. The picture I paint is at the sociocultural and business/economics levels, which is quite a different story.

Got ya', but not selling my Nvidia . . . yet (even though this week has been a bear! ;)
I can tell you that, even on Hacker News, the surest way to attract a lot of hate and downvotes is to include AI-generated images in your post. Or ChatGPT's lengthy and "helpful" answers.
But I bet all of those people secretly use it anyway. I just asked it what Beatles album "I Should Have Known Better" was on, for example.
I respectfully disagree with the dichotomy you present between love and hate for this technology. This framing seems to dramatize the situation unnecessarily.
Your observations about the shortcomings of AI promoters are astute. However, market responses - like a major company cancelling its Copilot contract due to low ROI - may naturally temper excessive hype. This feedback loop could drive improvements in both the technology and its implementation.
To borrow from your railway station imagery, generative AI isn't a train that people are either desperately rushing to board or vehemently avoiding. Instead, it's more like a new line being constructed - some are excited, others wary, but most are simply watching its progress, waiting to see where it might take us once it's fully operational.
Hey Pascal, thank you for the comment. I never said there is a dichotomy between love and hate for this technology, but for technology as a category. And it's not really a dichotomy, but rather two gravity attractors that people naturally approach based on their upbringing or experiences. My emphasis is on the fact that if you know what is good about technology, the only rational response is to love what it has given us. If you don't know it, the natural response is to distrust it and, yes, hate it, should you conflate the process and progress with the faces and companies that do it now. Generative AI is just one example of a technology of unproven value, as all the metrics say and I have been saying for many months now, that has been hyped and called revolutionary ad nauseam long before it deserved to be credited with such a responsibility (because it is a burden to bear such promises). Are most people keeping an eye on its progress? I would argue that most simply don't care. As for those who do care, most are seeing that it simply doesn't work as the companies promised. The AI industry has overhyped it and will end up paying the price. The first signs are already there. As I wrote somewhere else: hype is a double-edged sword. The first edge sent generative AI to the world's consciousness. The other edge has just cut through the entire story - and there will be consequences.
You wrote "No wonder people hate generative AI." That's hate toward GenAI, not Technology as a category.
Secondly, as someone who closely follows trends in GenAI promotion, media coverage, and implementation (for example, in numerous schools), I have a different perspective on the situation. I don't perceive the widespread "hate" you describe, nor do I see overwhelming "love" for that matter.
I hope you'll forgive me for saying this with a wink 😉, but the essay's approach feels a bit "click-baity" 😉😉. I mean this playfully, of course! It's an engaging read, but perhaps at the cost of a more balanced analysis of public sentiment towards GenAI.
But I never established a dichotomy between love and hate toward generative AI. The only possible dichotomy is in the first section, and it's about technology as a category. I don't see why anyone would love generative AI. I certainly see why people may hate it. I guess we can have different perspectives if we look at different parts of the world. The headline of course feels clickbaity if you disagree! But that's fine; I believe the sources I chose say enough about the generalized backlash I perceive. (As a side note: first, I appreciate you telling me this, as this kind of hard-to-receive honesty is scarce. Second, all headlines fight to exist on the fine line between clickbait and just enough interestingness to attract people. It's a risky game, but I believe I'm not overdoing it - hopefully! I believe I've written worse clickbait titles in the past.) Anyway, the most important part isn't the sentiment but the metrics. The story they tell is the story I've been warning about for a long time. I recall you disagreed and thought "revolutionary" was a reasonable descriptor of generative AI. I guess the AI industry overplayed its hand. Now it's time for the next phase.
What I find is that what people really dislike are the extractive business models that underpin these technologies, not the technologies themselves.
So are we doomed by our own tools? For thousands of years human beings have made tools to serve their interests. This is the first time in history I can think of where our tools, impressive though they may be, serve the interests of others at our expense.
Love this take, Alberto. We’ve seen this story before—blockchain, crypto, VR—huge potential, yet public perception tanks due to mismanagement, overhype, and lack of real-world value. AI seems to be following the same path, despite its promise of massive economic gains. Is generative AI just another overhyped tech trend, or are we failing to introduce it in a way that benefits society?
I liken generative AI to psychedelics. When you first encounter it, it seems world altering. "This is important. This changes everything." But then when you try to apply it to some useful activity, it proves squirrely.
There is a shortage of sincerity in the world, a lack of honesty in how we relate to each other in our daily lives. AI will make this worse, creating another interface between us - on top of the already ubiquitous one of social media.
When people rant against CEOs (the main beneficiaries of the increased productivity that technology yields), it seems to me that we forget we also ask for *meritocracy* - but based on what, if not on production?
To the point that I found it a bit rhetorical at times, with the unchecked assumption that more tech is better.
It is *SO* obvious we're better off!
AI is progress - why do these idiots not want it?
Well, maybe this progress is relative. Before even thinking about AI -
Are we really healthier than when we roamed and hunted, before many of our diseases even took root?
And richer than when inequality was limited by an upper limit to accumulation (gotta carry that stuff) and different social structures where consent of the governed was *actually* required?
"Poor people today live better than kings of yore" assumes that we are materialistic, not social creatures, and somehow glosses over the comparison between poor and king *today*.
And then, AI. As if enshittification needed a turbo.
Sure, AI has tremendous potential. So did the Internet. Would you say the latter put enlightenment, connection, and knowledge in everyone's pockets, or did it turn us into myopic automatons hooked on serotonin?
It's not about "for or against progress / AI" though. We can't go back, not after we discovered Pokémon Go and remote controllers.
Maybe the point isn't to go *back* though, at least not to the barefoot wildling our ancestors are depicted as. But to acknowledge technology isn't neutral, especially when we build it in the systems and mindsets we do today.
We don't just need to make better use of the tools - we need better tools, or no tools at all.
Another commenter pointed to the environmental cost of tech, and I couldn't find any mention of AI's impact either - of what it takes to generate an image of a chicken doing a kickflip.
Maybe people hate AI because it sucks. But they actually have many good reasons.
Oh dear! Where is the statistical evidence that people hate AI? People still don't know what AI is, and the only thing they see is ChatGPT writing low-quality content (simply because average users don't know how to prompt). Good article anyway.
It’s a mystery to me why people wouldn’t want to embrace very helpful AI tools. It’s like adding and incorporating a second brain which has enhanced capacities. While it’s true that these second brains in whatever products you’re trying to embrace can seem on occasion a little like an idiot Savant, the capacity to Harness these tools and to incorporate them into your skill sets and refined thinking methods truly amplifies the individuals power and capabilities. Why people wouldn’t want that is quite simply beyond me.
It’s a bit like dissing eyeglasses. They’ll never catch on. They’re a fad.
For example, just this morning, I got an offer from my credit card company to take advantage of some “fantastic offer”. It was a classic case of, “The BIG print giveth, and the small print takeeth away”. The front page was all ballyhoo about the offer, while the back page was just filled to the brim with teeny tiny small print about the offer. Lacking motivation or competence to weigh through such fine print, all I had to do was take a picture of the front and backside of the offer and feed it to GPT-4o along with the question: What are the pros and cons of this offer and please analyze the small print in detail and tell me about any overlooked consequences that might be detrimental. I didn’t even have to type that as I just spoke it like I am now. So I get my answer in about 2 1/2 seconds and it looks like a pretty raw deal. So into the trash it goes.
In this respect, this one small inquiry saved me a tiny financial paper cut. It took about three minutes to do the whole thing and it was even kind of fun (correction- it was entirely fun). It empowered me, and just as importantly it disempowered the credit card company. So again, who wouldn’t want to use tools like this to stop the bleeding from all these little paper cuts. I probably use AI 10 times a day for any number of tasks and it just makes life hands-down better! People have to start to understand that. They need to do more basic and grounded thinking, and less irrational emoting over “technology”.
Otherwise it’s like hating the airplane because of some crashes and using them as a weapon. However, if we abolished the airplane tomorrow because of a fixation on negative emotions there would justifiably be more than a tremendous outcry. Turns out a lot of good comes from the airplane after all. So Yeah, time to shift focus on AI now. Turns out the glass is actually pretty full after all.
AI is going to have to start to become as accepted as eyeglasses for very pragmatic reasons. Perhaps eyeglasses were once regarded as supernatural tools of the devil. I’d call that a transitory phase. There’s nothing quite like getting people to understand eyeglasses like trying them on.
I think the big problem is that for every small convenience like this AI presents a heap of downsides. Cool, you parsed fine print faster. But we just surpassed 50% of content on the internet being AI generated, making the internet less useful. An AI generated recipe led me to undercook chicken, but hey, at least I didn’t have to use a cookbook. It offers minor conveniences in exchange for massive ethical problems, like undetectable fake videos, people forming unhealthy relationships with chatbots, the ability to generate spam messages in seconds, intelligence operations are able to weaponize it to control the discourse online. The invention of glasses didn’t come with these kinds of downsides. None of this seems worth the very limited benefits of text and image generation, which typically take so much tweaking I might as well have done it myself.
I think there are certainly some applications like medical research where it has its benefits but the public rollout as a free tool seems to be leading to a lot of risk with little reward.
..."the capacity to Harness these tools and to incorporate them into your skill sets and refined thinking methods truly amplifies the individuals power and capabilities." I think this is one of the key problems with "AI". By turning our thinking over to it, we diminish ourselves. We lose the ability to read carefully and closely. This is an especially big problem for children. If they never have to read carefully and think hard with artificial help, they will not develop. Using your example, people need to be able to think for themselves and identify biased messages or scams. Add all this to the fact that "AI" results are often biased or incorrect, I think we all need to slow down a bit and not accept AI so readily. Hopefully this doesn't sound too aggressive - just stating my case.
I'm a marketer and I freaking love this story. Fantastic use of the tech. So many people in my profession are charlatans. It makes it hard for those of who care about getting it right to be successful. I'm so ready for lazy and dishonest "growth" people to be shown the door.
Well George, I think you got that significantly wrong. Something that’s very much unlike parsing junk mail is passing a rather hard technical exam in 30 minutes. It’s an exam that usually takes six hours to complete. I created a GPT in about 40 minutes to do just that for a corporate exam. Then it only took me about 30 minutes to do an exam that normally takes six hours, in my past experience. The custom GPT I created passed the exam with a score of 100%. It was nice to have about 5 extra hours on my hands as a professional pilot. I’m pretty sure it’s not as trivial as you’re making out. Here’s a few basic examples:
EXAM Questions and Answers for OPSPEC B029 Pilot Training (CBT 24.3)
All questions answered correctly by OPSPEC B029 Pilot Training Custom GPT
https://chatgpt.com/g/g-39mH9plsD-opspec-b029-pilot-training
Question:
Method 2 drift down allows the airplane to continue flight from cruising altitude to an airport where landing can be made under 121.197 (landing limitations: alternate airports), the airplane must clear all terrain and obstructions within five statute miles of the intended track by at least ______ feet vertically and with a positive slope at ________ feet above the airport, where the airplane lands after an engine fails. Please fill in the blanks.
Answer:
Under Method 2 drift down, the airplane must clear all terrain and obstructions within five statute miles of the intended track by at least 2,000 feet vertically and with a positive slope at 1,500 feet above the airport where the airplane lands after an engine fails.
Question:
What is the distance from the beginning of the leg (JAGIT) in nautical miles, and at which point is the escape route below safe to use? JAGIT/YXY MORA=189 DRIFTDOWN OFFSET 050NM NCA13 YXY Q969 GOROV GOROV2 CYXY B/O 32103 DIST 329NM TIME 01:10
Answer:
The distance from the beginning of the leg (JAGIT) in nautical miles, at which point the escape route is safe to use, is 50 nautical miles. This is indicated by the "DRIFTDOWN OFFSET 050NM" in the provided route details.
—- Anyway, you get the idea. Or perhaps you don’t. These are tools that make a professionals life easier. You best start getting used to the idea that these are only going to spread, get better. They’re already easier and more practical to use. Otherwise it’d be like trying to argue that snail mail is every bit as good as email. That’d be a hard sell. But I’m just the messenger here. So how does that expression go about messengers? Yeah, don’t shoot.
Wait, you cheated on your pilot exam? Please let me know what airline and I’ll never use them again.
See the above reply for clarity. Rest easy. It’s not what you think.
Thanks, I'll still pass.
Is this a kind of satire? Using ChatGPT to pass a pilot's exam is a net good to society because of the five hours the pilot can spend doing something else?
You cheating on your pilot exam does not make me feel safe. What is a heart surgeon cheated all the way through school? Maybe you think it's just one test, so no big deal. I would think you ought to know everything.
I understand your point of view George, and I do take your point. I think it really depends on the nature of the “exam” in question and what the purpose of that is. I’m an older guy and I can remember a time when I was a kid when the use of calculators during an exam was looked at as cheating. After all, aren’t you gonna have to be able to scratch that math out on paper if your calculator breaks? Became a quaint argument, because certainly a student should’ve learned his multiplication and division, but usually the real point was to get through the cognitive chaff so that you could demonstrate a deeper understanding other subject not so much the brute force of solving it. After all, you can use a pick and shovel if you want to, but a backhoe, is much more practical for moving, through the labor. Of course anyone whether they are an academic or a professional needs to build their foundational knowledge by immersing themselves in content. In the pilot world, at an airline, that can often mean familiarizing yourself with about 20,000 pages of stuff. Yet there are times when we need to focus more on the nature of the subject, whether it’s by human or an AI so we can suss out the nuances of an issue. And in the professional world, I think there’s a lot to be said for being able to save time and get to the point. I expect this is also true a variety of worlds, such as just getting ready for the bar exam. I’m pretty sure that you’re not allowed use AI to take the bar exam (although it’s been demonstrated to pass it on its own). However, where using AI would shine is to prepare for the bar exam. That would give a person tremendous advantages in understanding the subject when you nitpick ideas apart to a high degree until you have a full understanding. Whenever you go to a law office and you see those vast bookshelves containing what looks like mind numbing volumes behind an attorneys desk, I think you can be pretty sure that that guy or gal has not read all those books. But he does need to know where to go to find the stuff that’s relevant to his legal position. In fact, there can be a lot of different vantage points on a single subject in a variety of those volumes. So the real test of tools is whether they can bring that all together. Humans would certainly struggle with finding all of the disparate concepts that have relevance. And very often there is no one right answer. Yet this is actually another strength of the AI. It’s because in only a matter of seconds it can discern and present multiple directions and considerations about a particular issue. Now a highly brilliant legal mind it’s been at work for 40 years. Might be able to synthesize some aspects of that and bring things together, but the real advantage is to have essentially anyone collect all those vantage points and size them up on their own merits. Remember, this is a tool. The people use to discern the best approach. So for professionals, it’s really not about passing some exam for a score. Even in the OPSPEC exam that I cited, it was more about sussing out the approach to the information that can be discussed.
It’s true there’s often a definitive right or wrong answers, and any professional has to be able to be able to demonstrate why it’s exactly right or exactly wrong. Yet another edge of the AI is that it can rapidly explain how it got that correct answer and precisely what the source material was. Then the nice thing about that is if you want more details to go even deeper into the onion, then you can simply ask more probing questions and it will take you on a journey of exploring those answers in greater detail. But I do take your point about rote memory items, and being restricted in certain settings. However, in a larger arc, this all allows people to build competence and understanding around any subject or question because the level of clarity is also quite good. If it’s not really clear you just ask for even more clarification.
We’ve all had lectures by really bad teachers and lecturers that were not only very poorly delivered but also entirely confusing, in which the presentation actually created negative learning (and bred apathy). They didn’t really help us to understand or think through an idea much at all. Using the AI gives us the new ability to not necessarily just get the right answer (although at 3 o’clock in the morning for a pilot that’s a huge plus) but also to sort out anything that we don’t quite understand.
I also want to anticipate another thought you might be having which is about hallucinations and giving us bad info. Yes, that capacity certainly does exist and it’s a bit thorny. However, when you’re dealing with professional subject matter, there is a really good remedy, and it’s is to absolutely restrict an AI, (or in many cases a customized GPT), to a knowledge base which only encompasses the professional material involved and nothing else. That way, the AI is only swimming in your personal data lake, as opposed to an entire ocean of content which might produce more irrelevant and error prone for answers. So if you constrain the AI to only examining those, say, 20,000 pages of company content, then it distills all of its reasoning from that knowledge base alone. That’s what I did with the OPSPEC GPT. It was restricted to about 3000 pages of content that didn’t go outside of that subject. So you can design tools to give the AI tremendous focus, and that way it doesn’t lead you astray with hallucinations about other crap.
In fact I’ve created a whole business around this:
https://www.customgptsolutions.ai/
It’s always nice chatting with you George. Have a great day.
"That would give a person tremendous advantages in understanding the subject when you nitpick ideas apart to a high degree until you have a full understanding." I think the danger is that people are relying on "AI" to interpret and understand material. Reading complex texts is hard, but it's good to figure things out.
I just retired as an English teacher. In the past two years, I never allowed my students to use AI and made them write by hand. Some may call me a luddite, but so what? Plus, all the AP exams high school students take have always had hand-written essays.
I think you’ll find this demonstration clearly interesting. I’d love to hear your thoughts on it George:
OpenAI’s New AI: Being Smart Is Overrated!
https://youtu.be/qt-B2cg0pCM?si=d-4OgIJFuUBfMrGu
There are a large number of statements you make in defense of tech that would need much better support to be asserted as axiomatic, as you have done.
That said, the fundamental error is equating technology with progress. Technology is value neutral - it can enable healers and tyrants, and does not essentially improve.
I am also a tech person, though I studied CS and English Literature at University. I used to be 100% pro-technology until I realized that technology was value neutral, and the more advanced the tech, the more advanced the civilization needed to be to handle it.
Yes, we feed more people, but we also irredeemably deplete soil and food stocks at a faster rate. Our political system cannot accommodate the thinking required for sound judgment, so the technology will likely end poorly.
Perhaps you will grow this way over time. Or perhaps I am wrong.
Thanks for the respectful counter. I agree that part would require a better defense but I never expected that part to be the debatable one. I agree that as tech becomes more advanced we'd need to become more advanced as well (and we aren't). I disagree that we deplete soil and food stocks at a faster rate *because* of technology. That makes no sense to me. Our techniques to grow food becomes much more efficient thanks to technology. Just like we soon will be able to not use fossil fuels at all thanks to renewable energy supply. Medicine is an obvious example. I agree not all technology is good. I've written extensively about that. But as an activity that humanity does, it's hard to argue against it, really.
Great response, and thank you.
We have to challenge our assumptions. We think we are feeding people better because we can produce nitrogen at scale instead of scraping bird shit off of Pacific reefs. But we fail to understand that organic life is complex and interdependent, and yields poorly to engineering.
Yes we can juice yields, but we kill adjacent systems, deplete water systems, and rob the soil of other minerals and nutrients, resulting in lower quality food and sterilized land.
We need to understand that generative AI is a compression algorithm, so it is by definition lossy. It cannot contain the world because the lossless map of the world is the world. A compression of a compression of a compression will always enshittify, especially when we forget we are compressing.
This is the same reason grand political schemes fail. They compress a complex system into simple armatures and algorithms then forget they are a compression. Over time the system is so divorced from the world that it is held together by habit and force not utility.
In any case, this is a super fun topic and I thank you for the opportunity to discuss!!!
Yeah, I certainly agree that any worthy debate about technology as a whole is way more complex than I showed here. I read recently Pinker's Enlightenment Now and it influenced my thinking on this a lot. But I've had other influences that sent me in quite different directions so again, I agree it's a complex topic.
I just saw this. I would strongly recommend that you read "Seeing Like a State" by James Scott
Pinker is very smart, but extremely reductive. He does not see the world as fundamentally complex, preferring simple reduction to dependent variables. There is value in it, but it is quite limited.
Yes, seems like disconnected to the humanity
The only thing I'd add to this is that many people see GenAI as a tool that's taking jobs from humans, making more work for the people who have to use it (not less, no matter what their managers may think), and doing an inferior job in almost all cases. That's good reason to be deeply resentful.
Marc Watkins just wrote a post about the "Friend" device and what it might say about how deeply pathological our relationship to these technologies are. GenAI definitely has a public image problem if they think this teaser would appeal to literally anyone with a shred of humanity left: https://www.youtube.com/watch?v=O_Q1hoEhfk4
Yeah, I linked to the "AI friend" in the sentence "they don't know how to read the room." How does anyone think it's a good idea to publish that video as a *non-skit* teaser for a new AI device? They're so deep in their SFO bubble, it's ridiculous.
Alberto consider that you're not the target market for these products. As far as business goes, companies like Characters.ai are killing it. They get about 200-300million visitors per month and the average session time was a crazy at 33min. In other words, it makes financial sense so someone will do it. There's enough lonely people to make this a viable product.
I believe the real question go deeper than a knee jerk reaction. AI is here, over the next year GPUs will get 100x better meaning that AI will become fast, very cheap and much higher quality.
It will become the ultimate tool. And here's the rub, tools aren't values neutral. They transform the way we perceive and interactive with the world. They change our relationships. Just consider social media's impact on our culture and AI will make that look like child's play.
Knowing that it's here and that there's huge incentives to be a winner in AI tech companies and countries are willing to risk it all. Safety, morality and ethics are used to justify their positions.
Now seeing the writing on the wall, I believe the best choice is to explore use cases that are good for the individual and the society. To create Ai systems that offer a positive choice and that brings out the best in us.
I mean: https://blog.character.ai/our-next-phase-of-growth/. Character.ai founder (and genius mind behind the project) Noam Shazeer is back to Google. So perhaps they're not "killing it". People (teens) are using it? Yeah. But it costs a fuckton of money to run. It's just too expensive, even with the advances in postprocessing and quantization and whatever. The only way to make this a viable product is if there's enough customers *and the company is profitable.* Generative AI is mostly not profitable and for what we know, may never be. The links to the data are in the article. GPUs have a physical limit to how much better they can get. If Nvidia keeps showing off those amazing numbers it's because they're trading off other things. For instance. The new chips are much less precise (which doesn't matter for AI, okay) but still - it makes the improvement look much larger than it is. Also: "It will become the ultimate tool"? I know where you're coming from but that kind of assertion really needs much stronger evidence that you could find anywhere online. It's yet to be proven that it can fulfill that promise. Anyway, I agree with the sentiment in your last paragraph, just thought these clarifications were in order.
Let's not miss the forest for the trees. Characters AI is an example of a trend. Recently, Meta announced that you can now create your own characters on FG & IG and even a digital extension of yourself.
Weather Meta, Characters AI, Replika, or some other company eventually figures it out is immaterial. What matters is that we're are trending towards a world that will contain virtual AI Agents, digital humans, etc. This of course should not be at all surprising giving that it this tech simply mimic our actual world.
Regarding the economics, this argument of it being too expensive and not profitable enough is very common with new technology. If you recall, professional investors made the same argument about Facebook, Tesla, and other tech companies. That they was over valued, burning through cash and unprofitable. But they figured it out. FB was able to turn attention into cash and then brought their costs down.
When you look at the market leaders today, they are tech companies.
The moral of the story here is that most will fail, very few will win and the winners will become the next set of trillion dollar companies.
The real question is 'at what cost?'.
Again: https://www.theinformation.com/articles/meta-scraps-celebrity-ai-chatbots-that-fell-flat-with-users. Really, it's not the story you think it is. The argument of it being expensive isn't that common. The internet was a solution that allowed many things to be done more cheaply than how they were done before. Generative AI has created search engines that cost 100x what Google search costs. It's a tech in search of its value proposition without any clear sign that it will actually be coat effective compared to all the other tech it intends to replace. I mean, I understand what you mean but the details tell a more nuanced story than "it will all get sorted out eventually." Nah some technologies don't quite work out. I think AI will. I think genAI won't for the most part. I think what comes next is a mix of bubble bursting, AI winter, and a new shape of AI that's yet to be revealed (I've written before about what I think it is). Thinking in terms of generative AI is the incorrect framing anyway.
Totally agree. And I expect it to get much worse and much more vocally vehement as soon as the average Joe or Jane feels personally threatened enough, which I expect is probably within a year depending on next gen frontier model releases and potentially warning shot controversies and/or catastrophes. Then it might feel all Butlerian up in here.
I would say---do not write off generative AI so quickly.
As someone who is an early adopter of tools, I use ChatGPT every day (and have built a few very basic GPT assistants to help me). Most of my colleagues have a life and don't want to be bothered with experimenting. However, it is pretty clear that generative AI will just become part of the landscape when it comes to writing and the creation of illustrations. The hatred is real---I get that---but why wouldn't I use a tool that saves me so much time?
We non-techies take a while to catch on to tools. Let's revisit this conversation in 5 years, when in-house training and widespread use have caught on.
Yeah, I use Claude every day, too. This isn't about whether or not to use the technology. That part of the conversation isn't even mentioned in the post. This is about the sentiment - it has drastically changed in the past six months. And it goes beyond perception: the money, the productivity, the economic growth are not coming. Hype is a double-edged sword. The first edge hit hard for two years. The other edge has just cut through the entire story. We'll see the consequences.
Understood. In higher education, where I work, there is real pushback when it comes to AI---at least on the East Coast. (When I have talked with faculty from other parts of the country, I sense a more positive attitude about AI.) I am seeing---or perhaps only anecdotally intuiting---that there is a real divide developing between people teaching in the humanities and those in STEM. But there is more to it than just that. Many faculty are on board with programs like ChatGPT and Claude, seeing them as equalizers for students who find it difficult to compose thoughts in writing, while others are so rabidly anti-AI that they have moved back to in-class exams and Blue Books. To me, there is beginning to be an "elitist" feel to a lot of these conversations about AI in the classroom. Very little direction is being given from admins at the top. I assume they are terrified.
Yeah, in the trenches of everyday life the feelings vary a lot. I am very much pro-AI at the personal use level. It's a tool and if you learn to use it, quite a powerful one. But I believe for now it stays there, as a personal tool for private use, whether for learning, work, or entertainment. The picture I paint is at the sociocultural and business/economics levels, which is quite a different story.
Got ya', but not selling my NVDA . . . yet (even though this week has been a bear!) ;)
I can tell you that, even on Hacker News, the surest way to attract a lot of hate and downvotes is to include AI-generated images in your post. Or ChatGPT's lengthy and "helpful" answers.
But I bet all of those people secretly use it anyway. I just asked it what Beatles album "I Should Have Known Better" was on, for example.
I respectfully disagree with the dichotomy you present between love and hate for this technology. This framing seems to dramatize the situation unnecessarily.
Your observations about the shortcomings of AI promoters are astute. However, market responses - like a major company cancelling its Copilot contract due to low ROI - may naturally temper excessive hype. This feedback loop could drive improvements in both the technology and its implementation.
To borrow from your railway station imagery, generative AI isn't a train that people are either desperately rushing to board or vehemently avoiding. Instead, it's more like a new line being constructed - some are excited, others wary, but most are simply watching its progress, waiting to see where it might take us once it's fully operational.
Hey Pascal, thank you for the comment. I never said there is a dichotomy between love and hate for this technology, but for technology as a category. And it's not really a dichotomy, but rather two gravity attractors that people naturally approach based on their upbringing or experiences. My emphasis is on the fact that if you know what is good about technology, the only rational response is to love what it has given us. If you don't know it, the natural response is to distrust it and, yes, hate it, should you conflate the process and progress with the faces and companies doing it now. Generative AI is just one example of a technology of unproven value, as all the metrics say and as I have been saying for many months now, that has been hyped and called revolutionary ad nauseam long before it deserved to be credited with such a responsibility (because it is a burden to bear such promises). Are most people keeping an eye on its progress? I would argue that most simply don't care. As for those who do care, most are seeing that it simply doesn't work as the companies promised. The AI industry has overhyped it and will end up paying the price. The first signs are already there. As I wrote somewhere else: hype is a double-edged sword. The first edge sent generative AI into the world's consciousness. The other edge has just cut through the entire story - and there will be consequences.
You wrote "No wonder people hate generative AI." That's hate toward GenAI, not technology as a category.
Secondly, as someone who closely follows trends in GenAI promotion, media coverage, and implementation (for example, in numerous schools), I have a different perspective on the situation. I don't perceive the widespread "hate" you describe, nor do I see overwhelming "love" for that matter.
I hope you'll forgive me for saying this with a wink 😉, but the essay's approach feels a bit "click-baity" 😉😉. I mean this playfully, of course! It's an engaging read, but perhaps at the cost of a more balanced analysis of public sentiment towards GenAI.
But I never established a dichotomy between love and hate toward generative AI. The only possible dichotomy is in the first section, and it's about technology as a category. I don't see why anyone would love generative AI. I certainly see why people may hate it. I guess we can have different perspectives if we look at different parts of the world. The headline of course feels clickbaity if you disagree! But that's fine; I believe the sources I chose say enough about the generalized backlash I perceive. (As a side note: first, I appreciate you telling me this, as this kind of hard-to-receive honesty is scarce. Second, all headlines fight to exist on the fine line between clickbait and just enough interestingness to attract people. It's a risky game, but I believe I'm not overdoing it - hopefully! I believe I've written worse clickbait titles in the past.) Anyway, the most important part isn't the sentiment but the metrics. The story they tell is the story I've been warning about for a long time. I recall you disagreed and thought "revolutionary" was a reasonable descriptor of generative AI. I guess the AI industry overplayed its hand. Now it's time for the next phase.
What I find is that what people really dislike are the extractive business models that underpin these technologies, not the technologies themselves.
So are we doomed by our own tools? For thousands of years, human beings have made tools to serve their interests. This is the first time in history that I can think of where our tools, impressive though they may be, serve the interests of others at our expense.
People can smell a rat, and won't be gaslit into believing the whiff is roses.
Love this take, Alberto. We’ve seen this story before—blockchain, crypto, VR—huge potential, yet public perception tanks due to mismanagement, overhype, and lack of real-world value. AI seems to be following the same path, despite its promise of massive economic gains. Is generative AI just another overhyped tech trend, or are we failing to introduce it in a way that benefits society?
I liken generative AI to psychedelics. When you first encounter it, it seems world altering. "This is important. This changes everything." But then when you try to apply it to some useful activity, it proves squirrely.
There is a shortage of sincerity in the world, a lack of honesty in how we relate to each other in our daily lives. AI will make this worse, creating another interface between us - on top of the already ubiquitous one of social media.
When people rant against CEOs (and the main beneficiaries of the increased productivity that technology yields), it seems to me that we forget we also ask for *meritocracy* - but merit based on what, if not on production?
Thanks for the perspective. Quite optimistic!
To the point I found it a bit rhetorical at times, with the unchecked assumption that more tech is better.
It is *SO* obvious we're better off!
AI is progress - why do these idiots not want it?
Well, maybe this progress is relative. Before even thinking about AI -
Are we really healthier than when we roamed and hunted, before many of our diseases even took root?
And richer than when inequality was limited by an upper limit to accumulation (gotta carry that stuff) and different social structures where consent of the governed was *actually* required?
"Poor people today live better than kings of yore" assumes that we are materialistic, not social creatures, and somehow glosses over the comparison between poor and king *today*.
And then, AI. As if enshittification needed a turbo.
Sure, AI has tremendous potential. So did the Internet. Would you say the latter put enlightenment, connection, and knowledge in everyone's pockets, or did it turn us into myopic automatons hooked on dopamine?
It's not about "for or against progress / AI" though. We can't go back, not after we discovered Pokémon Go and remote controllers.
Maybe the point isn't to go *back* though, at least not to the barefoot wildling our ancestors are depicted as. But to acknowledge technology isn't neutral, especially when we build it in the systems and mindsets we do today.
We don't just need to make better use of the tools - we need better tools, or no tools at all.
Another commenter pointed to the environmental cost of tech, and I couldn't find any mention of AI's impact either - what it takes to generate an image of a chicken doing a kickflip.
Maybe people hate AI because it sucks. But they actually have many good reasons.
Oh dear! Where is the statistical evidence that people hate AI? People still don't know what AI is, and the only thing they see is ChatGPT writing low-quality content (simply because average users don't know how to prompt). Good article anyways.