Discussion about this post

Jack Brown

I almost feel like the NYT article (and yours, to a lesser extent) is missing the forest for the trees. It's an obvious journalistic attempt to stoke fear. And I think if it were framed better, an article like this could get my attention. The general discourse should be about how these LLMs affect human social health. These deaths are horrible and, to your point, should not be reduced to quick statistics for or against LLMs. But how people interact with these technologies, and how the technologies shape their beliefs, social happiness, and more, should be studied far more fervently. Building an article around a few deaths really undermines the bigger question of how people, particularly children, will develop emotional relationships with models whose inner workings aren't exactly understood.

Stephen Fitzpatrick

It's been the case for a while that MSM is drowning, so they have to keep pushing narratives that drive clicks and create revenue. One of the reasons I'm enjoying Substack is the freedom to write what you want with no editorial constraints and get instant feedback on whether your point of view is valid, or at least can find a readership. I do think there's a potential story in there, but I agree this isn't it — the goal was clearly to incentivize the pile-on about the negatives of AI (of which there are many), but cherry-picking a few examples of people who were likely suffering from mental health crises before using AI is misleading, to say the least. I do like the chart; I teach a class where we often discuss media bias, and I'll add this to my arsenal of similar material demonstrating how much what we consume distorts our perception of reality. I assume you're familiar with Factfulness, which makes similar points. Nice post.
