"stochastic parrots of a stochastic parrot" - love it!
Hi Alberto - This was probably the best article I've had the pleasure of reading this week. The way you organize the conclusions from the study made my reading time really, really well spent. Great job!
Thank you William, glad it was helpful! :)
Great article! Just restacked. Hopefully we're able to move to a system that encourages active cognitive use of AI, rather than passive, non-reflective use. Though the recent reports of how people are dissociating when engaging with AI don't give me much hope in the medium term.
Yep, agreed. I guess those articles (and the ones to come) focus on the lowest strata of AI users, whereas mine focuses on the average. I think this better reflects what people encounter and the approaches they can take to avoid the pitfalls.
What about the real-world groups like “Brain to LLM to Brain to Brain to LLM to Brain to LLM to Brain”? I'm not sure how these results apply. Are they too reductive? 200 pages of microscopic detail on one particular study design raises red flags for me.
It's neuroimaging, so it's long because they've included all the details haha. But yes, all studies (on literally anything) are "too reductive". Even physicists treat objects as their ideal counterparts, so...
"becoming stochastic parrots of a stochastic parrot" -- this is exactly my fear... well, one of them. AI is such an excellent tool, but how can I really know if I'm using it too much or in the wrong way so that it's actually eroding my cognitive abilities. A social drinker seldom knows when he's crossing the line into becoming an alcoholic.
That's why the golden rule is so important: you have to keep assessing your thinking patterns and cognitive abilities periodically. Just like we do with any other skill. Are you losing attention from scrolling too much? Then stop!
Hi Alberto -- articles like this are super helpful. I may not have come across this paper otherwise, and your breakdown of the key findings helped make it accessible. Please keep sharing work like this, it is so appreciated.
Thank you Max, glad it was helpful!
Actually, I would say you are dumb if you use ChatGPT in the first place. Sorry, but you threw a softball there.
But seriously, think for yourself. You don’t need it. You really don’t.
Great write-up!
"The sample is small, meaning the study may lack the statistical power needed to reliably detect small or moderate effects; a much larger sample would be required for that."
Sounds like it was designed to fail, or at least, since the topic is "political" (i.e., potentially highly contentious), it was designed to taste the subject without actually wanting to eat and digest the meal, if that makes sense.
Since the study concerns education, it would be interesting to follow up after a 1- or 2-week interval, to try to infer memory and knowledge retention. I suspect the LLM sessions 1-3 group would score poorly, and worse than the Brain 1-3 group.
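To put rough numbers on the statistical-power point quoted above: here's a quick back-of-the-envelope sketch using statsmodels' power calculator. The effect-size thresholds (Cohen's d) and the alpha/power targets are conventional defaults I'm assuming, not values taken from the paper.

```python
# Sample size needed per group for a two-sample t-test at
# conventional thresholds (alpha = 0.05, 80% power).
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for label, d in [("small", 0.2), ("moderate", 0.5), ("large", 0.8)]:
    # solve_power returns the per-group n needed to detect effect size d
    n_per_group = power_calc.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"{label} effect (d={d}): ~{n_per_group:.0f} participants per group")
```

Under these assumptions you'd need roughly 26 participants per group to reliably detect a large effect, ~64 for a moderate one, and ~394 for a small one, which is why a sample of this study's size mainly speaks to large effects.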
Hi, you write they used ChatGPT with GPT-4o, but a Time article says: "She also found that LLMs hallucinated a key detail: Nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the paper was trained on GPT-4o. 'We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,' she says, laughing."...?