44 Comments
The One Alternative View:

Here's what I'm thinking. This is a brilliant paper. It demonstrates the asymmetrical effects of AI. And as you have aptly put it, the social media influencers may be jumping at attention-grabbing words only because that is how they achieve online engagement.

But although you have mentioned that there is an optimal way of using AI, I don't think it is feasible for us to use it optimally all the time. I'm reminded of Kahneman's book, Thinking, Fast and Slow, where the author confesses that his deep understanding of biases does not stop him from falling victim to them.

Our brains were not wired to reason, but to budget. If AI allows us to reduce the budgetary costs, we will take that option. Few people hope to improve their cognitive power. Sure, a good number of your subscribers do, but that may be a small tail of the distribution. As Gurwinder has previously mentioned in his correspondence with Freya India, the smart will get smarter at an accelerating rate, though I prefer the term high-agency individuals. The same goes for the low-agency ones. That split may continue to widen.

Ryan McClure:

What a great book. Thinking, Fast and Slow changed how I perceive the world. It also should be a prerequisite to Superforecasting by Tetlock, another seminal work.

The One Alternative View:

For so long I have postponed reading Tetlock’s book. I guess this should be the hint.

Matt Kelland:

"stochastic parrots of a stochastic parrot" - love it!

S. C. L'Heureux:

Tools don't make you dumb, but nothing fixes pre-existing stupidity.

Andrew:

Not feeling hopeful after reading this post! Sure, it illuminates a healthy path forward where we can leverage AI tools to our cognitive benefit, but is that the path that most people will choose — and that the AI tools themselves will nudge us toward? My overriding fear is that the designers of these products will create a UX that “rewards” users who, in the moment, lean on the AI maximally and outsource their cognition to it entirely. I also expect that many corporations that are practically yelling at their employees to adopt AI are unwittingly incentivizing workers to adopt cognitively unhealthy AI practices. The evangelical quest for speed and “efficiency” seems like it will devalue deep thinking, creativity, and original effort.

Alberto Romero:

I share your concerns...

Julia Diez:

Exactly this! It is like the claim: “screens are bad for kids”. Well, my 8-year-old uses a maths app that has made her jump from a 1st-grade to a 3rd-grade level.

Tyler Folkman:

As others said, it's a small sample size, and I think we'll likely need more longitudinal data to see the long-term impact. We'll likely adapt to these tools, though, and change what we perceive as intelligence.

William Meller:

Hi Alberto - This was probably the best article I had the pleasure of reading this week. The way you organized the conclusions from the study made my reading time really, really well spent. Great job!

Alberto Romero:

Thank you William, glad it was helpful! :)

Between the Lines of Power:

Great article! Just restacked. Hopefully we're able to move to a system that encourages active cognitive use of AI, rather than passive, non-reflective use. Though the recent reports of how people are dissociating when engaging with AI don't give me much hope in the medium term.

Alberto Romero:

Yep, agreed. I guess those articles (and the ones to come) focus on the lowest strata of AI users whereas mine focuses on the average. I think this better reflects what people encounter and the approaches they can take to avoid the pitfalls.

Peter Gaffney:

"becoming stochastic parrots of a stochastic parrot" -- this is exactly my fear... well, one of them. AI is such an excellent tool, but how can I really know if I'm using it too much or in the wrong way, so that it's actually eroding my cognitive abilities? A social drinker seldom knows when he's crossing the line into becoming an alcoholic.

Alberto Romero:

That's why the golden rule is so important: you have to keep assessing your thinking patterns and cognitive abilities periodically. Just like we do with any other skill. Are you losing attention from scrolling too much? Then stop!

Ryan McClure:

Forgive me, as I've only just subscribed to your Substack after reading your review of another paper, "AI Models Are not Ready to Make Scientific Discoveries", and found this one. When I saw this MIT paper on essay writing, I wrote it off as essentially an experiment on cheating. From an AI power user's perspective, I would never use AI to generate original ideas for dissemination. I would use a well-stocked RAG to act as a sounding board for idea iteration, or to play ideas out before deciding on one, but writing is a craft. It should be a little painful, but you should be steadily getting better - and we all know that sharpens the mind intuitively. I wrote off the paper as anti-AI hype, but your piece here called me back to a much richer evaluation. Cheers.

Stonebatoni:

Brilliant stuff. I work in a field with lots of consequential decisions that can’t always be prepared for, are often subjective, and might not have tons of research materials available in a Google search. Having AI lay out basic research quickly allows me to dig deeper in a subsequent Google search, double-check what the AI is telling me, and clarify in my mind what possible courses of action I could take. Hugely helpful, and it helps me learn, but I really use the AI as a kind of super-powered Google search, as opposed to any kind of decision-making tool on its own. I feel like this is the most beneficial approach for me, short and long term, and it jibes with your post.

NeuraFutures:

Hello! Note that the study took place over a period of 4 months, not one day!

Alberto Romero:

Yes, thank you, that was a typo on my part. Fixed!

Mickey Schafer:

Yours is a much more balanced analysis than the one I read yesterday 😁. I have this study bookmarked... thanks for the mental scaffolding!

Alberto Romero:

Glad you found it helpful Mickey!

Anchal:

A very good read💯💯

Stefano:

Great write-up!

"The sample is small, meaning the study may lack the statistical power needed to reliably detect small or moderate effects; a much larger sample would be required for that."

Sounds like it was designed to fail, or at least, since the topic is "political" (i.e., potentially highly contentious), it was designed to taste the subject without actually wanting to eat and digest the meal, if that makes sense.

Since the study concerns education, it would be interesting to follow up after a 1- or 2-week interval, to try to infer memory and knowledge retention. I suspect the LLM sessions 1-3 group would score poorly, and worse than the Brain sessions 1-3 group.

Alberto Romero:

It makes sense and that's what I suspect they did

Geoffe:

What about the real-world groups, like “Brain to LLM to Brain to Brain to LLM to Brain to LLM to Brain”? I’m not sure how these results apply. Are they too reductive? 200 pages of microscopic detail on one particular study design raises red flags for me.

Alberto Romero:

It's neuroimaging, so it's long because they've included all the details haha. But yes, all studies (on literally anything) are "too reductive". Even physicists treat objects as their ideal counterparts, so...

Max Headroom:

Hi Alberto -- articles like this are super helpful. I may not have come across this paper otherwise, and your breakdown of the key findings helped make it accessible. Please keep sharing work like this; it is so appreciated.

Alberto Romero:

Thank you Max, glad it was helpful!