Excellent research and compilation, thanks! But I couldn't see whether you mention that most of the academic research on this topic fails to consider the following limitations: A) "Demographic blindness," which leads to overgeneralization of findings and results: most of the studies originate in the U.S. and other high-income countries, overlooking other cultures and emerging economies. This is the well-known "WEIRD" problem, research centered on "Western, Educated, Industrialized, Rich, and Democratic" populations. B) Most of these studies also use university students as subjects (again, I couldn't see whether you mention it), ignoring people with low digital literacy or limited economic resources. C) I also don't see any mention of the lack of disaggregation, that is, breaking down cognitive or emotional results by gender identity, socioeconomic status, or cultural background; instead, the "user" is treated as a generic, neutral entity. Anyway, despite these gaps, your compilation is a very good resource for thinking about how AI affects our brains. Thanks :)
The performance-competence dissociation you've documented maps exactly onto what I see in organizations. The clients who say AI has 'transformed their productivity' are almost always describing output quality, never output capability. The Wharton finding is the most troubling piece: high confidence in AI is the strongest predictor of cognitive surrender -- and high confidence is exactly what success stories generate. The loop closes on itself. The real risk isn't the technology. It's the feedback mechanism that makes the most dependent users feel the most capable.
A few days ago, I caught myself behaving like an algorithm at work. A request from a third party, one affecting their life, which I would once have approached with greater compassion, was dismissed by me the moment it failed to satisfy one item on my checklist's prerequisites. No hesitation, no second thought, no human compassion for the consequences. Later, when I heard the third party's complaints, I was shaken, not because they were right, but because I realized how I had operated. Even though the decision to dismiss that request was technically correct, the fact that my mental procedure didn't even weigh the consequences, or FEEL basic human compassion, is alarming.
That quality of AI may prove useful in the rehabilitation of criminals. Applied to ordinary human beings, however, it is dangerous.
This was great. Thank you.
quality post, bookmarked for closer studies. thank you
Two areas worth comparing to AI:
First, the failure to recognize incorrect results from AI should be compared to the old (and seldom referenced today) notion that "computers don't make mistakes." I wonder to what degree this reflects the novelty of AI, and whether, as people grow familiar with it, their skepticism will increase.
Second, I would like to see a comparison between children's understanding before and after the introduction of AI and students' understanding before and after the introduction of calculators. My experience (50 years ago) was that students who started on pocket calculators tended not to develop the same degree of intuitive understanding of numbers as those who began before them (I was right on the cusp of the change). Has this stayed the same, gotten better, or become worse? This could be a useful indicator of how AI usage and dependence will evolve.