13 Comments
Steffan's avatar

Most smart people in our world today get gratification from helping a billionaire become a bigger billionaire. Now that we can inject our personal projects with the same level of team power we have at work, we freeze, because our values still depend on a billionaire telling us what to do. Personally, I've had a project going for years with a small community to use EEG data to measure Jhana meditation states. The need was obvious, so I always worked hard on it, never making much progress but always getting small hints that I was on the right path. Now I can accelerate, and it is tremendously satisfying, but only because I stayed laser-focused on this goal for years despite no hope of becoming a millionaire and no encouragement from billionaires.

Alberto Romero's avatar

So true, and btw, super interesting project. I've thought about this (whether someone had measured jhana states) but didn't look much into it. Do you have any interesting insights?

Ky's avatar

This post rubbed me the wrong way, not in an "I hate AI, how dare you" kind of way, but in the way of "this can't be right, but I don't know why." I think it's because, as human beings, our main job is maintaining homeostasis, or, to put it in physics terms, maintaining our own complexity. That is the true battle that life faces: spectacular organization in the face of entropy and chaos, and that's at the base level, before we consider the bloodthirstiness and destruction of our own species.

What humanity wants to do is build even more complexity, and AI tools help with that. Yes, that takes energy. As the data-center energy wars show, it takes a lot of energy. But that's not the only thing it takes. It takes effort. Learning takes effort. Building muscle takes effort. Construction takes effort. Societies take effort. Yes, we can minimize that effort with the tools we invent, and we have been doing that forever, and we've done it again with AI. But imagining this life where we just dream stuff up and then it happens... I just can't help thinking life doesn't work like that, not for literal carbon-based lifeforms.

AI agents also introduce a lot of entropy into the system, which, again from a physics perspective, is what you could call hallucinations. Fighting entropy is hard. We've gotten really good at it and built ever more complex systems, but the more complex they are and the more energy they take to maintain, the more subject to entropy they become.

I'm almost done with school and don't use any AI agents, not because I'm on some crusade, but because I'm aware of the value of hard work and of the increasing complexity of my brain through the search. The difference with the classmates who don't agree has become more and more stark. I can visibly watch them fail to problem-solve, or fail to come up with a good idea and instead spit out from a chatbot the absolute average and obvious, because that's all they can put in.

I recognize school is a fake world where the point is to challenge yourself, and why shouldn't we make real work easier? But I just don't think we're ever going to get to easy. Entropy is too large a force, and complexity is too vulnerable, for all of us to be skipping around dreaming up whatever we want without degrading the inputs we feed AI, either because our brains are literally wasting away, or because we're spinning out a ton of entropy and chaos: more and more complex systems require more and more energy, which by definition means they will be more and more entropic and chaotic.

Alberto Romero's avatar

I have an article half-written about the importance of not removing 100% of friction. The optimal degree of friction is NOT zero. I think it's related to what you say. If I hadn't written this post myself, I wouldn't feel so good either!

Decisionful's avatar

Thanks for writing this, really thoughtful. Curious how you think about going deeper from the "whats" into the "whys"?

Alberto Romero's avatar

In this post the whys are left implicit. I assume the why is simply "because that's what you want," without going deeper than that. I think thinking in terms of what vs. how is more important at this moment. Do you agree?

Decisionful's avatar

Yes, I agree, focusing on what vs. how makes it more approachable at this stage. Also I think at some point there will need to be a back-and-forth with the whys to really uncover the deeper motivations.

Dana van der Merwe's avatar

Wise words, O Oracle.

FruitofTheBloom's avatar

This post has inspired me to write something a tad more philosophical (in French, at https://www.philosophes.org/philosophies/avec-lia-que-reste-t-il-de-lhumain-au-travail/).

It's a philosophical exploration of the same question raised by Alberto's post, but through the lens of Aristotle, Arendt, Sartre, and Zhuangzi. As AI makes technical skill cheap and abundant, what becomes truly valuable is practical wisdom: the ability to decide what is worth doing rather than how to do it. In addition, Arendt's vita activa shows that when production costs nothing, human value shifts back toward initiative, judgment, and meaning-making. The ultimate question is what remains irreducibly human in the age of AI? I think what matters most is our own set of values, as a person and as a member of society.

Dana van der Merwe's avatar

WHAT have you just written?!

Alberto Romero's avatar

Haha my notes on AI agents! Do you agree?

__browsing's avatar

Whatever my life's ambition might have been, I'm pretty sure it wasn't irrelevancy.

Alberto Romero's avatar

Yeah, me neither. I don't think most people will become irrelevant. But at times life imposes a change on us, and we have a choice: act now or act later.