Oct 18, 2023 · Liked by Alberto Romero

Very interesting, but you, like Marc, make the same bid that it is undeniable that technology is beneficial. There is no real introspection about whether there have been cultures that had a higher quality of life that simply failed because another group, likely with tech, destroyed them.

Tech is a self-fulfilling condition. If you don't have tech, those with tech will delete you one way or another. As such, you can make the claim that tech makes life better simply by pointing to ample instances where lacking tech means being conquered or killed.

We will continue to pursue tech, and it will likely kill us, not because tech is good or bad, but because those who want dominance will use tech heedless of the downstream consequences.

author

Well, my whole point is to debate the idea that technology is always a net positive for humanity. I disagree with that. But your example is actually a good point: no tech also means that nature kills you, and that's, deep down, Andreessen's whole point. I agree with that thoroughly.

Oct 18, 2023 · Liked by Alberto Romero

As a clarification, I do not make the claim that low tech adoption means nature kills you.

My point is that cultures with low/slow tech adoption that may have a superior means of life get killed by sociopathic cultures with high tech adoption.

It is not unreasonable to make a broad claim that tech almost always leads to domination and death, and therefore the net benefits of tech (A/C, refrigeration, antibiotics) have had an unacceptable cost through genocide and war.

Take as a case in point your contention that writing is an absolute benefit. It is not hard to imagine that writing is a negative for society by accelerating the dispersal of technology and dramatically reducing societal diversity. Not saying writing is bad, but that it is not supportable to say that writing is a priori good.

I do recognize that there are certain tech advances that are absolutely quality of life improvements, notably the advances in the support of women during childbirth.

I think the fundamental issue with your and Marc's arguments is something that philosophers have been debating for many thousands of years, which is "what makes a good life." My issue is that there is an unquestioning assumption that how we live now is a good life, and therefore continuing on this path would inevitably lead to a better version of a good life. This has been debated by far smarter folks than Marc, you and me.

If you are going to make broad claims about "positive for humanity" then there is some responsibility to define your terms. What does positive for humanity mean? Freedom? Security? Stuff? Longevity? And for whom - Capital owners, the educated, the "useless" poor?

Marc's essay is just "smartdumb." He is obviously a very smart guy, but also not really willing or capable to dig under his assumptions, so he uses his intellect to support his unexamined positions and makes wildly unsupported assertions. Without introspection you end up with sophisticated nonsense.

For calibration, I am a tech CEO.

author

I find it very interesting for a tech CEO to say this about technology! I agree with a few things you said: first, that how to live a good life is a matter of personal preference and should be defined somehow. I'd say that technology has improved our quality of life and expected lifespan over the centuries, which, if nothing else, should be universally accepted measures of what a good life requires.

My article debates Marc's precisely on this point, in the sense that not all technology forwards these goals. I explicitly use a few well-known examples (atomic weapons and Facebook, for instance) to illustrate my points. So we agree on that. Where I think we disagree is that you seem to assume that tech is net negative by default unless it's put to very specific uses or steered carefully, and therefore that low tech is better. I don't think that's the case. Laying out a strong argument that writing is more bad than good is hard; it's too easy to argue the contrary. I agree, however, that wide gaps in technological advancement between societies are a primary source of suffering for humans. That's a political and moral question that neither Marc nor I have touched upon, though.


Great!

I don't assume tech is negative, but I think much if not most of it is. To reinforce your point: Marc says that the Internet has reduced loneliness. That is preposterous on its face. All measures of anxiety, depression, and anomie are much higher than they were in the 60s through the 80s.

If we erased the Internet, we would arguably be better off. Sure, we would lose some things, but in the aggregate, we would have a chance at preserving our democracy.

I would also challenge the idea that the targets are longevity and some vaguely determined "quality of life." Most serious thinkers through history would not claim either as clear, suitable or adequate. Quality of Life doesn't really even define anything. I would claim that most Americans at least have a very poor quality of life, and are only ahead in the count of "stuff."

If we are talking about the expansion of the rights and personhood of women, minorities, and the disabled, I would say that this could be argued as a quality of life improvement. Yet I think technology did not have much to do with that.

What if you just posted what you or Marc mean by quality of life? Climate control? The ability to go to Europe? What does it actually mean, and perhaps what does it miss, such as free time, dignity, lack of fear, and contact with nature?

author

We can define quality of life in different ways, with different variables. It doesn't really matter that much which one you take. Let's take for instance self-reported happiness, which could be a good proxy: https://ourworldindata.org/happiness-and-life-satisfaction#happiness-over-time

It doesn't matter which data you look at: technology has so far been a net positive in many, many ways, like child mortality (https://ourworldindata.org/child-mortality), poverty reduction (https://ourworldindata.org/poverty), etc. About the more recent digital innovations, I'm not that sure, though.


And have you read "The Dawn of Everything?"


Hi Alberto, thanks for the thoughtful post! You've got a new subscriber in me after this one!

I appreciated the way you illustrate the tradeoffs of technology. The story about Socrates and writing is a good example. A broader perspective on technology and how it shapes us is really valuable, especially today.

One wonder: how much can we buy into the techno-optimist framework without consciously or subconsciously affirming a particular worldview? It seems to me that the danger of a techno-optimist worldview, and one of the reasons I struggle with that label myself, is that it makes certain assumptions about what the problem is. With technology in hand (e.g., AI, pencil, keyboard, hammer, etc.), we are shaped to think that, at their root, the problems of the world can or should be solved by technology. In this framing, techno-optimism is a worldview that makes a certain claim about the root of what is wrong with the world and what the life we are looking for is. I wrote a piece a while back that might be relevant in this vein: https://joshbrake.substack.com/p/we-shape-our-tools-then-they-shape

My own thinking on this has been deeply shaped by Marshall McLuhan and Neil Postman and most recently by Dr. Ursula Franklin, a Canadian physicist with really insightful perspectives on the influence of technology. Although she was writing over 30 years ago, I think she has some very thoughtful perspectives on the techno-optimist perspective.

Thanks for broadening the conversation and sharing your own perspective. I've got a post that's brewing where I hope to unpack more of these ideas on my own Substack and will try to share that back when I publish it. Would love to hear your feedback and continue the conversation!

author

Hey Josh, much appreciated! Let me answer by quoting you: "... then we are shaped to think that at their root, the problems of the world can or should be solved by technology." I don't think that's true. You can be optimistic about tech in that it is a force for good if we make the right decisions about it (that's the hard part) and still acknowledge that some problems (social, cultural, and moral ones) aren't solvable *just* with technology. That's, I think, where Marc and I mainly disagree. I don't think it is the right approach to *blindly* follow technology as our north star.

To summarize, I think we can reconcile techno-optimism with the multifaceted nature of humanity and the problems we face collectively. Technology is just another tool in our toolkit (a powerful one, I must say) but not the panacea.


Hi Alberto, thanks for your reply. I think we are pretty aligned on this. I also agree technology is a powerful tool in our toolkit to solve problems and yet isn't the end-all solution to all our problems.

What I'm still thinking more about is to what degree the technologies in our toolbox shape the way we define the problem and therefore the solution. This is a classic pitfall in design processes. It's so easy to skip the steps of empathy and problem definition and just jump in and try to solve a problem without really understanding what's at its core. It's a particular challenge in our tech-centric world today.

Thanks again for your conversation and the post!

author

Definitely agree with that. What is possible is in good part defined by the technology at our disposal. The question is, could we have gone any other way? In some sense tech is just a downstream application of science, which reflects the laws that govern the universe. It makes sense, to some degree, to assume that any different intelligent species would have gone a similar path at least to some point (we depend on the quantity of the materials at our disposal on Earth, the particular constraints of our bodies and minds, i.e., our biological endowment, our cognitive limits, etc.). If that's the case then perhaps there wasn't much variability of options at all and if our civilization collapsed and came back again we'd do pretty much the same thing. Who knows!


PS, here are my reflections from last week on the Andreessen essay. I’ve got another post queued up for tomorrow where I dig a bit more into the philosophy of technology and what questions it suggests we should ask of our tech.

https://joshbrake.substack.com/p/what-is-the-life-were-looking-for


Love this thread, thanks for continuing to pull on it with me. I think you make a good point about "could we have gone any other way?".

There is something fundamental about humans as creators, and in particular, creators of technology. Even if, as you say, our entire civilization collapses, it's hard to imagine that we wouldn't eventually end up going down a similar route, because it is somehow hard-wired into what it means to be human.

Of course the flip side of this is that the things that we create are always marked, at least in some way, by the flaws and blindspots of their creators.


Thoughts on this charming paragraph?

“David Friedman points out that people only do things for other people for three reasons – love, money, or force. Love doesn’t scale, so the economy can only run on money or force. The force experiment has been run and found wanting. Let’s stick with money.”

Love doesn’t scale.

Let’s stick with money.

author

Unsurprising coming from Andreessen (a venture capitalist) but probably very unrelatable for most people. I'm not sure he cares about being relatable, though...


Alberto, you and I are so aligned on this topic that I felt like I was reading something I would have written. (If only I had the way with words you do - very well done sir!)

I think I'd just add that these collective decisions we make require much better collective sensemaking. And we're woefully bad at that right now. Just look at how politically polarized Americans are. We can't even agree on basic facts.

The incentive structures of capitalism create rivalrous dynamics which steer people towards deception for differential advantage. (SIDEBAR: My critiques of capitalism do not mean that I propose some other bad system we've tried in the past.) These dynamics lead to disinformation polluting the information space. And it is amplified by social media and the bad incentives that led to Facebook optimizing their algorithm for engagement rather than human wellbeing (as you pointed out).

Basically, I think that in order to create a world worth inheriting for our kids, we have to get our collective heads screwed on straight. I mean, it's like we're all on this rocket ship hurtling through space and time, and the people steering the ship are so far removed from the average passenger that...


Hi Alberto. Well, you knew this was coming.... :-)

1) Do you think human beings are gods, creatures of unlimited ability?

2) Do you think knowledge/power development is limited? Will that process go so far, and then just stop?

3) Do you get that knowledge development is feeding back upon itself, leading to an accelerating rate of further development? And do you understand that human maturity and judgement is more or less roughly where it's been for thousands of years?

If you were to chart knowledge development against maturity development on a graph, you'd see the two lines diverging from one another at an accelerating pace. The growing gap between those two lines represents a shrinking of the room for error.

You've read all this before, but I'm afraid you still don't really get it. You are in good company in not getting it, though; a great deal of good company.

It may be helpful to think of knowledge as an element of nature, like sunshine, wind, water, food etc. Having some of these things is not only good, but necessary. But having more without limit is fatal.

Techno-optimists are very bright people, who are stuck in the 19th century.

author

Hey Phil, I knew it yeah.

1) We are not.

2) Our logistical prowess puts a limit on it. Our cognitive abilities put a limit on it. Physical laws put a hard upper limit on it.

3) Yes, human maturity is pretty much constant because it's inevitably linked to our biological endowment, which hasn't changed in millennia. The accelerating rate of knowledge development also has limits. We can create, absorb, process, and apply only so much knowledge. More papers don't mean we're advancing knowledge-wise. Also, the application of scientific advancements and technological innovation is bottlenecked by our ability to materialize it into society. That's not easy or trivial and definitely not immediate. See my article on Tyler Cowen's arguments for the value of AI for economic growth.

I do get it, but I think I have a more realistic view of how knowledge really advances and how it feeds back upon itself. I am also definitely more optimistic about our ability to control it. If we have nuclear weapons, it is not because of the accelerating pace of knowledge but because we don't get along easily.

As Bertrand Russell said:

"Before the end of the present century, unless something quite unforeseeable occurs, one of three possibilities will have been realized. These three are: —

1. The end of human life, perhaps of all life on our planet.

2. A reversion to barbarism after a catastrophic diminution of the population of the globe.

3. A unification of the world under a single government, possessing a monopoly of all the major weapons of war."

None of those happened. Did something unforeseeable occur? Not really.

So, why didn't we destroy humanity? Perhaps we have a higher degree of maturity than you ascribe to us. Russell was an optimist (I don't think he would have approved of Andreessen's manifesto, though) and still got it wrong, even though he had every reason not to be very optimistic. He underestimated our ability to control the outcome of greater knowledge, in this case materialized as nuclear weapons.

All this is a preamble to say that I agree with you on principle, we may get too far in our search for more knowledge; it may get out of our control. I just wanted to reframe the story to make it more optimistic, because I think we can be. And if we destroy ourselves in the process, well, at least the rest of the planet will thank us for that.


Hi Alberto,

Didn't we learn more in the 20th century than the previous five centuries put together? Isn't that evidence of an accelerating knowledge development process?

I think we're both optimistic, but perhaps over very different time periods. I see a continuation of the long pattern of civilizations rising and falling, with each cycle typically being somewhat better than the last. It would take an astronomical event to wipe us out, so one way or another we'll keep learning.

I'm also very optimistic about the ultimate big picture of life and death, but that's another subject, too large for this space.

But for this particular civilization what I see is a man with a big loaded gun in his mouth, who is too bored by the gun to bother discussing it most of the time. Not exactly evidence of sanity...

Have a good one sir!

author

Perhaps too much intelligence implies an endless search for more knowledge in an attempt to fool death only to be destroyed by the sheer amount of knowledge obtained. The other species just play the game as it was intended.
