Aug 13, 2022 · Liked by Alberto Romero

>> Do you think you could, as an individual, get around it and develop the skills needed in an AI-driven industry?

I am not sure that "skills" is the right word here. There is a *huge* category of jobs that will never -- by "never" I mean for the rest of this century -- be replaced by AI no matter how good -- how "skillful" -- the technology gets. Those are jobs that have as an essential element the requirement that their practitioner be a human being. (In a sense the job *is* being a human.) AI will never replace rabbis or anyone else with a spiritual function. (I admit the existence of prayer wheels does mystify me a bit.) It will replace a few therapists but not most, and certainly nobody practicing alternative therapies like acupuncture, homeopathy, or Reiki. If you eat in an expensive restaurant your waiter will be a human (most restaurants will be automated, but not the expensive ones). There are a lot of jobs where the parties buying the service like to feel they are interacting with someone with the same history -- someone who went to high school and worried about their popularity and got married and had kids and got divorced -- and are willing to pay for that qualification. In the future, all jobs will require that those filling them be humans. And there will be plenty of them.


Hi Fred, I agree that some jobs are inherently made for humans, like those you mention. But what about all the others (the vast majority)?

Aug 19, 2022 · edited Aug 19, 2022

It's an open question. General AI will give us some money to spend. How will we spend it?

Suppose restaurants split into two sectors: one that is fully automated and one with human waiters. When you just want a quick meal you will go to the first. Which will you go to when you want that meal to be something special? How often will that be? Suppose you are running a street business and the culture thinks that businesses like yours need to have a greeter. Will you hire a human or a robot? If having a human being as a greeter makes your business "real" in some sense approved by the culture, if it is a positive sign, you will hire one. Maybe more than one. If you need someone to walk your dog you might well hire a human being, on the ground that you think your dog prefers humans. Like I say, an open question. But I cannot overlook the fact that lots and lots of human activities have been automated over the last few centuries and so far the net number of jobs created has been overwhelmingly positive.

Technology gives us money to spend, and you can't do much with money other than give it to another human. General AI is no different from the wheel.


I'm not sure I am qualified to answer, nor that I'm answering in the correct fashion, but here's my two cents:

"Which will you go to when you want that meal to be something special?"

How about you cook at home? But that's facetious, because I omitted my opinion that a kitchen in every home is immensely wasteful - in so many ways - and also oh, so Western!

I'm not sure how Alberto feels about the present climate situation, but I am of the opinion that doomsday is in the past and the entire planet is decaying into some unknowable, but pretty near, apocalyptic future.

I also happen to think that the statement "Technology gives us money to spend," unless better qualified, requires some serious justification. In fact, I'm not willing to accept any of it as fact. Money may buy us technology; technology may give us products to spend money on. But whereas technology is neutral, money ain't.


Gracious. I cannot guess what you might be thinking. To me, the statement "Technology gives us money to spend" is as obvious as statements get. It might not be the sole rationale for adopting any given item of technology -- my processor doesn't make me money -- but it is clearly pretty basic, and especially relevant given the themes of this site. When you try to think about the long-term effect of AI on employment, the two propositions "the only thing you can do with money is give it to another human being" and "almost the only reason ever to automate anything is because it makes you money" are pretty central. They tell you that there is no deep reason why we can't handle the consequences of this technology.

We still might fuck it all up, of course.


"I cannot guess what you might be thinking."

Clearly. So when the Dutch masters explored new paint methods to produce oil paintings - I may have this wrong, historically, but the theme is valid - they were doing it so they could enrich themselves and not to produce paintings that better expressed their perception of the subject matter?

And when some friends and I dabbled with UUCP-based email at the end of the 1980s and promoted it across South Africa, it was so we could spend more money?

Technology has clearly been monetised and both end and means distorted as a result.

I personally believe that technology can be liberating - I mentioned that it is neutral, but it always serves the objective of enhancing human "performance". Making (more) money is only necessary if you don't have enough. Creating the illusion that one's income does not suffice is hardly a worthy goal to aim technology at, although it seems that it is ALL that technology is being bent towards.

It is unfortunate that this is taking us away from the questions Alberto posed. In my opinion, AI extends beyond taking over from humans the many chores we are saddled with. I recall the introduction of domestic appliances to reduce domestic chores, and I was young enough then to welcome them to my lower-class home and neighbourhood. Today, I can see how they did not carry a warning label saying that the free time granted to housewives would be spent labouring to enrich the employer classes.

Likewise, rather than improve diagnosis and cures, sophisticated pattern recognition tools are used to accelerate the production of poisonous and/or addictive drugs and alternatives to more permanent medications such as vaccines. And instead of providing a more sophisticated network of public transport restricted to one-way traffic like railway lines, we create self-driving vehicles that perpetuate the single greatest cause of accidents: the two directions of traffic that characterise the universal road system.

Among all that, there will be AI engineers and architects who produce original designs and implementations, together with the considerable number of legendary inventions that have populated the popular imagination and continue to feed the myths at the core of conspiracy theories.

I guess you'll be with me on the front line to ensure that such anti-establishment designs continue to "give us money to spend" rather than be suppressed?


My interest -- so far as this group is concerned -- is focused on the widespread projection that artificial general intelligence is going to disemploy lots of people; in some cases, potentially everybody.

If you were given even odds on the proposition, which way would you bet? While anything might happen, I would bet against it, for the two reasons I gave. Reason One: AGI is going to release lots of resources -- as you point out, that is far from the only thing it will do, but it will do that. For instance, when self-driving trucks finally arrive, maybe by the end of this decade, transportation costs will fall considerably. The next question is: what will the people who were paying those higher costs do with their savings? Reason Two: obviously I can't say in any detail, but the only thing you or anyone can do with money is give it to another human being (if you don't count burning it in your back yard).

So: will these two effects balance out? I can't see any reason why they couldn't, which is why I would bet the way I would. After all, over the last several millennia, whatever else technology has made possible, it has certainly kept employment pretty high despite considerable increases in population. I don't see any deep reason why artificial general intelligence should behave any differently.

Aug 12, 2022 · Liked by Alberto Romero

UBI was the dream of 1968's "Autunno caldo" (the "hot autumn") in Europe. Don't discount it, but don't hand it to the profiteers; they'll find ways to use it to enslave you. The South African experience of the past thirty years shows exactly what power politicians can wield over those who were "liberated".

In my opinion, AI is the only discipline that can create the financial stratum on which UBI can be based. But the condition is that greed, which is bent on increasing inequality, is first recognised as destructive and that, like other emotions, humans learn to control it.

AI can, of course, also be designed to be greedy. And it can also learn greed from human examples. Try to imagine that!


My day job is hardware design in electrical engineering. AI could threaten my job, but such a turn of events is beyond the horizon -- perhaps beyond a 'singularity'. By that point, so much else will have shifted that the world will already be unrecognizable.

AI is just starting to appear in areas such as PCB design. Probably "AI" is already in use for IC design - just guessing. UBI is a non sequitur.

"Life after work" was earlier proposed for a then-future mature industrial economy. It has been possible for decades. Literally. But somehow we devolved into a ... "neo feudal" age? A few have struck it rich in this generation. But a more common question in the Information Age is how to un-tether oneself at all. Many don't even have their own life after 5:00pm, which used to be a standard shift only one generation ago. One may have 'free' time at any time of the day, yet no completely free time ever.


"But a more common question in the Information Age is how to un-tether oneself at all."

That's a good question. And also: how can we find solutions at both the individual and the collective level?


This is indeed a good question, as is the subsequent: is such un-tethering desirable or even possible? The scope of education, for example, is much greater today than it was when I attended school and providing learners with calculators so they don't need to learn multiplication tables is hardly going to make up the difference. Is AI the mechanism that supplies missing knowledge (and understands poorly formulated questions) when no experts have the breadth of knowledge demanded?

And is un-tethering not going to create more social classes with boundaries beyond one's ability to cross? My personal ideology allows for social classes to serve a useful purpose, but on condition that the ring-fencing not be impermeable.

Lastly, something that has been bugging me for quite a while. As I see it, humans as a species (biological, but nevertheless mechanical) are reputed to be on the way to being overtaken by the AI "products" of new technologies. There is some logic to that: it seems quite possible for a greater capacity to "reason" than that of the most "intelligent" of humans to be embedded in some physical organism or machine. But for "overtaking" to happen, it requires a desire or need to add -- I admit I don't have the philosophical background to state this authoritatively -- the emotional or moral concept of "better" into the equation. Isn't that something AI architects are not compelled to introduce? In other words, could we draw a clear line between competing and cooperating that neither humans nor automata would feel a need to cross?

In that light, the social media and related technologies' "algorithms" seem to me to be dangerously heading in the exact opposite direction.

Again, I am an amateur and not a very knowledgeable one. I'm sure that a moderate knowledge of Asimov's "I, Robot" and "The Rest of the Robots" is inadequate background.


I am an academic. I see great potential for AI to replace standard teaching -- in fact, I could see it replacing teaching almost entirely with the advent of LLMs with APIs. I learned my first programming language, LISP, from an experimental CMU interactive system and loved the self-pacing aspect. I enjoy teaching and see a role for teachers in inspiration and motivation, as well as in guiding projects, but most of the teaching can be automated, especially at the undergraduate (third) level -- not postgrad, and PhD in particular.

I am also a researcher. I have always had a fascination with automation, starting with automated theorem proving and later research on program synthesis. Needless to say, the GPT developments on the coding side gave me pause for thought. I believe that research will ultimately be greatly aided by AI, but not quickly replaced. Neural networks definitely capture one aspect of research -- creating models of a complex reality -- even though at heart this is numerical analysis, i.e., approximation via functions, and so still limited in capturing intelligence.
