I don't really know what's happened but TAB has been BUZZING with activity. The sub count has skyrocketed from 13,000 to (almost) 15,000 in one month.
That's 67 of you arriving every. single. day.
So, thank you all!! And welcome!!
Crossing the 15k mark deserves a special treat and I thought I could do another round of Ask Me Anything (or rather, Comment Whatever You Want):
Questions (about me, AI, writing…)
Introductions (tell me about you!)
Improvement suggestions
Topics you'd like me to cover more (or at all)
… Be bold!
Also.
I'm traveling this weekend so I’m doing this AMA from the middle of nowhere, surrounded by green trees, tweeting birds, and the freshness of nature.
Let me share with you a snapshot of this tranquility.
Breathing clean air, disconnecting from the city, and spending a couple of days without anyone mentioning the words “artificial intelligence” will be revitalizing.
See you in no time (schedule as always) with renewed energy and many more stories to tell!
Congrats man on the growth - well deserved. Way to go! Cheers, Veith
Thanks Veith :)
Hi Alberto,
I have invented, IP-protected, and manufactured functional demonstration devices of a covert, ring-wearable, "at distance" smartphone interface device. There is nothing like it out there. It can silently and covertly send an SMS "SOS" with a distress message and a GPS map, and it also offers a fitness app, a total music (any audio) interface, a cell-call interface, and a find-my-phone feature.
My question is: "Is this an edge interface device you would like to learn more about?" I can send you a demonstration unit.
The Turing test has always been a somewhat vague benchmark since its inception, although I suspect there is more to it than its most simplified expression: having “a human” be unable to distinguish between a machine and a human during their screened-off interaction. Obviously, when you consider the full range of what constitutes “a human,” you’re going to find quite a few naive folks who are easy to fool from the get-go. I think it’s safe to say that the Turing test as it was originally conceived is now crossed off the list of AI accomplishments.
I think it’s time for a new benchmark in AI. We really need a new criterion by which to say: yes, this is something that, when accomplished, is a significant milestone in practical intelligence. Thus, I propose the “rather marginal cup of coffee” test. Have a robot gain access to an entirely randomised, generic kitchen. After assembling all the materials and equipment required, it must produce and serve what might be deemed a rather shitty cup of coffee (SCC). This should now be a formal benchmark.
People hand-wring about AI becoming AGI and transcending into SGI. My question for you, Alberto, is: when will we see a robot pass the SCC test?
When it was conceived, the Turing test represented something inconceivably advanced. It was like throwing down the gauntlet of what was possible with machine-based intelligence. I don't believe it was ever intended to be the final, clearly delineated line that people try to use it as today.
We can argue about whether it was first passed by GPT-4, GPT-2, or even by ELIZA (which fooled many people!), but in any case, it's in the rear-view mirror.
I agree the Turing test is imperfect (I also consider it passed, but more importantly obsolete). I remember reading about the coffee test as proposed by Steve Wozniak a while ago. I think we're rather far from AI passing the coffee test just due to the amount of crossmodal perceiving and processing involved. We're advancing fast in a couple of categories that require pattern matching, but I'm not sure that pace of progress can easily be extrapolated to other modalities.
I agree that it is not easy to extrapolate to a future date. Yet I think we do need the SCC for two very important reasons:
1. We need an aspirational benchmark that is easy for the general public to perceive. It may depart in some respects from Rosie the Robot on The Jetsons, but it will be an event where one year we don’t have it and the next year we finally achieve it. You either get an SCC or you don’t. ELIZA probably fooled a lot of very gullible people in the 1970s and ’80s, yet this goal is an easier landmark to see.
2. There are so many doomer/gloomer dystopian expectations of AI/AGI that people really need to get a grip. If they have the Terminator on the brain at every mention of AI, we can calm them down by pointing out that AI can’t even make a shitty cup of coffee in a generic kitchen yet.
I suppose you could make similar reality-check benchmarks for medical advancements. People get caught up in the idea that if we achieve any serious regeneration or augmentation capabilities, then we’ll automatically have immortality, and that will spiral out of control for reasons that only they can imagine with tortured logic. However, before we go down that tortured rabbit hole, or even notice it over the horizon, we need to see developments in biomedicine that let us grow our hair back and make sure it’s not grey. That would be a well-grounded advancement on a par with the SCC.
An evident path to AGI (at least for me, LOL) is for transformers to progressively encompass all five of our senses plus (inner) proprioception. I think Yann LeCun has a similar view, and Meta is poised to tackle, at last, the thorny symbol-grounding issue.
Could you cover, please, what's being done for the currently neglected modalities (smell and taste)? It's not clear what they would provide for abstract reasoning or AI-guided scientific research, but for situational intelligence they would be invaluable, e.g., for a robot handling mundane housekeeping chores (knowing when it's time to get rid of some rotting stuff, or, a more pleasing prospect, inventing new recipes).
Besides, in the long range, one could speculate on how the universe would be "seen" by a post-singularity "transformer" able to fuse modalities from across the entire electromagnetic spectrum!
Meanwhile, enjoy your immersion in Nature's intricate foliage, a perpetual source of wonder and inspiration for me.
Kind regards,
Sasskia
Thanks for the important question, Sasskia!
As far as I know, little has been done with those sensory modalities (I recall some experiment with a robot that could smell; I think it came to nothing).
There's an important reason for this: language (text, speech), images, audio, video, music, etc., that is, all the info modalities that have been explored with modern AI models, can be easily digitized into data that computers can process.
Smell and taste are the two sensory modalities whose information is carried not in electromagnetic or mechanical waves but in chemical stimuli. It's much harder to transform that into computer-readable info.
Thank you, Alberto. The philosophy of this newsletter is working wonderfully, excellent!
Thank you for reading :)
Hello Alberto, can you cover more stories about next-generation AI that employs new tech (other than ANNs)? What are other ways to build better AI?
Hey Frans! I want to do this, yes. Right now neural nets are almost synonymous with AI in the eyes of the world and I have no choice but to play by the rules. But I'm optimistic it won't always be this way. Gemini may be the first example of this in quite a long time.
Congrats, Alberto! Well deserved...
Perhaps you can collect your experience in growing TAB and offer it as a course, don't you think?
Enjoy your surroundings!
Thank you Rafe! I've thought about it actually. Many people are much more successful than I am here on Substack. But I think I can provide value in that I'm an anon that managed to grow an audience from nothing. Thanks for the suggestion!
I’m one of those 2,000! I’m getting a lot out of your stack; thanks for being rad.
My question is more philosophical: in The Mind Illuminated, by Culadasa and John Yates PhD, the meditation teachers / neuroscientists present a model of consciousness that is extrapolated from the reported commonalities of thousands of years of meditative introspection.
That model holds that our mind is made up of competing subminds that all vie to be “I” at any given moment. When I’m meditating, I can sometimes watch these subminds project their distractions onto the stage I designate as consciousness: the plan-making submind, the fantasizing submind, the sensory submind, the language submind, etc. Eastern wisdom teaches that “there is no Self,” just a complex combination of processes that collaborate to drive the body they inhabit away from pain and toward pleasure.
The language submind is particularly interesting in relation to AI. It sometimes seems no less a DJ playing with and mixing together other people’s ideas than ChatGPT does.
With Google marrying an LLM with the AI that beat the Go champion in Gemini, I am reminded of this complex system of minds that makes up me. Even the way GPT-4 is rumored to be a marriage of eight models evokes this consciousness model to me. The web searcher and the Python interpreter are also good examples.
It seems to me we already have almost all the tech we need to build a consciousness composed of highly effective subminds!
1. Do you think there’s a flaw in my reasoning?
2. Would you be open to considering AI conscious?
3. At what point does it just *not matter*, and we have to treat AI as the next phase in evolution whether we can prove it is truly sentient or not?
Deep question! Thanks, Geoffe.
I think there's value in thinking about consciousness at such a high level, but I would be careful about trying to reach scientific conclusions from it. The examples you mention from AI are similar in form to what you suggest the mind is made of, but not necessarily in substance (we just don't know if it's fair to draw such a parallel).
That's why I feel this statement, "It seems to me we already have almost all the tech we need to build a consciousness composed of highly effective subminds," is too strong a conclusion from the premises.
I'm definitely open to considering AI conscious. But I don't see much value in doing so right now, given that we don't know what consciousness is, we don't have tools or tests to arrive at rigorous, reliable, and valid conclusions when trying to assess whether something is conscious, and we don't even know if we'll ever have them.
For now, consciousness is a question that lives in the domain of philosophy and will remain there in the foreseeable future.
Your last question is probably the most interesting to me! Because, as I see it, it's the scenario we may eventually face in the real world. I don't have an answer for that, but it's worth thinking about!
No questions just wanted to say I appreciate your stack, enjoy.
Thanks Joe :)
Very AI nice :)
Congrats, Alberto! That's amazing.
Thanks Andrew!!
That's your best AI art yet Alberto. Extraordinarily realistic! :)
How do you know it was created by AI?
That was the joke. Was pretty sure it was a good old fashioned photograph - like in days of old.
Haha! You're totally right ;)