I love this! It takes bravery to even read what you wrote 3.5 years ago, and especially to engage with it honestly.
One thing I would recommend adding about AGI is its ability to learn in real time. It should be able to assess the quality of information and add it to its knowledge base if it is found useful, or at least add or modify the dimensions of a token embedding. It should also be able to learn a procedure and create or modify an appropriate agent as needed. In fact, these agents or RNNs should be tokenized and called upon as needed. (Most) humans learn throughout their lifetimes, and so should a true artificial intelligence.
Can AI summarize this for me? Reading this long article gives me a headache.
Why read it then?
Great article, I enjoyed it a lot! It is valuable to sometimes look back and evaluate all these predictions.
Many people treat Sam Altman's or Elon Musk's claims as scientifically supported, but I can never forget that what they want is for their companies to succeed and to lead the future. That's very different, in my opinion, from Hinton or Yann LeCun, for the moment, and I rely much more on them.
I also think that a solid theory of human intelligence is essential to take a big step forward on AGI. Hector Levesque's books on common sense are among my favourites.
> We may never achieve artificial general intelligence
Yes, we will. And that's a reason to quit.