Just so you don’t forget that TAB exists during the holidays, here’s the open thread I promised: I want you to exercise your ability to influence the direction of this newsletter content-wise for 2023.
Comment below what you’d like to see more of (or less of) on TAB. New topics you’d like me to explore, deeper analyses of older ideas, things I’ve touched on too much, etc. Any suggestion is welcome—I promise to read everything.
(I know some of you have suggested topics already. I keep those in mind but feel free to comment again here so I have everything in one place!)
Alberto, I would enjoy seeing more discussion about the immediate, practical application of AI. While debate rages over a lot of philosophical ideas (e.g. AGI, alignment), there are immediate benefits for businesses and individuals. Some of your focus has been on the disruption of particular professions. I’d be curious to get your take on which industry sectors and established players are ripe for disruption. Thanks and have a happy new year.
Thank you Dan, this is important!
I plan to do more research on this (and similar topics) to avoid neglecting the useful/practical/immediate side in favor of the exclusively-interesting side. I want to find a healthy balance between actionable and analytical articles.
Here’s an interesting article speculating on Google’s response to the “LLM wars.” It would be interesting to hear your thoughts on how Google, MSFT/OpenAI, and StabilityAI move forward with different approaches to leveraging LLMs for search, chat, and other advanced capabilities. https://9to5google.com/2022/12/23/google-ai-chatgpt/
For me the most useful thing is always knowing the SOTA in usable fine-tuning and domain adaptation of LLMs (via APIs or open-source models), so I can apply them to real-world use cases and build my own models, the way DreamBooth does for Stable Diffusion.
The other thing is LLMs applied to robotics and other fields, because it's a much simpler framework than previous ones and will revolutionize everything. I want to understand how to apply LLMs to non-obvious problems, the way NVIDIA and Google did with robots by encoding robot control as text and feeding in multimodal inputs of instructions and videos. What else can we apply them to, especially as training or fine-tuning becomes cheaper?
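As a concrete (and deliberately simplified) illustration of what domain adaptation looks like in practice, here is a sketch of preparing training data in the prompt/completion JSONL format that hosted fine-tuning endpoints commonly accept. The records below are invented for illustration:

```python
import json

# Hypothetical domain examples (support tickets -> agent replies) used to
# adapt a base LLM to a narrow domain via supervised fine-tuning.
examples = [
    {"prompt": "Customer: My invoice total looks wrong.\nAgent:",
     "completion": " Let me re-check the line items against your plan."},
    {"prompt": "Customer: How do I reset my API key?\nAgent:",
     "completion": " You can rotate it from the dashboard's settings page."},
]

def to_jsonl(records):
    """Serialize prompt/completion pairs into JSONL, one record per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

training_data = to_jsonl(examples)
print(len(training_data.splitlines()))  # number of training records
```

From here, the real work is uploading the file to whichever fine-tuning service you use and kicking off a job; the data format above is the common denominator.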
Thanks Tiago. We'll see much more on the latter next year, I'm sure. And on the former, as I responded to Dan, I agree: I'll try to give more space to actionable/useful articles.
The weightiest problem facing generative AI is its tolerance, if not outright creation, of errors. If that could be fixed, it would become possible, just to begin, for customer service and technical support, currently two huge sources of frustration if not misery for us all, to make real contributions to our lives. But how do we do that? How do we build a ChatGPT that can fact-check? Until that happens this will remain a marginal technology. Still important, but marginal. And the answer is obscure, at least to me. I would love to see a discussion of how to attack this problem.
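One line of attack (a sketch, not the solution): retrieval-grounded verification, where the system decomposes an answer into claims and checks each one against a trusted source, abstaining when it has no evidence. Here is a toy illustration, with a hard-coded dictionary standing in for what would really be a search index or retrieval database:

```python
# Toy "fact store": in a real system this would be a retrieval index;
# here it is a hard-coded dict, purely for illustration.
FACTS = {
    "capital of france": "paris",
    "boiling point of water at sea level": "100 c",
}

def check_claim(key, claimed_value):
    """Return True if the claim matches the store, False if it
    contradicts it, and None if there is no evidence either way."""
    evidence = FACTS.get(key.lower())
    if evidence is None:
        return None  # unverifiable -> the model should hedge or abstain
    return evidence == claimed_value.lower()

print(check_claim("Capital of France", "Paris"))      # True
print(check_claim("Capital of France", "Lyon"))       # False
print(check_claim("Population of Atlantis", "9000"))  # None
```

The hard parts, of course, are the two steps this sketch waves away: extracting checkable claims from free-form model output, and building a source trustworthy enough to check them against.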
Super important Fred! I'd love to see AI people talk about this more. It's a topic I'm very concerned with, as you know!
Something I would like to see is a revisit of OpenAI's products. I'm using "products" loosely here since they are not for sale per se. People seem confused by all the terms: GPT-3 versus ChatGPT, InstructGPT, the OpenAI API... A block diagram of these, with a bit of history, would be nice to see visually. Then maybe a next-level diagram of the basic components in each, say GPT-3 versus ChatGPT. You get the idea.
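Until such a diagram exists, here is a rough sketch of the lineage as OpenAI has publicly described it (simplified; the OpenAI API is the access layer that sits in front of all of these):

```python
# Simplified lineage of OpenAI's GPT-3 family, per OpenAI's announcements.
lineage = {
    "GPT-3":       {"year": 2020, "built_on": None,
                    "note": "base model, few-shot prompting"},
    "InstructGPT": {"year": 2022, "built_on": "GPT-3",
                    "note": "fine-tuned to follow instructions with "
                            "human feedback (RLHF)"},
    "ChatGPT":     {"year": 2022, "built_on": "GPT-3.5 series",
                    "note": "sibling of InstructGPT, tuned for dialogue"},
}

def ancestry(name):
    """Walk built_on links (within the table) to show derivation."""
    chain = [name]
    while lineage.get(chain[-1], {}).get("built_on") in lineage:
        chain.append(lineage[chain[-1]]["built_on"])
    return " -> ".join(chain)

print(ancestry("InstructGPT"))  # InstructGPT -> GPT-3
```

Not a substitute for a proper diagram, but it captures the key point: ChatGPT and InstructGPT are fine-tuned descendants of the GPT-3 line, not separate models built from scratch.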
Thanks Tom, interesting idea. I think many people could benefit from this. More so when GPT-4 comes out and more and more people begin to notice OpenAI's research developments.
Alberto, I would like to add a late entry into the mix. You may be familiar with the Pessimist’s Archive, a popular Twitter account that chronicles past hysteria over new technologies and social trends. Viewed through this lens, those concerns appear silly and comical. One of the major issues for lay folks regarding AI is related: are we dealing with a “this time things are different” phenomenon? Is AI simply another step in technological progress that advances the human condition (albeit with some downsides)? Or is it a radically different advancement that poses unique threats and greater potential downsides? I would enjoy a post where you assess this and provide your perspective.
Very interesting topic. I've given it a lot of thought and depending on how I frame it my answer changes. It'd be great to compile everything into a single essay. Thanks for the suggestion!
Hi Alberto, I’m intrigued by the different narratives I’ve seen in the past months around autonomous driving, would you be able to provide some clarity here?
On one hand there have been several companies that recently shuttered or pared down efforts to develop self-driving technology, driving a downward narrative. But on the other hand Waymo and Cruise are expanding their fully driverless services in some busy American cities (albeit slowly), with Waymo announcing 24/7 service in SF as of two days ago.
Ignoring marketing speak for a second, and some of the early snafus: judging by recent articles and reviews online, these services really do seem to be working well. Am I missing something, and are they not what they seem? Or is something else causing these conflicting narratives?
Thank you and a happy new year!
Hi Eric, I plan to come back to autonomous driving exactly because of this. The discourse has changed a bit in the past months. A bit of clarification would be great. Thanks for the suggestion!
HNY!
Hi. I'm not sure if this is a suitable topic here, but I'm interested in understanding more about hardware/software optimization with respect to AI. For example, the main hardware in use now is the GPU. But if AI models are going to be relatively stable, maybe the industry should be moving toward ASICs? If radically new AI models are going to appear frequently, maybe we should be looking at FPGAs. And if we can find ways to efficiently shrink AI models while maintaining accuracy, maybe we should just use CPUs with AI accelerators.
On the other hand, which programming software will be dominant, and how does it impact AI development? E.g., will it still be mainly CUDA, or will the industry be more receptive to OpenML, OpenVINO, etc.? And how does all this interplay with hardware and the choice of AI model at second or third order?
Currently, the size of AI models has made them very expensive to train, and will require significantly more memory bandwidth as well. So I guess these hardware and software considerations might become important.
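To put rough numbers on that memory pressure, here is a back-of-the-envelope calculation. The assumptions are illustrative: fp16 weights for inference, and the common ~16 bytes/parameter rule of thumb for mixed-precision Adam training state (fp16 weights and gradients plus fp32 master weights and two optimizer moments):

```python
# Back-of-the-envelope memory footprint for a 175B-parameter model
# (roughly GPT-3 scale; numbers are illustrative, not measured).
params = 175e9

bytes_per_param_fp16 = 2
inference_gb = params * bytes_per_param_fp16 / 1e9  # weights alone

# Mixed-precision Adam training typically needs ~16 bytes/param:
# fp16 weights (2) + fp16 grads (2) + fp32 weights (4)
# + fp32 momentum (4) + fp32 variance (4).
bytes_per_param_training = 16
training_gb = params * bytes_per_param_training / 1e9

print(f"inference weights: ~{inference_gb:.0f} GB")
print(f"training state:    ~{training_gb:.0f} GB")
```

Since no single accelerator holds terabytes of fast memory, numbers like these are exactly why model parallelism, memory-sharding optimizers, and the hardware questions above matter so much.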
Thanks
This topic is very interesting but a bit far from my expertise. I'd love to know more though, so I'll probably spend some time researching this!
What big problems could AI solve for us that we haven't been able to solve ourselves, and why haven't we been able to solve them? For example, will AI expand our understanding of physics?
Very interesting topic Charlotte! There's *a lot* to say about this.
I'm also curious to understand InstructGPT. What is the status and direction in 2023 for crafting your own knowledge-specific additions?
Or what about the pros and cons of interfacing these tools with other systems like IBM Watson or Wolfram Alpha?
When will ChatGPT understand the current date and time, and when will it become more real-time?
One further random note that I haven't seen mentioned. Maybe this is just a pet peeve of mine, but as useful as Google can be, one of the most annoying aspects of trying to find an answer there is dealing with what I call "keyhole content" (my name for it; I'm writing a post on this). I'm not against advertising, I get the purpose. But honestly, don't you feel content producers have gone overboard slapping ads into content, with floaters, popups, and various widgets constantly grabbing your attention while you're trying to read, and comprehend, 800 words of content? Maybe it's just me.
I would be interested in explorations of the brittleness of AI: for instance, the way AlphaGo was recently beaten by a relatively simple attack. Also (already raised) approaches to addressing wrong answers. I am very interested in automating software development, and the automated creation of efficient code in particular. This is more niche but ties in with improving current approaches. I would love to read more on this, and also on approaches other than deep learning, or approaches that use deep learning in novel ways. Possibly intelligence can be improved by building hierarchical systems of networks incorporating some compositionality. I am not so much interested in speculation on these topics, but would avidly read discussions of genuine progress on any of these fronts.
At what stage will we see AI in the C-suite? When and how will AI assist with or replace governance decisions? How will AI improve executive decision-making, and who is working on this?
Also, policy development: can/will AI help us make better policy decisions, or will AI be prevented by vested interests from influencing evidence-based policymaking?
My current prediction and open thoughts:
Now-2030: AI will bring significant changes and disruptions to society. Job automation will unleash abundance.
- What jobs will get automated/demonetized?
- What will be possible for the masses that currently isn't?
- What should education teach?
- What new industries will this create?
2030-2040: As AI gets more powerful, the value of the non-connected human brain will diminish. It will be necessary for individuals to merge with the technology through neural implants in order to keep making a meaningful contribution to society.
2040+: Humans start to live like retired people/domesticated cats.
- What is the meaning of life if it is not contributing to society?