32 Comments
Mon Ski:

Is this article some sort of satire, mocking journalists who express heavy opinions against the grain out of an appetite for a spotlight on the oh-so-multifaceted thought and analysis going on in their minds? I hope so. If you have any sort of grasp of neuroscience and/or a fundamental understanding of the principles of modern artificial intelligence and the emergent capabilities of state-of-the-art models, it cannot possibly escape you that we are talking about a massive, era-defining shift in human affairs and economics. People are getting "bored" with AI??? Are we all twelve?! The unprecedented potential and power of this technology is not based on hype, but on the fundamental realization that models have now reached a scale where - via the phenomenon of emergence - we are, for the first time in history, arriving at the unshakeable conclusion that the engine which created our own civilization, i.e. intelligence, will be a highly scalable commodity in the very near future. That is, this is no longer a matter of science or philosophy. It is a matter of engineering! That's the state of generative AI in 2024, and I am struggling to understand how you can write an article on the subject with doubts and pros and cons of the nature contained herein.

Alberto Romero:

Can you repeat the comment without using ChatGPT?

Mon Ski:

Alberto, my appreciation of the state of AI doesn't mean I have no intelligence of my own. Thanks for checking, though.

Alberto Romero:

Saying "this is no longer a matter of science" reveals that you, indeed, have no intelligence. No wonder you can't wait for it to be a commodity!

Mon Ski:

Here's what I can tell you - the fact that Ezra Klein can't figure out how to use AI in his daily work is solely Ezra Klein's problem. Even with my lack of intelligence, I have somehow landed a job in cancer research, specifically within the field of tumor immunology. In this field, as in many others, AI provides invaluable insight - by pulling together vast amounts of research and integrating multiple biomarkers across studies into coherent mechanistic models of cancer biology - which is already accelerating the discovery of new therapies. In the very near future, synergy between AI and essential academic personnel will be mandatory for any lab hoping to remain competitive for grant funding. When I say "this is no longer a matter of science," what I mean is that even in the current state of affairs, we have achieved an architecture displaying intelligence capabilities very similar in effect to those of specialists across many domains. Further iterations of massive LLMs, whose inner workings are quite similar to the learning principles in the brain, are likely to achieve symbolic and abstract reasoning capable of productively delving into advanced math subjects like topology, number theory, etc. (search matrix multiplication and DeepMind for some impressive results already here).

Alberto Romero:

"LLMs, whose inner workings are quite similar to the learning principles in the brain." Really, you don't know what you're talking about lol. Not even people at the vanguard of mechanistic interpretability research know how neural networks work. How can you know they work like the brain (which, by the way, we also don't fully know how it works)? Let me tell you what I think: you just read Leopold's Situational Awareness essay and were mindblown haha.

Mon Ski:

Actually, I hadn't, but it very much seems on point haha

As far as knowing "how neural networks work" is concerned - I think we are talking about different things. We very much know how neural networks work, and neuromorphic architectures specifically (check Intel's Loihi) closely mimic neuronal message passing, although they still lack well-defined learning algorithms analogous to the way backprop works for traditional NNs. What is surprising (though it should not have been, in my opinion) is the emergence of new capabilities in NNs on certain tasks as their complexity passes fuzzy thresholds. In addition (and, as it turns out, in agreement with Mr. Aschenbrenner's essay), people working at OpenAI and DeepMind have a lot to show for their work. What do the AGI sceptics have to show on the other side? Rule-based bs from the 80s? The reason I'm stuck on your article is that I think it's important for people to realize the magnitude of what's about to hit us, instead of opening up debates on whether AI is getting boring or not.

Alberto Romero:

Okay, I buy your last point. But how do you make people pay attention to that if they really *are bored*? I have a very good bird's-eye view of this and, believe me, half the world has already moved on to something else. If Leopold is right (I don't think so, but it's worth thinking about), how do you make those people pay attention again except by acknowledging their experience and contrasting it with a different point of view, like yours?
