21 Comments
Feb 13, 2023 · Liked by Alberto Romero

I wonder how Msft incorporated the LLM into search.

Did they create another model from the ground up that can do both traditional search and the things ChatGPT can?

Or do they perform a traditional search and then use the content of the search results to answer, like ChatGPT does?

Or some other way?

Ultimately, the function of search, I would believe, is to find all relevant, reliable information. It isn't to process or analyse information; ChatGPT seems to lean more toward that. So perhaps it's better to use these two separately, especially when one doesn't have enough subject matter knowledge.
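
To make the second option concrete, here is a rough sketch of the "search first, then answer from the results" pattern (toy_search and toy_llm are stand-ins I invented; nobody outside Msft knows the real wiring):

def toy_search(query: str, k: int = 2) -> list[str]:
    # Stand-in for a traditional search index returning text snippets.
    corpus = [
        "Bing is a web search engine operated by Microsoft.",
        "ChatGPT is a conversational model released by OpenAI in late 2022.",
    ]
    return corpus[:k]

def toy_llm(prompt: str) -> str:
    # Stand-in for the language model call; a real system would hit an API.
    return "(an answer synthesized from the retrieved snippets)"

def answer_with_search(query: str) -> str:
    snippets = toy_search(query)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    prompt = (
        "Answer using only the numbered sources below, and cite them.\n"
        f"{context}\nQuestion: {query}\nAnswer:"
    )
    return toy_llm(prompt)

print(answer_with_search("How does the new Bing work?"))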

author

Spot on. We don't know--and may never know--the answer to your first questions. And I agree with your conclusion; given the current state of the art of language models, it's much better to use them separately for now.

Feb 11, 2023 · Liked by Alberto Romero

When Sam came on stage, he said that the new model was faster, more accurate, and more capable than ChatGPT, but based on GPT-3.5 and the lessons they learned from the ChatGPT research preview.

I look at this more like a "distilled ChatGPT" connected to the internet than the "sparse, ultra-large LLM" GPT-4 is supposed to be.

That would make sense: if they really want to scale this tech, they need to reduce the compute required.

If the next model is so compute-intensive that it costs dollars per query rather than cents, I believe they will keep their flagship model to themselves and make it available via ChatGPT-Pro or ChatGPT-Plus to make up for the cost.

Having access to superhuman AI at all would be such a game changer that I still have a hard time truly grasping what that would mean. I guess we will find out soon enough.

author

I also don't think the model that powers Prometheus is GPT-4; it wouldn't make business sense for OpenAI. People didn't expect ChatGPT, but they're expecting GPT-4. Making it a paid model from the get-go would provide OpenAI with the cash flow they need to keep going.

Feb 11, 2023 · edited Feb 11, 2023

If we think about it, their model 4 most likely has the same flaws as their 3.5, in the sense that it makes stuff up. Partnering with Bing will help them get more feedback, faster, which they can then use to train the next model.

As we have seen with Chinchilla, data is still the bottleneck.
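
Back-of-the-envelope, using Chinchilla's rough heuristic of ~20 training tokens per compute-optimal parameter (the exact ratio is approximate, and the 500B figure below is hypothetical):

# Chinchilla-style napkin math: token needs scale ~20x with parameter count.
params = 500e9                   # a hypothetical 500B-parameter model
tokens = 20 * params             # ~20 tokens per parameter (Chinchilla heuristic)
print(f"{tokens:.1e} training tokens")  # ~1.0e+13, i.e. roughly 10 trillion

For reference, Chinchilla itself (70B parameters) was trained on about 1.4T tokens; an order of magnitude more curated text than that is hard to come by, hence the bottleneck.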

Getting more feedback faster is the only way to speed up T-minus-Singularity.

No matter what Gary Marcus, Yann LeCun, and the like might say, the data flywheel has begun spinning.

The guy with the biggest flywheel will get a lead, which is why Sam and Satya are now moving so fast. Because they NEED to beat Google at the flywheel.


There’s a new decentralized chatbot that will be coming out VERY soon. It’s been tested privately, and because it’s decentralized, it’s not biased like other chatbots. Are you willing to try it out once it’s available?

author

Do you mean the one LAION is making, Open Assistant?


No, this is completely different!

author

Oh, interesting, and what do you mean by "decentralized"?


Revisiting our previous conversation, I recommend checking out Corcel at https://app.corcel.io/chat. It's a unique, decentralized AI assistant powered by a network of experts (MoE). Inquire about its functions and the innovative infrastructure behind it! I think you would find it very interesting!


Decentralized, meaning that it doesn't have a single central authority controlling it. The network is composed of over 4,000 participants (it will be open to more in the coming weeks) who collaborate and work together. It's also incentivized: the more you contribute, the more you're rewarded, and the more valuable the network becomes. It has over 500B parameters and growing. The network will also have multimodality; right now it's natural-language text only, but speech and more will be added to the network!

The external use case is similar to companies like Cohere and OpenAI, which sell API access to high-quality models. It's a plug-and-play network. Companies could use it because it is more efficient and cost-effective than running their own large-scale data centers and resources for AI research and experimentation. Using this network, corporations can reach their desired goals and products more efficiently, without having to invest in expensive infrastructure, funding, and talent.
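
Purely illustrative, here's what "plug-and-play" API access usually looks like in practice; the endpoint, payload, and auth scheme below are invented, since the real API isn't public yet:

import requests

resp = requests.post(
    "https://api.example-network.io/v1/chat",       # hypothetical endpoint
    headers={"Authorization": "Bearer <API_KEY>"},  # hypothetical auth
    json={"prompt": "Summarize this paragraph.", "max_tokens": 256},
    timeout=30,
)
print(resp.json())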

It’s truly a deep dive. I could go on, but when the chatbot is here, I’ll be sure to send you info to test out yourself!

author

Great, thanks Adrian!


Hey! Here's a waitlist for the beta version of the chatbot (Chattensor) that should be released this week! I'm 100% sure you'd be one of the first people who even knows what this is. But I think it's worth checking out for sure! https://docs.google.com/forms/d/e/1FAIpQLScgnn8t5sEcq4rkhH0og3r2-CrVZLJZvBibuqMSav0dvgmPsw/viewform


Latency of inference seems like a big problem. Would love to hear more.


It handles inference latency through sparse activation of experts and a consensus mechanism that coordinates inference across thousands of models. The network uses a common encoding of inputs and outputs to ensure the right models are activated at the right time, and the incentive structure aligns stakeholders' interests to maximize the value extracted from the network API. Together, these reduce inference latency and improve the network's overall performance.
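
For intuition, here is a minimal sketch of the sparse-activation part (generic top-k expert gating; the consensus and incentive layers described above are not modeled, and none of this is Chattensor's actual code):

import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 8, 16
router_w = rng.normal(size=(d, n_experts))                     # toy router
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # toy experts

def moe_forward(x: np.ndarray, k: int = 2) -> np.ndarray:
    scores = x @ router_w                  # router scores, one per expert
    top = np.argsort(scores)[-k:]          # activate only the top-k experts
    w = np.exp(scores[top])
    w /= w.sum()                           # softmax over the chosen k
    # Only k of n_experts run at all; the rest stay idle, which is where
    # the latency and compute savings of sparse activation come from.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

print(moe_forward(rng.normal(size=d)).shape)  # (16,)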
