BLOOM Is the Most Important AI Model of the Decade
Not DALL·E 2, not PaLM, not AlphaZero, not even GPT-3.
You may be wondering if such a bold headline is true. The answer is yes. Let me explain why.
GPT-3 came out in 2020 and set a course the whole AI industry has been following ever since, in both intention and attention. Tech companies have repeatedly built better, larger models, one after another. But although they've poured millions into the task, none of them has fundamentally changed the leading paradigm or the rules of the game GPT-3 laid out two years ago.
Gopher, Chinchilla, and PaLM (arguably the current podium of large language models) are significantly better than GPT-3, but they are, in essence, more of the same. Chinchilla demonstrated the success of slightly different scaling laws, but it's still a large transformer-based model that uses a lot of data and compute, like the others.
DALL·E 2, Imagen, and Parti are distinct in what they do (text-to-image models that add techniques beyond the transformer), but they're pretty much based on the same trends. Even Flamingo and Gato, which depart slightly from GPT-3 toward a more generalist, multimodal approach to AI, are just a remix of the same ideas applied to novel tasks.
But, most importantly, all these AI models stem from the immense resources of private tech companies. That’s the common factor. It’s not just their technical specifications that make them belong to the same package. It’s because a handful of wealthy for-profit research labs exert absolute control over them.
That’s about to change.
BLOOM and BigScience mark an inflection point for the AI community
BLOOM (BigScience Large Open-science Open-access Multilingual Language Model) is unique not because it's architecturally different from GPT-3. It's actually the most similar of all the models above: a transformer-based model with 176B parameters (GPT-3 has 175B). It's unique because it marks the start of a socio-political paradigm shift in AI that will define the coming years in the field and break the stranglehold big tech has on the research and development of large language models (LLMs).
It's fair to say that Meta, Google, and OpenAI have recently open-sourced some of their large transformer-based models (OPT, Switch Transformers, and VPT, respectively). Is it because they've developed a sudden appreciation for open source? I'm sure most engineers and researchers in those companies have always had it. They know the value of open source because they use libraries and tools built on its foundations daily. But the companies, as amoral money-making entities, don't bow so easily to the preferences of the wider AI community.
These companies wouldn't have open-sourced their models had it not been for a few institutions and research labs that began applying intense pressure in that direction.
BigScience, Hugging Face, EleutherAI, and others don't like what big tech has done to the field. Monopolizing a technology that could, and hopefully will, benefit a lot of people down the line isn't morally right. But they couldn't simply ask Google or OpenAI to share their research and expect a positive response. That's why they decided to build and fund a model of their own, and to open it freely to researchers who want to explore its wonders. State-of-the-art AI is no longer reserved for big corporations with deep pockets.
BLOOM is the culmination of these efforts. After more than a year of collective work that started in January 2021, and more than three months of training on the Jean Zay public French supercomputer, BLOOM is finally ready. It's the result of the BigScience Research Workshop, which comprises the work of more than 1,000 researchers from all around the world and counts on the collaboration and support of more than 250 institutions, including Hugging Face, IDRIS, GENCI, and the Montreal AI Ethics Institute, among others.
What they have in common is that they believe technology — and particularly AI — should be open, diverse, inclusive, responsible, and accessible for the benefit of humanity.
Their impressive collective effort and their singular stance within the AI industry are matched only by the care with which they've considered the social, cultural, political, and environmental context that underlies the design of AI models (and of BLOOM specifically) and the processes of data selection, curation, and governance.
The members of BigScience have released an ethical charter that establishes the values they uphold in the development and deployment of these technologies. They've divided them into two categories: intrinsic, "valuable … as an end," and extrinsic, "valuable as a means." I'll summarize those values here by citing the charter, as I consider each of them critical to understanding the unprecedented significance of BigScience and BLOOM. (I still recommend reading the whole charter. It's short.)
Intrinsic values
Inclusivity: “…Equal access to the BigScience artifacts … not just non-discrimination, but also a sense of belonging…”
Diversity: “…Over 900 researchers and communities … from 50 countries covering over 20 languages…”
Reproducibility: “...BigScience aims at ensuring the reproduction of the research experiments and scientific conclusions…”
Openness: “…AI-related researchers from all over the world can contribute and join the initiative… [and] the results…will be shared on an open basis…”
Responsibility: “Each contributor has both an individual and a collective [social and environmental] responsibility for their work within the BigScience project…”
Extrinsic values
Accessibility: “As a means to achieve openness. BigScience puts in its best efforts to make our research and technological outputs easily interpretable and explained to the wider public…”
Transparency: “As a means to achieve reproducibility. BigScience work is actively promoted at various conferences, webinars, academic research, and scientific popularization so others can see our work…”
Interdisciplinarity: “As a means to achieve inclusivity. We are constantly building bridges among computer science, linguistics, law, sociology, philosophy, and other relevant disciplines in order to adopt a holistic approach in developing BigScience artifacts.”
Multilingualism: “As a means to achieve diversity. By having a system that is multilingual from its conception, with the immediate goal of covering the 20 most spoken languages in the world…”
BigScience and BLOOM are, without a doubt, the most notable attempt at bringing down the barriers big tech has erected, willingly or not, in the AI field over the last decade. And they're the most sincere and honest undertaking to build AI (LLMs in particular) that benefits everyone.
If you want to know more about the BigScience approach, read this great series of three articles on the social context in LLM research. Access to BLOOM will be available via Hugging Face.
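As an illustration (this sketch is mine, not from the article), querying a BLOOM checkpoint through the Hugging Face transformers library looks roughly like the snippet below. I use the small bigscience/bloom-560m variant as an assumption for something runnable on ordinary hardware; the full 176B-parameter bigscience/bloom checkpoint requires hundreds of gigabytes of memory and a multi-GPU setup.

```python
# Minimal sketch: generating text with a small BLOOM checkpoint via
# Hugging Face transformers. The 560m variant stands in for the full
# 176B model, which is far too large to run on a single machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small stand-in for "bigscience/bloom"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "BigScience is"
inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding of 20 extra tokens; sampling flags can be tuned freely.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

The point of releasing the weights openly is exactly this: anyone with the hardware can download, inspect, and fine-tune the model rather than renting it through a gated API.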
What makes BLOOM different
As I noted at the beginning, BLOOM isn't the first open-source language model of its size. Meta, Google, and others have already open-sourced a few models. But, as is to be expected, those aren't the best these companies can offer. Earning money is their main goal, so sharing their state-of-the-art research isn't on the table. That's precisely why signaling their intention to participate in open science with these strategic PR moves isn't enough.
BigScience and BLOOM are the embodiment of a set of ethical values that companies can't represent by definition. The visible result is, in either case, an open-source LLM. However, the hidden, and extremely necessary, foundations that guide BigScience underscore the irreconcilable differences between these collective initiatives and powerful big tech.
It's not the same to adopt open-source practices begrudgingly, forced by circumstances, as to do it because you firmly believe it's the right approach. BigScience members' conviction that we should democratize AI and aim to benefit the largest number of people, whether by opening access and results or by tackling ethical issues, is what makes BLOOM unique. And it's also what makes it (arguably, I concede) the most important AI model of the decade.
BLOOM is the spearhead of a field on the verge of radical change for the better. It's the flag of a movement that goes beyond current research trends. It marks the start of a new era for AI, one that will not only move the field forward faster but also force those who would prefer to proceed otherwise to embrace the new rules that now govern it.
This isn't the first time open source has won out over secrecy and control. We have examples in computers, operating systems, browsers, and search engines. Recent history is filled with clashes between those who wanted to keep the benefits for themselves and those who fought, and won, on behalf of everyone else. Open source and open science are the ultimate stage of technological progress. And we're about to enter this new era for AI.
This article was also published on Towards Data Science.
You write...
"What they have in common is that they believe technology — and particularly AI — should be open, diverse, inclusive, responsible, and accessible for the benefit of humanity."
Jennifer Doudna takes the same attitude towards her work on CRISPR (genetic engineering). She calls it "democratizing" CRISPR.
While this approach sounds very noble and politically correct, and is sincerely well intended, it can also be viewed as recklessly spreading very powerful technologies to anybody and everybody who wishes to leverage these tools to advance their own goals, no matter what those goals might be.
The underlying problem with this approach is that as the scale of powers available to us grows, the room for error shrinks. Thus, even though most people will probably use these tools for good, those using these tools for harm are increasingly being put in a situation where they can bring down the entire system.
If that sounds like hysterical wild speculation, please consider that the reality today is that a single human being can, in just a few minutes, start a process that ends this civilization. Yep, nuclear weapons. We absolutely refuse to learn the lessons they can teach us. Not a good sign.
Do you still feel the title of this article is correct, Alberto?