In Search of an Algorithm for Well-Being
An impossible quest in a world optimized for engagement?
A unique feature of our times is that algorithms direct our choices, fill our days, and rule our lives. These opaque, ubiquitous programs that no one completely understands largely define digital realities that now occupy a non-negligible share of our existence.
When you’re listening to music on Spotify or iTunes, watching a video on YouTube, looking for the next birthday gift on Amazon, binge-watching your favorite show on Netflix, or even searching for news on Google, it’s an algorithm that decides the pool of options available to you — and, indirectly, what you’ll eventually consume.
Algorithms create funnels in the form of feeds and recommendation systems that bias our perception of reality. This feels fine when Spotify hits the jackpot with a catchy tune, but not so much when Facebook filters events happening on the other side of the world to match what you already want to see.
Such a degree of influence over people entails great consequences which, although not intrinsic to the algorithms themselves, are deliberately planned by those who rule the internet — the big tech corporations.
Asking the right questions: Do algorithms need to be this way?
Algorithms give us what we want, and so we want more of them. Their design makes them addictive, perfectly optimized to keep us engaging with more content. More time spent in an app means more profit, and that’s what companies are for. But is this a game in which anything goes?
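To make that concrete, here is a minimal, hypothetical sketch of how an engagement-optimized feed might rank content. None of this is any company’s actual code: the `Post` fields and the scoring function are illustrative assumptions, but the pattern (sort candidates by a model’s prediction of how much attention they will capture) is the essence of engagement optimization.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click_prob: float  # model's estimate that you'll tap the post
    predicted_watch_secs: float  # model's estimate of time spent if you do

def engagement_score(post: Post) -> float:
    # Expected attention captured: the probability of a click times
    # the time the model expects you to spend after clicking.
    return post.predicted_click_prob * post.predicted_watch_secs

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is simply the candidates sorted by expected engagement:
    # whatever is predicted to hold you longest floats to the top.
    return sorted(candidates, key=engagement_score, reverse=True)
```

Nothing in that loop knows or cares whether the content leaves you better or worse off; the objective is attention, full stop.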
Tech ethics expert Gemma Galdón says that “we have allowed the tech industry a very anomalous space of non-accountability in our society. And it must be subjected to the same controls as any innovation space that surrounds us.”
When I discuss or read about the effects of AI algorithms and social media, the debate often lingers around the same idea: How can we defend ourselves from this vicious cycle?
I can control my time on social media, or even delete my profiles. I can go off the internet for a few days every month. And I can carefully vet my news sources so I don’t fall victim to fake news and misinformation. That’s perfectly fine, but there’s something about that narrative that bothers me:
Why does that burden rest upon us? Why aren’t we questioning the frame that defines algorithms as inherently designed to take up our time and attention, exploiting well-known psychological vulnerabilities?
Answers to the question “what can we do to healthily navigate the digital world?” are important and have practical utility. The problem is that by accepting that framing, we implicitly accept that we’ll have to work hard to avoid issues that shouldn’t be there in the first place. Asking the right question is significantly more powerful than giving great answers to the wrong question.
“Why are algorithms optimized for engagement instead of well-being, and what can we do to reverse this reality?” That’s the right question.
With very little effort, algorithms could be modified to protect our sensitive psychology instead of exploiting it. They could be trained to optimize well-being instead of engagement.
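As a toy illustration of that claim, consider the ranker sketched earlier: all of its machinery stays, and only the objective changes. The `predicted_wellbeing` field below is an assumption for the sake of argument; a real platform would have to learn such a signal from something like post-exposure satisfaction surveys.

```python
from dataclasses import dataclass

@dataclass
class RatedPost:
    post_id: str
    predicted_watch_secs: float  # the old engagement signal, now ignored
    predicted_wellbeing: float   # hypothetical: model's estimate of how
                                 # satisfied you'd report feeling afterward

def rank_feed_for_wellbeing(candidates: list[RatedPost]) -> list[RatedPost]:
    # Same sorting machinery as the engagement ranker; only the sort key
    # differs. Content expected to leave you better off is promoted,
    # regardless of how long it would hold your attention.
    return sorted(candidates, key=lambda p: p.predicted_wellbeing,
                  reverse=True)
```

The hard part, of course, is not the sort key but measuring well-being honestly and at scale; the sketch simply assumes that signal exists.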
Sit back and imagine how different the world would be.
The alienated folks at the top of the tech hierarchy don’t contemplate this option. Ex-Facebook president Sean Parker, a key figure in the social network giant’s early days, said a few years ago that the guiding question at Facebook was: “How do we consume as much of your time and conscious attention as possible?”
Making money is their goal, and our attention is their currency. Their greed blinds them to the aftermath of their ambitions. How many billions are enough billions?
It doesn’t matter whether they’re partially ignorant of the collateral damage their algorithms generate, or aware of it but unwilling to accept the consequences. The harm is very real, and they are, one way or another, fully responsible for the repercussions.
An ongoing fight for AI democracy
There’s a silver lining, though. With the advent of AI ethics, algorithm auditing, and collective open-source projects, tech companies seem to be feeling the pressure to adapt their systems to better fit the complexities of our society.
Google, Facebook, and Microsoft have been hiring people from all branches of the social sciences to imbue humanness — for lack of a better word — into their technology. AI ethics initiatives promise to reduce bias and toxicity, reverse the harmful effects algorithms have on marginalized minorities, and transform algorithms into sources of wellness.
However, this extremely necessary endeavor has been running into obstacles at home. Google, which has always proudly claimed to be committed to building a more equal society, fired Timnit Gebru (in 2020) and Margaret Mitchell (in 2021), co-leads of its AI ethics team.
After the controversy dissipated, it became apparent they were terminated for doing what they were hired to do — analyzing potential risks of the company’s tech. It seems that being an ethicist is alright, as long as you don’t interfere too much with the key plans of the company — those that are worth more than people.
If putting people before profit interferes with those plans, isn’t AI ethics a doomed enterprise? If ethicists aren’t free to do their work, which is precisely to hold the companies they work for accountable, are ethical AI approaches a trustworthy mechanism to ensure algorithms are designed and deployed safely and equitably?
Events like Google’s firing of its AI ethics leadership made me wonder about the genuineness of big tech’s commitment to this undertaking. The reputation and public image of these companies depend on it (developers and researchers are increasingly hesitant to work for Google now), but if they’re not willing to go as far as necessary to make the world they’re building worth living in, then internal ethics teams aren’t the solution.
Timnit Gebru is now the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), and Margaret Mitchell works as a researcher and chief ethics scientist at Hugging Face. They’re both doing crucial work along the same lines as at Google, but now without the constraints of the company’s economic goals.
Sending human-loving people into the lion’s den
Elena Maris, a professor who studies the politics of platforms and AI, wrote a thought-provoking piece for Wired earlier this year. She eloquently argued how difficult and problematic it is for tech companies to charge ethics teams with the unbearable task of solving all their problems: “We must be honest about what can realistically be accomplished by these piecemeal attempts at stitching sociocultural expertise into technical teams and organizations that ultimately hold all of the power.”
Google and the others are apparently making great efforts to stop, or at least reduce, the harmful consequences of their money-making products. But only as long as those efforts work in their favor. If not, they can simply pull the lever and block those mechanisms from functioning adequately — for instance, by firing their AI ethics leaders.
It’s saddening to see that ethics teams at these companies never had a real opportunity to implement the changes and measures necessary to make algorithms beneficial for society — even if they knew how, they’d never have been allowed to do it.
Maris ends the article using Mark Zuckerberg as an example: “Do we seriously think [hiring an AI ethics team] would orient his decisions toward equality, true diversity, democracy, or even fairness — all at the cost of profits? Or would we be sending some real human-loving people into the lion’s den?”
How can we trust companies motivated solely by profit to engage in the behaviors needed to turn an algorithm from a money-making machine into a source of well-being? Internal AI ethics teams aren’t working as well as they should (not their fault), so maybe the solution has to come from the outside.
We’ve recently seen critical efforts by people within the AI field — but outside the big tech companies — who are working collectively and individually to turn the tide: BigScience, Hugging Face, EleutherAI, and the Montreal AI Ethics Institute, among others.
Maybe it’s time for people with real political power and influence, like international regulatory institutions, to also step forward and recognize the importance of supervising the companies that hold the world in their hands.
Toward human-centered AI
On this point, UNESCO recently adopted its Recommendation on the Ethics of Artificial Intelligence, a plan to make AI algorithms human-centered.
“We need international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole. We need a human-centred AI. AI must be for the greater interest of the people, not the other way around.” One critical point they highlight is that AI has for too long been in a “no law zone”:
“AI is already in our lives, directing our choices, often in ways which can be harmful. There are some legislative vacuums around the industry which need to be filled fast. The first step is to agree on exactly which values need to be enshrined, and which rules need to be enforced. Many frameworks and guidelines exist, but they are implemented unevenly, and none are truly global. AI is global, which is why we need a global instrument to regulate it.”
The agreement was adopted by UNESCO member states on November 24th, 2021. It was a crucial first step toward regulating the companies that have been playing around in legal gray areas with super-powerful technologies.
And just a few months ago, China pioneered an unprecedented regulation to empower people over algorithms. On March 1st, the Chinese government put into effect a law that allows users to turn off algorithmic recommendations entirely, among other measures that give people decision power over tech companies.
The legislation, entitled “Regulations on the Administration of Algorithm Recommendations for Internet Information Services,” was jointly drafted by the Cyberspace Administration of China (CAC) and four other government departments.
The law, which the CAC published in January, aims to “regulate the algorithm recommendation activities … protect the legitimate rights and interests of citizens … and promote the healthy development of Internet information services.”
The fact that AI ethics has attracted the attention of global regulatory bodies reveals just how significant it is for individual and collective welfare. We’re at the beginning of the quest to transform flawed algorithms, optimized for engagement and attention, into well-being algorithms. Let’s build a better world.
This article is an updated version of a previous one published in OneZero.