28 Comments

Oof, this was rough. I'm off to read more about this practice and what, if anything, is being done to improve the situation. Thank you.


Hi Daniel, here's some recommended listening: "The Humans in the Machine" - https://irlpodcast.org/season7/episode2/

Description: (Published: October 24, 2023)

"They’re the essential workers of AI — yet mostly invisible and exploited. Does it have to be this way? Bridget Todd talks to data workers and entrepreneurs calling for change."


Thank you for sharing!

Nov 17, 2023 · Liked by Alberto Romero

>> I think it's pretty clear

Not to me. The tone of your message -- and it comes across loud and clear -- is that there is *something* about the behavior of tech companies that distresses you. But what could that be?

What could these companies do differently that would appease your feelings? Fire these kids? Do you think that they and their families would then be better off? I somehow doubt you would endorse that, but I might be wrong. Just spell it out for me. What do you want? What would you want if you lived in one these countries?

author

That's not the point of my article. I don't have an answer to that question. Perhaps the developed world could stop taking more than they ethically should.

But my point is something else. It is to let you know that the AI companies promising the stars are stepping on the labor of the most vulnerable people in the world. Is this really necessary? Isn't there a more ethical way to do things? If you don't think so, then this article is not for you.

Nov 18, 2023 · edited Nov 18, 2023

Hi Fred, if you're genuinely curious about the millions of humans who work on cleaning up the data to train LLMs and are largely invisible to us & exploited, give this listen: https://irlpodcast.org/season7/episode2/

What to do about it? Well, that's a much larger conversation, but here's a good place to start: https://consilienceproject.org/technology-is-not-values-neutral-ending-the-reign-of-nihilistic-design-2/


Brilliant, thank you!


There are content moderators unions being formed to help fight for these workers rights. See the Africa Solutions Media Hub, https://africasolutionsmediahub.org/2023/05/01/workers-at-facebook-tiktok-and-chatgpt-to-register-first-african-content-moderators-union/

author

Thanks Nicole!


Wow! Powerful! Thank you so much for this!


Nothing is ever as it seems. Why do we convince ourselves that it is?


Glen --

Thanks!


‘That’s the greatest marketing operation that AI companies have successfully executed: Making people believe that AI is the engine that will take us all, together, to the stars’

Have they? Do people really believe that? It seems quite naive. We have countless technologies that could be employed for the benefit of all, but it doesn't happen. Why will AI be any different?


We inherited our morality from apes, and AI will inherit its morality from us. The ratio between good and evil will probably remain about the same throughout these evolutions, but the scale of power available to both good and evil will grow. As the scale of power available to evil grows, the room for error shrinks. When the room for error shrinks far enough, we arrive at "end times" kinds of events.

Nuclear weapons, a 75 year old technology, are perhaps the simplest example of this equation. One bad day, game over.


Alberto, thanks for the very thoughtful rant. While I have dimly known of this issue for a long time (there must have been issues with screening disturbing images even before OpenAI), it seems a bit abstract even now. Is it that first worlders are generating heaps of disturbing content because they are immense psychopathic shits? I don't doubt you, yet I've only ever heard of this secondhand. Are the bulk of these screenings coming from ChatGPT or GPT-4 prompting? Perhaps this "digital slavery" needs to be properly illuminated for all of us to thoroughly understand it, because right now it's very much an abstraction to me. And I'm not insensitive to this issue. So, if we were to stop using these Third World folks cold turkey, would we see an immediate change somehow on our end? Are they the buffers shielding us from disturbing queries or byproducts of those prompts? Personally, I'd like to come across some source materials to illustrate real examples of this trauma. I always thought this kind of problem applied more to YouTube, which tries to keep out horrific submissions like child and animal torture, and things too horrible to contemplate. So, is this kind of material also grist for the mill in AI prompts?

Well, I don’t seem to have any constructive comments here, and it’s all questions. I’m a bit confused about the actual scope and depth of this. Thanks for your contribution.


Tricky one I think.

Obviously underage labor shouldn't be commonplace, but were they to hire domestically, GPT-4 would cost $70 a month and we'd be moaning that they're too profit-hungry.

shadeapink

author

I think the world would be better if we paid $70/month for GPT-4 and OpenAI didn't use, indirectly, workers at the other end of the spectrum in terms of abundance, choice, freedom, quality of life, etc. But I'm not saying OpenAI has the ability to redefine the rules of the game. My point is to highlight the cynical contrast between Altman's vision of the future, looking to the stars, while he is, metaphorically, stepping on the most miserable people. And I use Altman because he's the most visible leader, not because he's alone or especially responsible for this. It's the whole tech industry, including the AI space.


Maybe, but the question is...would we? That is, would we pay $70 a month? I certainly wouldn't. But the very wealthy would, in which case we'd then get articles moaning that these companies only care about developing AI + humanity for the elites.

Apple/Samsung/Tesla etc have the exact same practice with cobalt mining.

The list goes on. Every super successful business or individual has, at some point in their chain, committed or is committing 'immoral' practices.

Whether they are immoral overall is, for me, tricky to say after factoring in all the 'good' they produce for the world.

Btw, I enjoyed the article FWIW and always enjoy these types of debates. Very open to having my mind changed or being corrected :)

author

Oh, I agree it's super tricky how to solve it. I think the new part that Altman and the AI industry bring to the table is the promise of an unfathomably better world as a result of superintelligence. It's hard to believe that's the case given how they operate. The ends justify the means and all that, but do they really? This article could be understood as a critique of extractive-colonial capitalism, but I think others have written much better and much deeper on that topic. The only part I think I can add is how reality contrasts with appearances. And my goal is not to say "this needs to be solved right now!" (because I don't really know how to do that) but to say "just so you know, this is happening behind the scenes."


"...but to say: 'just so you know, this is happening behind the scenes.'"

I see, thank you for clarifying! Yeah, can't really argue much against such a position.

I suppose I'd throw the following question at you (and all readers perhaps!):

--> Big AI company engaging in such hiring practices:

1) Prefer a world with or without them? If without, are you consistent elsewhere?

2) Net benefit or harm, both currently and/or the future?

For me, and potentially (likely) a hot take...

1) With, in spite of potentially said practice

2) Benefit, especially now but progressively less so in the future.

author

Agree. Answering the first question *with coherence elsewhere* is hard. But! Making a critique of these practices is in itself valuable even if you have to keep playing the game like everyone else. Perhaps that's what these companies do. But I believe they could choose to earn less money and do things more ethically - that's a choice we have that they - those working in data labeling, for instance - don't.


"...But I believe they could choose to earn less money and do things more ethically "

I mean, maybe. But then we're back to the (very real) fact that they'd have higher costs --> lower revenue... a domino effect... which would then have the biggest impact domestically.

Which is my overarching point I guess, sure they (every FTSE100 company probably) *could*, but would they actually be netting society more or less?

In financial markets, there's a criterion for sizing risky investments called the "Kelly Criterion".

What it (very roughly) posits is that your risk should increase with 'potential'. I.e., you think company X will, for whatever reason, 3x in the next 5 years, while company Y will 1.5x over the same period. Kelly would, again very simply, suggest risking more on company X than on Y. I.e., a $5k investment vs. $1k.
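For the simplest case the criterion has a closed form: for a bet that pays b-to-1 with win probability p, the Kelly-optimal stake is (bp - (1 - p)) / b of your bankroll. A minimal sketch, with illustrative numbers only (the function name and scenario are my own, not from the comment):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a bet paying b-to-1
    with win probability p. A negative result means: don't bet."""
    return (b * p - (1 - p)) / b

# A 60% chance to win an even-money (1-to-1) bet -> stake roughly 20%
print(kelly_fraction(0.6, 1.0))

# A 40% chance on the same bet has negative edge -> stake nothing
print(kelly_fraction(0.4, 1.0))
```

The commenter's version (more capital toward the higher-multiple company) is the same intuition: bigger edge, bigger position.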

Could/should we apply a similar line of thinking here? That is, the larger/more important a company is and the more they demonstrate their immense value/worth to society, should we 'ease the brakes' so to speak?

Really interesting to think about, in my opinion. Thank you for today's brain + philosophy workout!


I, too, am quite confused by what you are saying. We’re now upset that there are opportunities for young people around the world to work hard and gain experience and earn money, etc. because…. The pay is too low? The tasks are burdensome? Monotonous? Gruesome? What am I missing?

author

You call that "opportunities" when the person interviewed for the piece has called it "digital slavery"? Just lol. Read the original article and the many others that have been published about this and decide what stance you're taking here.


I do not understand what you are saying here. I would understand if you wanted the companies to pay more, but that is not what you seem to be saying. You seem to be upset that the companies are giving these kids jobs at all, even though, as you say, they are driven by "pure basic survival necessity".

They probably are supporting their entire families. And you want them all to be fired? Can't be. But what are you saying?

author

I think it's pretty clear. I'm highlighting the two faces of the modern AI story. Altman is promising superintelligence and a post-scarcity world while he knows he's building his technology on the labor of people at the other end of the spectrum of opportunity, freedom, abundance, etc. And everything he promised, he claims he's building for all. If that doesn't make you react, then I don't know what to tell you, really.

Don't get me wrong, he's just playing the same game we all are. This is not to put the blame solely on AI companies but, again, to remark on the cynicism. I can't write about all the problems and virtues of AI and ignore this terrible reality. It'd be ethically wrong but, above that, dishonest. AI is about narratives now, and this narrative is one they're hiding.
