Tech companies and governments have made the public believe that sharing their information is okay as long as they have nothing to hide (e.g., they are not doing anything illegal). Nothing could be further from the truth. This argument conflates having "nothing to hide" with being unaffected by surveillance. Everyone has personal information they wish to keep private, whether medical records, financial details, or personal communications. Privacy is not solely about hiding illegal activities.
Privacy is a form of power—the more others know about you, the more they can try to predict, influence, and interfere with your decisions and behavior. This undermines individual autonomy and democracy itself.
Well said, Diego. We forget about this too much; we've normalized it too much. But if they've managed to addict us so heavily and watch us so closely, it's because we allowed it. That's why I used the frog example; it's really apt here.
My parents grew up in the Eastern Bloc under communism, and to this day they don't share sensitive info over the phone.
"You don't know who's listening."
I used to make fun of their paranoia; now I'm more paranoid than they are.
Yep, in this sense it seems authoritarians and capitalists share more than they'd want us to think.
I believe we're living through the end of the Age of Nation States. Governments and corporations are merging into a kind of technological feudalism. I find it strange when people advocate an increase in one to oppose the ills of the other.
Your inspired essay serves as a wake-up call for our generation, but perhaps less so for the next ones. Having interacted with younger generations for the past 15 years, I've come to admire their sharp minds and adaptability.
I believe they are well-informed about the consequences of each interaction with their favorite apps.
Have we really seen the effectiveness of behavioral prediction generated by precise targeting of individuals? Are younger generations more avid consumers than their elders? I don't think so.
It's true that with the rise of totalitarian regimes worldwide, one might think governments could seize individual data and target operations against certain categories of people. But if that's really what we should fear, young people will know how to hack and "pollute" these databases.
Their tech-savviness and rebellious spirit are our best safeguards against dystopian scenarios. While vigilance is necessary, I trust in their ability to outsmart those who seek to control them.
The future is not written yet.
I admire your optimism haha. I hope you are right. I'm just doing my part as the warning voice.
I agree with much that you say. There is definitely a thirst for freedom hardwired into the core nature of every living thing. And Generation Zero have been developing their filters since birth. They came into this world in the midst of a category 5 memestorm. The ruling class may believe that it's game over for us peasants, but I don't think they really understand what the game _is_ yet.
On the other hand, you talk about the "rise of totalitarian regimes". Most people seem to forget (perhaps because it's so commonplace it goes unnoticed) that for a substantial percentage of their life, as an "employee", they submit more or less voluntarily to totalitarian rule. What is happening above us, as corporations and governments merge into our feudal overlords, is that this totalitarian reality is expanding to cover our entire lives.
Employees attempting to negotiate with their bosses to maintain "work/life balance" need to realise: our rulers want everything; they intend for us to become serfs. We cannot negotiate in good faith with these sociopaths. A more adversarial approach might be needed - hacking the system, as you suggest - seizing the tools available to rebuild our social fabric and defend it against their plans for us.
Alberto - our last hope - attempting to spark a flicker of agency back into his readers.
I fear we are so enmeshed by algorithms now the only way out is through.
Thanks for trying Alberto, it’s deeply appreciated.
Thanks Riley! Although you exaggerate my role here! I'm just repeating a warning others have sounded before me. Let's keep the conversation on this alive, just so we don't forget the world we live in.
Yes! The only way out is through.
Perhaps the problems caused by concentration of power can only be addressed from the inside out, from the bottom up. Even if we were somehow successful in regulating the corporations by top-down methods, wouldn't we have then created an even larger power, an even worse problem?
The algorithms that enmesh us began long ago in much slower form, with writing, hierarchy, laws and other processes combining human minds together in machine-like ways. However, like weeds springing up through cracks in concrete, life is irrepressible.
Our thirst for freedom is also being harvested, along with all our other behavioural data. Our spirit inhabits The Machine. We infest it with the primal drive of all living things to go where we can, do as we will, try every plan.
My two cents:
I am not discussing copyrighted content and companies using it for AI training without permission.
Most other publicly available data on platforms like YouTube, Facebook, etc., is an entirely different story. We care more about free platforms/services than about data privacy, and that's what we are getting. Organizations like Google, Microsoft, and Meta keep the platforms/services free to collect data and make money on advertisements. As the saying goes, if a service is free, you are the product. Unless we change our mindset and are willing to pay for these services, we cannot complain about what these organizations are doing, and it is hard to pay for something that was given to you for free for years.
Agreed. That's an issue. I think that may be changing in the future. Still, this isn't really a complaint but a warning. I understand if people prefer to use these tools for free and be the product themselves. Not judging, just informing.
Same here. I am also not judging anyone. Well, I do in some cases, too.
Well done, Alberto. The younger generation needs support for believing what they feel. Captain Kirk and Spock are laughing at those who believe anything said by a machine that was built/programmed with the limited knowledge/experience of a human who never has to use it in real-life situations.
Thank you Peter
Serious question (as I really enjoy your writing on all this): what would genuinely humanist AI development look like, in your mind? Say I had tens of billions of dollars to fund research and development into AI/ML technologies for the simple purpose of increasing aggregate joy and wonder, and reducing pain and toil. How should I spend it? I very much agree that none of the current big companies are benevolent actors, but wonder if this is simply an issue of capitalist incentives and greed, or more fundamental to the whole approach of ML aside from important, specialized niches. It's not unreasonable to say that some technologies have intrinsically negative moral worth (to use an extreme example, I don't think there's such a thing as an ethical germ warfare lab), and there are parts of AI that are similarly unambiguously bad (e.g., autonomous weapons) and others that are largely positive (ML for basic science and drug discovery), so I'd be curious to hear where you'd draw the line.
"wonder if this is simply an issue of capitalist incentives and greed, or more fundamental to the whole approach of ML"
It's the first thing. AI can be unambiguously great. But why steal data from everyone? Capitalism: otherwise you wouldn't be able to compete. Why use the data you take from us? Capitalism: how else will you get a contract with the DoD?
Capitalism makes what could be amazing into a race to the bottom that usually harms the rest of the world. The research itself is fine but not the application and sometimes not even the development.
Certainly capitalism is part of it, but to be honest I'm not so sure it's all of it. Take AI image generation, for example. As it stands right now, the positive goods (these tools are fascinating and fun to use, and can be a useful starting point for human artistic effort) are in my mind massively outweighed by the negative externalities (deepfakes, fraud, job losses, flooding the internet with mediocre crap as Erik Hoel wrote about earlier this year, plagiarism in training). And while some of those things could be handled better, such as more ethical sourcing of and compensation for training data, and more stringent controls on deepfakes, other issues like flooding the public internet and job losses for creatives have no realistic solutions at all. And that's not even touching the long-term problem of epistemic truth collapse as these programs continue to improve! I'm really not convinced AI image generation can be done ethically, though I'm not decided on the issue by any means.
I guess to use an extreme example for what I’m getting at: climate change is a terrible, terrible problem, and nuclear power is nearly carbon free. Building new nuclear plants is very expensive and difficult, and were it cheaper (without compromising safety), it would be easier to address climate change. So making nuclear power more accessible and easier to build safely is very much a good and noble goal… but only to a point. Leaving aside basic technical feasibility, there has to be some level of accessibility where the dangers of weapons proliferation start to outweigh any benefits. A hypothetical breakthrough that could refine uranium to weapons grade for $100 per pound would surely fall on the other side of that line, and if such a technology is possible, I think you’d agree that developing it at all would be extremely bad. My feeling is that there are AI technologies where there’s simply no way to make the net impact on the world positive, for reasons that have nothing at all to do with existential risk or similar catastrophes.
Hadn't thought of it that way. 🎯
You just articulated why my Catholicism is lapsed... nobody wants to feel or think they are being surveilled all the time. 🥲 - Lots to ponder here.
Haha. The revolution (that's the mood the article put me in, temporarily; how impressionable I am). Good article, Alberto. It's important to express yourself openly. Freedom of identity and expression. I respect and appreciate that very much. Many interesting and valid points as well. (I do disagree a bit in some areas regarding the appointment of Mr. Nakasone to the board, as I believe OpenAI will face extremely sophisticated threats both from within the US and abroad, and putting Mr. Nakasone there at least deters the domestic threats and shows the NSA is looking out for certain American companies against hostile American entities, but that's a whole lot of conspiracies and theories to discuss more privately some day). The surveillance aspects are extremely worrying nevertheless, as these issues do affect the lives of regular, innocent people. In most cases, people do not even know that they are being surveilled, yet a whole cascade of negative effects emerges from being surveilled, both directly and indirectly, which can harm them or their families in many cases. Have a good evening. :)
Love the essay. Since my views on Generative AI have changed, maybe we should do another Substack Exchange! What do you say? :)
If it's not mass surveillance, it's an army of people shoving their phones in your face so they can get content. Why have the state watch everything? That's expensive! You can pit people into ideological wars and have them look for opportunities to mark the other as wrong and be celebrated amongst kindred minds.
I remember when the Cambridge Analytica scandal came out however many years ago. And people were SHOCKED to learn Facebook knows their postal code.
I was spending $10k/day on Facebook ads at the time... And was shocked by their shock.
Those perfectly targeted ads that know you have a problem before you do... How do you think that works? Magic?...
I thought the shock was that it had used the data for political propaganda to influence elections 🤔 (also yes, people are surprisingly tech illiterate, which doesn't diminish the gravity of what happened or what's happening today)
Yep that's true.
They geographically targeted the users with intentionally misleading headlines.
If a particular block in a particular city had a primarily African American population, they ran ads pointing to fake news websites to enrage that demographic against the opposition.
Hence my comment on postal codes. That was just one example of the many ways they used the ads.
Unsurprising yet terrifying
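For anyone curious about the mechanics described in this exchange, here is a minimal, purely illustrative Python sketch of geo- plus propensity-based audience selection. Every field, weight, and threshold below is hypothetical; real ad platforms combine thousands of signals and trained models, but the shape of the logic is the same.

```python
import math

# Hypothetical user records: the kind of behavioral signals an ad
# platform accumulates about each account (all values invented).
USERS = [
    {"id": 1, "postal_code": "60624", "clicks_political": 14, "shares": 9},
    {"id": 2, "postal_code": "60614", "clicks_political": 2,  "shares": 1},
    {"id": 3, "postal_code": "60624", "clicks_political": 8,  "shares": 5},
]

# Hand-set weights standing in for a trained propensity model.
WEIGHTS = {"clicks_political": 0.3, "shares": 0.2}
BIAS = -2.0

def propensity(user):
    """Probability-like score that the user will engage with the ad."""
    z = BIAS + sum(w * user[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))  # logistic squash into (0, 1)

def build_audience(users, postal_codes, threshold=0.5):
    """Keep users in the targeted areas whose predicted engagement is high."""
    return [
        u["id"]
        for u in users
        if u["postal_code"] in postal_codes and propensity(u) >= threshold
    ]

print(build_audience(USERS, postal_codes={"60624"}))  # -> [1, 3]
```

Filter by location, score by predicted engagement, serve the message: that is the whole trick behind ads that seem to know you "before you do."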
I once asked an admin in our small consulting company (~100 FTEs) if management had the ability to read our emails. He said technically yes, if they wanted to, but frankly, you're not that important. In other words, don't be so paranoid.
If the question is whether some organization - e.g., deep state government, big tech, foreign actors, whatever - is aiming to secretly turn all of humanity into robotic slaves, they would just spike the water supply. Beyond defending myself from scam artists and blatant viral attacks on my devices, I don't think I will lose any sleep over conspiracy theories for my remaining few years.
IMHO, -jgp
You have the evidence in front of your eyes. You decide what you do with it. But don't insult us by calling it conspiracy theories.
Meant no harm. Just not drawing the same conclusions. Until the actual end cases play out they are all hypothetical. Have a good day.
The State of California knows it is in a highly responsible position regarding the data-processing requirements that impact everyone who uses a Sand Hill-invested technology, coded product, or AI. The majority of the returns in that exchange depend on what is done with ad surveillance, which has become an intelligence and mass-surveillance tool for illiberal technocratic government predators. The best thing to do is to not allow your data to be used in this way.
So the CPPA, the agency created under California's privacy law (one whose regulatory range keeps expanding to provide legal remedies to privacy-impacted consumers), is hosting a stakeholder meeting to make sure you can cleanly pull your data out of the mix without a 30-tier nested adver-silo that only benefits the surveillance-capital industry and never you.
I strongly urge the interested to listen and to produce thoughtful written input.
The event link here: https://cppa.ca.gov/meetings/agendas/20240626.pdf
I would love to host any of your public letters. libertyinmanydirections [at] pm. me .
Thank you for paying with your attention. This is worth the seven seconds.
Thank you so much for this.
In my view, what is at stake here isn't surveillance. Surveillance will become obsolete once a tool of total control has been deployed, and it won't be governments in the driving seat; they are actually going to be tricked, like anyone else, into believing that what they get out of the "magic box" has some sort of utility for them. Meanwhile, the subtle biases introduced by the owners of the data and the model will give those owners almost full control over the decisions taken by anyone who relies on their technology. Imagine what sort of power you have if you can predict the future simply because you are the one programming the future and serving it to the masses.
Could be, but I think you're entering the realm of science fiction too much there. "Programming the future" is not really a thing. I agree the situation is more complex than just government = devil.
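A toy illustration of the "subtle bias" mechanism described a couple of comments up, with no science fiction required: everything here is invented, but it shows how a hidden nudge in a ranking model can flip every downstream decision while the system still looks neutral to its users.

```python
# True quality scores for three hypothetical options (all values invented).
OPTIONS = {"vendor_a": 0.84, "vendor_b": 0.81, "vendor_c": 0.78}

# Owner-chosen boost, invisible to anyone consuming the ranking.
HIDDEN_BOOST = {"vendor_b": 0.05}

def rank(options, boost=None):
    """Sort options by their score plus any hidden adjustment."""
    boost = boost or {}
    return sorted(options, key=lambda k: options[k] + boost.get(k, 0.0), reverse=True)

print(rank(OPTIONS))                      # honest: ['vendor_a', 'vendor_b', 'vendor_c']
print(rank(OPTIONS, boost=HIDDEN_BOOST))  # biased: ['vendor_b', 'vendor_a', 'vendor_c']
```

If everyone defers to the ranking, whoever controls HIDDEN_BOOST effectively decides the outcome - which is the "programming the future" worry put in more prosaic terms.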