Brilliant, per usual. What a wild reveal at the end!
I wrote something similar: “There is no test for AI contamination. There’s no blood sample for whether the sentence you just wrote was yours or whether your favorite author has now become a purveyor of slop or whether you’re unconsciously reproducing a pattern you absorbed from a thousand ChatGPT outputs.
The asterisk need not be applied to your work. It’s something that now exists in the atmosphere. You breathe it in whether you use the tools or not. And, like radiation, repeated exposure slowly kills you.”
More: https://www.whitenoise.email/p/the-ai-asterisk
Increasingly I take turgid human-written pieces and just get AI to summarize their key arguments -- which should not, of course, be necessary now that journalists no longer have to fill print columns.
The "cooling water leaking off GPU racks" line is genuinely one of the more striking images I've read lately. The question of intent is real — but what's interesting is that the Jevons dynamic applies here too: as the cost of generating text drops to zero, the attention premium on text that clearly required human thought goes up, not down. AI;DR might be less a boycott and more a new attention market forming in real time.
Haha god damn it Alberto 😂😂 point proven and well done!
What's surprised me is that social platforms don't have user-driven content labelling features to indicate whether something was created by AI or not (with some level of differentiation on the spectrum of what that actually means). The self-reporting wouldn't be perfect, but it would be nice to know whether something was AI going into it...
That was my stance, until I read this. I did derive insight from the post, even though it was AI. If you had labelled it as AI-written, I would probably have skipped it.
Here's a question for you: when you do these AI-written article experiments, how much effort would you say you put into prompting and revising, in a sort of 'creative director' manner? Say Substack had labels and there was an option to indicate the percentage of human vs. AI involvement; what would you label this (let's say, in 10% increments)? The point I'm trying to get to is: at what rate of human involvement does writing feel human? Do you have a sense for it?
Well, I would say this is still 90% me haha. There's not that much AI writing, but there's some. That's almost always how it is with these experiments. AI can't write something like this no matter how much care you put into the prompt. The thing is that the 10% is completely invisible. Then it will be 20%, then 30%. Most AI writing is instantly recognizable. Some will not be.
As for my personal view: I think around 20-30% I would still count it as human. As "acceptable". Ofc it depends which parts are outsourced (it's the argument, some sentences here and there, the main points, the headline, and so on) but if my AI-meter starts to fill up then that's a bad sign for me.
Thanks for the honesty, Alberto! But hey, 90% you? Then the irony isn’t maxed out at all, I would say... then you tricked me twice!! 😄
And then, I totally agree: it matters A LOT which part is outsourced!
You write:
“Ofc it depends which parts are outsourced (it's the argument, some sentences here and there, the main points, the headline, and so on)”
If AI made the argument and the main points, then how does that count in the percentage? I would say more than 10 percent; one could argue 60 or 70 percent.
> To a degree you’d rather not know, I am AI. And you did read this.
Would've bet a tenner on that, if I could have but found a sucker. Not out of anything in the tone or writing style or whatever, but because... c'mon, it's a trope verging on cliché for writing about AI writing.
Hahaha then you'd lose your bet! This is 90% written by me. No model can write something like this however hard you try. But there's some AI sprinkled here and there. Just invisible. (Btw, I invented that "cliché" so I'm allowed to exploit it!)
Ach, fair enough. (And I was trying to remember where I'd first seen it!)
> No model can write something like this however hard you try.
I do think it could be arranged, in principle, potentially with a hyper-specific mono-purpose model that would sacrifice aptitude at any other task. But the amount of effort you'd need to input into achieving this result would be "why the fuck even bother" levels. I certainly have neither the time, nor expertise, nor petty spite that would be needed for such an endeavor.
You’re going to have to pay people to read this stuff soon
AI;DR.... Should I feel uncomfortable???