Discussion about this post

Kai Williams:
I appreciate the essay, though I feel like step 1 is much less responsible for the porridge that is LLM output than steps 2 and 3. LLMs are fantastically weird, faceted objects that needn't be trained on such a narrow distribution.

An example: LLMs have "true sight" more or less -- they can figure out facts about the people who are writing them prompts. So there's some extent to which by just existing, I'll get a different output from an LLM than you will. What would the world look like if LLMs were trained to exaggerate the difference?

I agree with the broad thesis though!

__browsing:
I'm not sure I'd describe the 21st century as a time of declining deviance by most standards, what with the pride flag and ostentatious neurodiversity and so on.
