Prompting language is definitely something in between programming and natural language. Pure natural language does NOT work well with AI.
The best example of this is how Stable Diffusion prompting radically changed over time. It started with people typing natural language; the AI did 'understand' the subject matter, but the results looked terrible.
Then came the Greg Rutkowski era of cargo-culting prompts, which improved results somewhat.
Then came the big revolution of the NovelAI model. What makes it special is that it uses the Danbooru tagging system (definitely NSFW), where an image is tagged with 20-50 tags of 1-2 words each. The tags are not arbitrarily decided upon but are consistently understood and enforced by the community, avoiding descriptions that don't map to any direct visual feature.
So NovelAI prompts look like this: masterpiece, 1girl, long hair, red hair, dress, street.
There's no longer any human grammar involved, and the results are way better.
Since then, even the photo-style SD models are trained on the NovelAI dataset and use the tagging system originally developed for anime pictures.
What this suggests is that pure natural language prompting may not be the end state of AI. AI will prefer a subset of natural language that is more precisely and consistently defined, with different grammar rules. That becomes the new prompting language.
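For anyone who wants to feel the difference firsthand, here's a minimal sketch using Hugging Face's diffusers library. The checkpoint name is a generic placeholder (not the NovelAI model, which isn't publicly distributed); any SD checkpoint fine-tuned on Danbooru-style tags would respond best to the tag-style prompt.

```python
# Minimal sketch: natural-language vs. tag-style prompting with diffusers.
# "runwayml/stable-diffusion-v1-5" is a stand-in checkpoint; a model
# trained on Danbooru tags is what actually rewards the tag style.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Natural-language prompt: grammatical, but loose from the model's view.
natural = "A beautiful painting of a girl with long red hair wearing a dress, standing in a street"

# Tag-style prompt: each comma-separated tag maps to one visual feature.
tags = ["masterpiece", "1girl", "long hair", "red hair", "dress", "street"]
tagged = ", ".join(tags)

pipe(natural).images[0].save("natural.png")
pipe(tagged).images[0].save("tagged.png")
```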
I think this applies to text-to-image models but not to language models. Although I didn't specify, I was referring more to the latter because I think they're more advanced in this regard. If LMs are the only ones that evolve to adapt to natural language, then we'll use them much more than any other type of generative AI system.
There's an old novel called "A Working Theory of Love" that's essentially about a guy in Silicon Valley hired by a Ray Kurzweil-esque figure to do lots of manual interaction with a chatbot: basically conversation engineering, similar to prompt engineering. Crazy to think this is now a real job instead of just fiction.
I hope you are right!
Another fantastic, nuanced take, Alberto. I appreciate the historical comparison to programming languages and the tradeoff between abstraction and power vs. ease of use. My thought is that we are likely to see a progression of interfaces that will act as training wheels for “citizens” looking to leverage the strengths of LLMs. As with no-code tools, this will allow easy, basic use of the model while limiting the value or flexibility one can derive. A combination of deep, recursive prompting along with chaining will allow prompt engineers to continue to deliver significantly greater value. In addition to prompting skills, these engineers will have to understand how to integrate different models and platforms. Like you, I suspect these skills will remain valuable and relevant for a number of years. We are far from the point where a model will deliver unlimited, valuable responses to our simple verbal commands.
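"Chaining" here can be as simple as feeding one call's output into the next prompt. Below is a minimal sketch using the OpenAI Python client; the model name and the specific prompts are illustrative placeholders, not a recommendation.

```python
# Minimal prompt-chaining sketch: each step's output becomes context for
# the next call. Model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: draft an outline.
outline = ask("Outline a short explainer on prompt engineering in 3 bullet points.")

# Step 2: chain -- the outline feeds the next prompt.
draft = ask(f"Expand this outline into two paragraphs:\n{outline}")

# Step 3: recursive refinement -- the draft feeds a critique-and-revise pass.
final = ask(f"Revise this draft for clarity and concision:\n{draft}")
print(final)
```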
Hey Alberto, kudos on the awesome dive into the world of prompt engineering! Your article totally hit me with that “we're in the future” vibe. It's like we're on the brink of this wild adventure where talking to AI is as normal as asking a buddy for directions. Your breakdown of prompt engineering being this sweet spot between code and natural language made it feel like I'm decoding the secrets of some cool AI language.
I'm totally on board with your idea that prompt engineering could be the English of the future. Imagine casually chatting with our silicon pals! Your analogies, like prompt engineering being the bridge to powerful alien-like AI, got me thinking about the sci-fi vibes of it all.
Big thanks for unraveling the complexities and making prompt engineering sound like the must-have skill for the tech-savvy future. Exciting times ahead, and I'm pumped to see where prompt engineering takes us! Cheers for the mind-blowing insights!