What You May Have Missed #10
Stable Diffusion 2 / Generative AI news / Meta's Cicero and Neurosymbolic AI / AI misinformation / Four bits on automation / Using robots to talk to animals
Stable Diffusion 2 may not be so bad after all
Stability.ai released Stable Diffusion 2 (SD 2) on November 23. Although the release was highly anticipated, many users were disappointed by the company’s decisions regarding changes to the training dataset and the new text/image encoder (OpenCLIP-ViT/H).
I wrote an analysis for TAB this week arguing that SD 2 is technically superior to its predecessors and more artist-friendly, but that users wanted—and expected—to keep taking advantage of an all-too-common form of non-accountability.
Because of those dataset changes, prompting techniques that worked with SD may not work with SD 2. People are now trying to update our collective knowledge of SD 2’s latent visual space (i.e., figure out which prompt heuristics work).
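One heuristic users have converged on is leaning more heavily on explicit style tags and negative prompts. As a minimal sketch (the helper name and the example tags below are hypothetical illustrations, not an official Stability.ai API), an SD 2 prompt pair might be assembled like this:

```python
# Sketch of assembling a prompt/negative-prompt pair for SD 2.
# The helper name and tag choices are hypothetical, for illustration only.

def build_sd2_prompt(subject, style_tags=(), negative_tags=()):
    """Join a subject with style tags, and build a matching negative prompt."""
    prompt = ", ".join([subject, *style_tags])
    negative_prompt = ", ".join(negative_tags)
    return prompt, negative_prompt

prompt, negative = build_sd2_prompt(
    "portrait of an astronaut",
    style_tags=("sharp focus", "highly detailed", "studio lighting"),
    negative_tags=("blurry", "low quality", "deformed"),
)
# prompt   -> "portrait of an astronaut, sharp focus, highly detailed, studio lighting"
# negative -> "blurry, low quality, deformed"
```

Libraries such as Hugging Face’s diffusers accept this kind of string pair through the `prompt` and `negative_prompt` arguments of their Stable Diffusion pipelines.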
Here’s a thread highlighting the advantages and uniqueness of SD 2, and a few tips on how to prompt it:
SD 2 seemed like a step back at first, but some (if not most) of the apparent problems disappear with good prompts. Users should approach the new model as a new tool; re-learning how to prompt SD 2 is necessary. As one user puts it, “yes, you need to work for it with #StableDiffusion2 but you can get an incredible level of detail:”
Also, models built on top of SD may not work with SD 2.
Devs are already shipping solutions: KaliYuga, an artist and developer at Stability.ai, trained a Dreambooth model with SD 2 (people were worried that wouldn’t be possible with the new version).
And Pharmapsychotic released a new CLIP interrogator adapted to the new Stable Diffusion encoder.
Generative AI news
Besides Stable Diffusion 2, things keep moving in every corner of the generative AI space.