Nice retrospective, but I guess this process is no longer needed as models get better; esp. as they start enabling features like consistent subjects. Seems like a lot of overhead to correct text for inspirational images, but I can imagine you always need to present some form of _quality_ to your clients.
Feel like control nets and some minimal photoshop work would've been better.
totally. it got to a point where most of the text generated in our images was incorrect, and so it wasn't a great look showing that to our clients.
we're actually working on some form of what you described where we take images generated from LLMs + add consistent logos discretely rather than generatively.
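a minimal sketch of what that discrete overlay could look like. In practice you'd reach for an image library like Pillow, but the core is just the standard alpha-over blend; the function name and the list-of-pixel-rows representation here are illustrative, not any real API:

```python
def composite_logo(base, logo, x0, y0):
    """Paste an RGBA `logo` onto an RGB `base` in place at (x0, y0),
    weighting by the logo's alpha channel — a discrete overlay step,
    no generative model involved."""
    for dy, row in enumerate(logo):
        for dx, (r, g, b, a) in enumerate(row):
            if a == 0:
                continue  # fully transparent logo pixel: leave base untouched
            br, bg, bb = base[y0 + dy][x0 + dx]
            t = a / 255  # logo opacity in [0, 1]
            base[y0 + dy][x0 + dx] = (
                round(r * t + br * (1 - t)),
                round(g * t + bg * (1 - t)),
                round(b * t + bb * (1 - t)),
            )
```

the upside is determinism: the logo lands pixel-perfect every time, which generative insertion can't guarantee.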
It's frankly amazing to me that "ask another LLM to evaluate the image" actually produces useful feedback that results in actual improvement from the first LLM.
But then, I guess it's not much different an idea from the earlier use of GANs, or from telling LLMs to "stop hallucinating", etc.
totally. the way i think about it (purely based on intuition) is that asking an LLM to do understanding + image generation is too complex for it to be effective. if we separate out the tasks into discrete steps, the evaluation becomes better, and the generation simply becomes instruction following.
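the split described above can be sketched as a small loop — one model generates, a second one critiques, and the critique is folded back into the prompt. the `generate`/`evaluate` callables and the verdict dict shape are assumptions for illustration, not a specific API:

```python
def refine_image(prompt, generate, evaluate, max_rounds=3):
    """Separate understanding from generation: `evaluate` returns a
    verdict like {"ok": bool, "feedback": str}, and the feedback turns
    the next generation into plain instruction following."""
    image = generate(prompt)
    for _ in range(max_rounds):
        verdict = evaluate(image)
        if verdict["ok"]:
            break
        # fold the critique back into the prompt and regenerate
        prompt = f"{prompt}\nFix: {verdict['feedback']}"
        image = generate(prompt)
    return image
```

the `max_rounds` cap matters in practice — without it a picky evaluator can loop forever.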
we're currently in the process of doing this. i think something that could potentially work is to iterate on the initial image composition / structure using cheaper models, and then upscale at the end. this way you're saving on the iteration cost, but eventually land on a higher-resolution image.
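rough back-of-envelope for why iterating cheap and upscaling once pays off — the per-image prices below are made up purely for illustration, not any provider's real rates:

```python
# Hypothetical per-image prices — illustrative only.
CHEAP_PER_IMAGE = 0.01    # assumed cost of a low-res draft model
PREMIUM_PER_IMAGE = 0.12  # assumed cost of a high-res model
ROUNDS = 5                # feedback iterations on composition

# draft cheaply N times, pay for one high-res upscale at the end
cheap_pipeline = ROUNDS * CHEAP_PER_IMAGE + PREMIUM_PER_IMAGE
# iterate at full resolution every round
premium_pipeline = ROUNDS * PREMIUM_PER_IMAGE

print(f"cheap-then-upscale: ${cheap_pipeline:.2f}")
print(f"premium-only:       ${premium_pipeline:.2f}")
```

under these assumed numbers the savings grow linearly with the number of feedback rounds, since only the final upscale is billed at the premium rate.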
This is a wonderful writeup of building a simple agentic system in general. What OP describes is more or less the bare minimum you should be doing at this point to get good (consistent) results from an LLM; single-shot prompting is a thing of the past.