SDXL Turbo works best (at least from my trials today) with the LCM sampler, producing better results in fewer iterations with it than it does with Euler A.
Using the ComfyUI workflow [0], I'm getting really impressive results (obviously not as quick as single-step, but still very fast [1]) at 768x768 and 10 steps, using the LCM sampler instead of Euler Ancestral and setting CFG to 2.0 instead of 1.0.
Combined with the ComfyUI Manager extension, which provides an index of custom node packages and can install the missing ones from a loaded workflow, it's very easy to get up and running with a new workflow.
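For anyone wondering what the CFG knob actually does: it's the classifier-free guidance scale. A minimal sketch of the standard CFG formula (not ComfyUI's actual code, and the numbers are illustrative):

```python
def apply_cfg(uncond, cond, scale):
    """Classifier-free guidance: move from the unconditional noise
    prediction toward (and, for scale > 1, past) the conditional one.
    scale=1.0 returns the conditional prediction unchanged, which is
    why CFG 1.0 effectively disables guidance."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

# Toy 2-element "predictions", purely illustrative:
uncond = [0.0, 1.0]
cond = [2.0, 3.0]
print(apply_cfg(uncond, cond, 1.0))  # [2.0, 3.0]
print(apply_cfg(uncond, cond, 2.0))  # [4.0, 5.0]
```

So bumping CFG from 1.0 to 2.0 extrapolates past the conditional prediction, pushing the result harder toward the prompt at the cost of a second model evaluation per step.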
Bad news on that front: it's only free for noncommercial use, and this isn't just a temporary early-release restriction but the new general direction for Stability AI:
I mean, the fact that SD has shown it's possible just means that other research groups can use the same concept...
Someone on Reddit was actually pointing to a model that's narrower in scope than SDXL but was trained by a single guy on an A100, so there's no reason we can't expect other groups to pop up, or maybe a consortium of freelancers from the fine-tuning community to get together and start their own base model.
Nope, and this is just the beginning. Imagine a year from now, once the people who brought us LCM, ControlNet, and IP-Adapter start looking at the possibilities, not to mention fine-tunes on Turbo.
I just tried it locally with a 3070 and it was about 3 seconds per render. I'm far from great at this stuff and it was my first use of ComfyUI, so I don't know if that number could be improved on my setup.
You can already do that with existing models, but instead of generating an image in a few seconds it takes at least a minute; perhaps SDXL Turbo brings that down.
Code: https://github.com/discus0434/faster-lcm
Blog post (in Japanese): https://zenn.dev/discus0434/articles/12427b887b4082
I have no idea how this can be used yet, but they claim 26 fps on an RTX 3090.
I wonder if SDXL Turbo + LCM will be a thing, to get to realtime generation
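Whether any combination reaches "realtime" is mostly a steps-times-latency question. A back-of-envelope sketch, with hypothetical per-step and decode costs (not measured numbers):

```python
def effective_fps(per_step_ms, steps, overhead_ms=0.0):
    """Frames per second for a diffusion pipeline running `steps`
    denoising steps of `per_step_ms` each, plus a fixed per-image
    overhead (VAE decode, etc.). All inputs here are illustrative."""
    total_ms = steps * per_step_ms + overhead_ms
    return 1000.0 / total_ms

# Single-step Turbo at a hypothetical 40 ms/step + 10 ms decode:
print(round(effective_fps(40, 1, 10), 1))  # 20.0
# A 4-step LCM-style schedule at the same per-step cost:
print(round(effective_fps(40, 4, 10), 1))  # 5.9
```

The point being: step count dominates, so a distilled model that holds up at 1-4 steps is what makes realtime plausible at all.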
[0] drop this image on the comfyui canvas: https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/sd...
[1] On a 3080Ti laptop card
The barrier is really being lowered and this is beautiful.
What a great time to be alive.
https://twitter.com/EMostaque
https://twitter.com/toyxyz3/status/1729922123119104476
https://nitter.net/toyxyz3/status/1729922123119104476
Here's a live demo, but you need to register an account.
https://clipdrop.co/stable-diffusion-turbo
Yes, it requires registering an account, but it's the lowest-friction option I know of.