32GB M1 Max is taking 25 seconds on the exact same prompt as in the example.
Edit: it seems the "per second" requires the `--continuous` flag to bypass the initial startup time. With that, I'm now seeing the ~1 second per image time (if initial startup time is ignored).
Every time I execute: python main.py \
"a beautiful apple floating in outer space, like a planet" \
--steps 4 --width 512 --height 512
It re-downloads 4 gigs worth of stuff on every execution. Can't the script save it and check whether it's already there before downloading, or am I doing something wrong?
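For what it's worth, the downloads go through huggingface_hub, which caches under `$HF_HOME` (default `~/.cache/huggingface`) and shouldn't re-fetch on a second run. A stdlib sketch of how to check for an already-downloaded model, assuming the current hub cache layout (the repo directory name below is a made-up example, not the actual model):

```python
import os
from pathlib import Path

def model_cache_dir() -> Path:
    # huggingface_hub caches under $HF_HOME, defaulting to ~/.cache/huggingface
    return Path(os.environ.get("HF_HOME", str(Path.home() / ".cache" / "huggingface")))

def already_downloaded(repo_dirname: str) -> bool:
    # Hub repos are cached as e.g. hub/models--some-org--some-model (hypothetical name)
    return (model_cache_dir() / "hub" / repo_dirname).exists()
```

If a second run still re-downloads everything, it's worth checking whether something (a container, a cleanup tool) is wiping that cache directory between runs.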
For me it does not re-download anything on the second run. But it is also only running on the CPU and is slow AF.
With 5 iterations the quality is...not good. It looks just like Stable Diffusion with low iteration count. Maybe there is some magic that kicks in if you have a more powerful Mac?
This is awesome! It only takes a few minutes to get installed and running. On my M2 mac, it generates sequential images in about a second when using the continuous flag. For a single image, it takes about 20 seconds to generate due to the initial script loading time (loading the model into memory?).
I know what I'll be doing this weekend... generating artwork for my 9 yo kid's video game in Game Maker Studio!
Does anyone know any quick hacks to the python code to sequentially prompt the user for input without purging the model from memory?
Answered my own question. Here's how to add an --interactive flag to the script to continuously ask for prompts and generate images without needing to reload the model into memory each time.
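Not the actual diff, but a minimal sketch of the idea: set the pipeline up once, then loop over prompts so the model stays in memory. `generate` stands in for whatever function main.py uses to run the pipeline and save the image; an empty line or `quit` exits.

```python
import sys

def interactive_loop(generate, prompts=None, out=sys.stdout):
    """Generate one image per prompt without reloading the model.

    generate: callable taking a prompt string (the expensive pipeline call,
    constructed once before this loop so the model is only loaded one time).
    prompts: optional iterable of prompts; defaults to reading from stdin.
    """
    source = prompts if prompts is not None else iter(lambda: input("prompt> "), "")
    for prompt in source:
        prompt = prompt.strip()
        if not prompt or prompt.lower() in {"quit", "exit"}:
            break
        path = generate(prompt)
        print(f"saved {path}", file=out)
```

The key point is just that the model load happens outside the loop; everything inside it is cheap.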
I've got a 500Mb WiFi connection. It took me less than 5 minutes from git clone to having my first image (I did have Python installed already, though).
Apple likely won't show any generative software until the next version of macOS comes out; they don't usually showcase standalone features without a bigger strategy to include them in the OS.
Mac Shortcuts are exactly the use case for this: menu bar, ask for a prompt, run the script. I was always wary of Shortcuts, but they're quite powerful and nicely integrated with the OS in the latest versions.
https://github.com/replicate/latent-consistency-model/commit...
A few minutes? I have to download at least 5GiB of data to get this running.
Follow the instructions, but before actually running the command to generate an image, open up main.py and change line 17 to `model.to(torch_device="cpu", torch_dtype=torch.float32).to('cpu:0')`.
Basically, change the backend from mps to cpu.
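Rather than hand-editing line 17 per machine, the backend choice could be made at runtime. A minimal sketch (my own, not from the thread) of the selection logic, matching the dtypes people report working here: float16 on CUDA, float32 on MPS or CPU:

```python
def pick_device(cuda_ok: bool, mps_ok: bool):
    """Choose (device, dtype name) the way the edits in this thread do:
    float16 on cuda:0, float32 on mps:0, float32 CPU fallback."""
    if cuda_ok:
        return "cuda:0", "float16"
    if mps_ok:
        return "mps:0", "float32"
    return "cpu", "float32"
```

In main.py this would be fed from `torch.cuda.is_available()` and `torch.backends.mps.is_available()` before the `model.to(...)` call.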
* after `pip install -r requirements.txt` do `pip3 install torch torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121`
* on line 17 of main.py change torch.float32 to torch.float16 and change mps:0 to cuda:0
* add a new line after 17 `model.enable_xformers_memory_efficient_attention()`
The xFormers stuff is optional, but it should make it a bit faster. For me this got it generating images in less than a second [00:00<00:00, 9.43it/s] and used 4.6GB of VRAM.