Readit News
herpdyderp · 2 years ago
32GB M1 Max is taking 25 seconds on the exact same prompt as in the example.

Edit: it seems the "per second" requires the `--continuous` flag to bypass the initial startup time. With that, I'm now seeing the ~1 second per image time (if initial startup time is ignored).

m3kw9 · 2 years ago
What does bypass startup time really do? Does it keep everything in memory or something?
fassssst · 2 years ago
Probably; you have to load the weights from disk at some point.
m3kw9 · 2 years ago
Every time I execute:

    python main.py "a beautiful apple floating in outer space, like a planet" --steps 4 --width 512 --height 512

it re-downloads 4 GB worth of stuff. Can't the script save the files, check whether they're already there, and only download if not? Or am I doing something wrong?

jandrese · 2 years ago
For me it does not re-download anything on the second run. But it is also only running on the CPU and is slow AF.

With 5 iterations the quality is...not good. It looks just like Stable Diffusion with low iteration count. Maybe there is some magic that kicks in if you have a more powerful Mac?

simple10 · 2 years ago
Did you enable the virtualenv first? If not, it might not be caching the models properly.
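If models really are being re-fetched on every run, one quick sanity check is where the Hugging Face hub is caching them. A sketch (the `HF_HOME` variable and the `~/.cache/huggingface` default are standard `huggingface_hub` behavior, not something this repo configures):

```python
import os
from pathlib import Path

# huggingface_hub caches downloads under HF_HOME
# (default: ~/.cache/huggingface). If every run re-downloads,
# this directory is probably not persisting between runs --
# e.g. it points into a temp dir or a throwaway container layer.
cache_root = Path(os.environ.get("HF_HOME", str(Path.home() / ".cache" / "huggingface")))
print(f"HF cache: {cache_root} (exists: {cache_root.exists()})")
```

If the directory exists and keeps growing but runs still re-download, the script is likely running under a different user or environment each time.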
simple10 · 2 years ago
This is awesome! It only takes a few minutes to get installed and running. On my M2 mac, it generates sequential images in about a second when using the continuous flag. For a single image, it takes about 20 seconds to generate due to the initial script loading time (loading the model into memory?).

I know what I'll be doing this weekend... generating artwork for my 9 yo kid's video game in Game Maker Studio!

Does anyone know any quick hacks to the python code to sequentially prompt the user for input without purging the model from memory?

simple10 · 2 years ago
Answered my own question. Here's how to add an --interactive flag to the script to continuously ask for prompts and generate images without needing to reload the model into memory each time.

https://github.com/replicate/latent-consistency-model/commit...
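For anyone who doesn't want to read the diff, the core idea is just: load the pipeline once, then loop over `input()` prompts. A minimal sketch of that loop — the `generate` callable stands in for whatever the repo's pipeline call actually is (hypothetical, not the commit's exact code):

```python
def interactive_loop(generate, read_prompt=input):
    """Load-once, generate-many: `generate` wraps the already-loaded
    pipeline, so each iteration skips the ~20 s model load.
    Returns the number of images generated (empty prompt quits)."""
    count = 0
    while True:
        prompt = read_prompt("prompt> ").strip()
        if not prompt:
            break
        generate(prompt)  # model stays resident between calls
        count += 1
    return count


# Stubbed demo: three prompts, then an empty line to exit.
prompts = iter(["an apple", "a pear", "a planet", ""])
made = interactive_loop(generate=lambda p: None,
                        read_prompt=lambda _: next(prompts))
```

In the real script, `generate` would close over the loaded model and write each image to disk; only the loop structure matters here.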

Maxion · 2 years ago
> It only takes a few minutes to get installed and running

A few minutes? I have to download at least 5GiB of data to get this running.

simple10 · 2 years ago
Lol. Yeah, I have 1.2 Gbps internet.
m3kw9 · 2 years ago
The stupid script seems not to know how to save to disk, so it re-downloads on every run.
maccard · 2 years ago
I've got a 500 Mbps wifi connection. It took me less than 5 minutes from git clone to having my first image (I did have Python installed already, though).
naet · 2 years ago
Well, how do they look? I've seen some other image generation optimizations, but a lot of them make a significant tradeoff in reduced quality.
oldstrangers · 2 years ago
Interesting timing, because part of me thinks Apple's "Scary Fast" event has to do with generative AI.
joshstrange · 2 years ago
I think the current rumors point to MBPs, which would be odd (updating the Pros before the base models), but I wouldn't complain.
00deadbeef · 2 years ago
Not only odd because of that, but also because it's been less than a year since they were updated to M2 Pro/Max.
m3kw9 · 2 years ago
They likely won't show any generative software until the next version of macOS comes out; they don't usually showcase standalone features without a bigger strategy that includes the OS.
hackthemack · 2 years ago
If you want to run this on a Linux machine using only the CPU:

Follow the instructions, but before actually running the command to generate an image, open main.py and change line 17 to:

    model.to(torch_device="cpu", torch_dtype=torch.float32).to('cpu:0')

Basically, change the backend from mps to cpu.
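The same edit can also be made conditional rather than hard-coded. A sketch of just the selection logic, with the availability checks injected so it runs without torch installed (`pick_device` is a hypothetical helper, not something in main.py; in real code the flags would come from `torch.backends.mps.is_available()` and `torch.cuda.is_available()`):

```python
def pick_device(mps_ok: bool, cuda_ok: bool):
    """Choose the backend/dtype pair for the model.to(...) call on
    line 17 of main.py. CPU stays on float32 because many CPU ops
    lack half-precision kernels; mps matches the repo's default."""
    if mps_ok:
        return "mps:0", "float32"   # repo default on Apple Silicon
    if cuda_ok:
        return "cuda:0", "float16"  # fp16 is fine on NVIDIA GPUs
    return "cpu", "float32"         # fallback for CPU-only boxes
```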

brucethemoose2 · 2 years ago
For linux CPU only, you want https://github.com/rupeshs/fastsdcpu
zorgmonkey · 2 years ago
It is very easy to tweak this to generate images quickly on an NVIDIA GPU:

* after `pip install -r requirements.txt` do `pip3 install torch torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121`

* on line 17 of main.py change torch.float32 to torch.float16 and change mps:0 to cuda:0

* add a new line after 17 `model.enable_xformers_memory_efficient_attention()`

The xFormers step is optional, but it should make it a bit faster. For me this got it generating images in less than a second ([00:00<00:00, 9.43it/s]) and used 4.6GB of VRAM.
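Rolled into one place, those three edits look roughly like this. A sketch using diffusers' generic `DiffusionPipeline` API — the model id and the exact shape of the repo's line 17 are assumptions, and actually calling this needs a CUDA build of torch plus xformers installed:

```python
def build_cuda_pipeline():
    """LCM pipeline on an NVIDIA GPU: fp16 weights, CUDA device,
    optional xFormers attention. Mirrors the three edits above;
    not called here because it downloads gigabytes of weights."""
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "SimianLuo/LCM_Dreamshaper_v7",   # model id is an assumption
        torch_dtype=torch.float16,        # was torch.float32
    ).to("cuda:0")                        # was "mps:0"
    pipe.enable_xformers_memory_efficient_attention()  # optional speed-up
    return pipe
```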

agloe_dreams · 2 years ago
This... but as a menu bar item that does it for you.
bigethan · 2 years ago
Mac Shortcuts are exactly the use case for this: menu bar item, ask for a prompt, run the script. I was always wary of Shortcuts, but they're quite powerful and nicely integrated with the OS in the latest versions.
m3kw9 · 2 years ago
GPT-4 can likely give you the code for this.