bsenftner · 8 days ago
If you want to play with this, as in really play, with over a dozen model variants, acceleration LoRAs, and a vibrant community, ya gotta check out:

https://github.com/deepbeepmeep/Wan2GP

And the discord community: https://discord.gg/g7efUW9jGV

"Wan2GP" is AI video and images "for the GPU poor", get all this operating with as little as 6GB VRAM, Nvidia only.

diggan · 8 days ago
On the other side, are there any projects focusing on performance instead? I have the VRAM available to run Wan2.1, but it still takes minutes per frame. Basically, something like what vLLM is for running local LLM weights, but for video/Wan?
bsenftner · 8 days ago
This person here has accelerator LoRAs that cut generation from 30+ sampling steps down to 4 or 8 with minimal quality loss (a rough sketch of that workflow is below): https://huggingface.co/Kijai/WanVideo_comfy

There are a lot of people focused on performance, using various methods, just as there are a lot of people focused on non-performance issues like fine-tunes that add aspects the models lack: terminology linking professional media terms to the model, pop culture terminology the model does not know, accuracy of body posture during fighting, dance, gymnastics, and sports, and then less flashy but pragmatic actions like proper use of tableware, chopsticks, keyboards, and musical instruments - complex actions that stand out when done incorrectly or never shown at all. The model's knowledge is broad but has limits, which people are filling in.
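
For anyone who wants to try the step-reduction route, here is a rough sketch using diffusers' Wan pipeline. The LoRA filename is a placeholder, and Kijai's files are packaged for ComfyUI, so the exact file and whether it loads directly this way are assumptions to verify against the repo:

    # Rough sketch: text-to-video with a step-reduction LoRA via diffusers.
    # The LoRA filename is a placeholder; Kijai's repo ships ComfyUI-format
    # weights, so the exact file name and loading path may differ.
    import torch
    from diffusers import WanPipeline
    from diffusers.utils import export_to_video

    pipe = WanPipeline.from_pretrained(
        "Wan-AI/Wan2.1-T2V-14B-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # keeps VRAM usage manageable on consumer cards

    # Distillation-style accelerator LoRAs drop the schedule from 30+ steps to 4-8,
    # usually with classifier-free guidance disabled (guidance_scale=1.0).
    pipe.load_lora_weights("Kijai/WanVideo_comfy", weight_name="wan_accel_lora.safetensors")

    frames = pipe(
        prompt="a red fox running through fresh snow, cinematic lighting",
        num_frames=81,
        num_inference_steps=8,
        guidance_scale=1.0,
    ).frames[0]
    export_to_video(frames, "fox.mp4", fps=16)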

bobajeff · 8 days ago
If having only 6GB VRAM is GPU poor then I must be GPU destitute.
giancarlostoro · 7 days ago
Try Framepack... nevermind, even that needs at least 6GB VRAM...

https://github.com/lllyasviel/FramePack

hirako2000 · 8 days ago
It's hard to find an Nvidia consumer card with less than 12GB of VRAM, and not just these days.

By "GPU poor" they didn't mean GPU-less or a GPU from the previous decade. The readme says that only Nvidia is supported.

cubefox · 8 days ago
Arguably the most interesting facts about the new Wan 2.2 model:

- they are now using a 27B MoE architecture (with two 14B experts, for low-level and high-level detail), which was usually only used for autoregressive LLMs rather than diffusion models

- the smaller 5B model supports up to 720p24 video and runs on 24 GB of VRAM, e.g. an RTX 4090, a consumer graphics card

- if their benchmarks are reliable, the model performance is SOTA even compared to closed source models

liuliu · 7 days ago
Some facts are wrong:

- The 27B "MoE" are not the MoE commonly referred to in LLM world. It is not MoE on FFN layers. It simply means two different models used for different denoising timestep ranges (exactly the same as SDXL-Base / SDXL-Refiner). Calling it MoE is not technically wrong. But claiming "which were usually only used for autoregressive LLMs rather than diffusion models" is just wrong (not to mention HiDream I1 is a model actually incorporated MoE layers (in FFN layer) and is a diffusion model).

- The A14B models can run on 24GiB VRAM too, with CPU offloading and quantization.

- Yes, it is SotA even including some closed source models.
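
To make that distinction concrete, here is a toy sketch of what this kind of "MoE" amounts to: two whole networks selected by denoising timestep, rather than per-token expert routing inside FFN layers. Every name and number here is illustrative, not Wan's actual code:

    # Toy illustration only: two separate denoisers switched by timestep,
    # analogous to SDXL base + refiner. Not Wan's real architecture or API.
    import torch
    import torch.nn as nn

    class TinyDiT(nn.Module):
        """Stand-in for a full 14B diffusion transformer."""
        def __init__(self):
            super().__init__()
            self.net = nn.Linear(16, 16)
        def forward(self, x, t):
            return self.net(x)

    high_noise_expert = TinyDiT()  # handles early, very noisy timesteps
    low_noise_expert = TinyDiT()   # handles late, nearly clean timesteps
    boundary = 0.9                 # illustrative switch point on normalized time

    latents = torch.randn(1, 16)
    for t in torch.linspace(1.0, 0.0, steps=20):
        expert = high_noise_expert if t >= boundary else low_noise_expert
        noise_pred = expert(latents, t)
        latents = latents - 0.05 * noise_pred  # toy update standing in for a scheduler step

An FFN-level MoE, by contrast, would route individual tokens to different expert sub-layers inside every transformer block, with both experts active within a single forward pass.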

mandeepj · 8 days ago
> - the smaller 5B model supports up to 720p24 video and runs on 24 GB of VRAM, e.g. an RTX 4090, a consumer graphics card

Seems like you can run it on 2 GPUs, each with 12 GB of VRAM. At least, a breakdown on their GitHub page implied so.
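
If that two-GPU route works, diffusers' pipeline-level device_map is the usual way to spread components across cards. A hedged sketch only; the repo name and whether everything actually fits in 2x12 GB are assumptions:

    # Hedged sketch: placing pipeline components across two GPUs with
    # diffusers' device_map="balanced". The repo id below is an assumption,
    # and real memory use depends on dtype, resolution, and frame count.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "Wan-AI/Wan2.2-TI2V-5B-Diffusers",  # assumed repo name
        torch_dtype=torch.bfloat16,
        device_map="balanced",  # spreads text encoder / transformer / VAE across GPUs
    )
    frames = pipe(prompt="a sailboat at sunset", num_frames=49).frames[0]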

cubefox · 7 days ago
That would be a lot cheaper than an RTX 4090.
CosmicShadow · 8 days ago
Wan2.1 was great, but Wan2.2 is really awesome! Here are some samples I made locally with my 5090:

- https://imgur.com/a/VeTn4Ej

- https://imgur.com/a/CujxVX3

Those were both image-to-video, and then I upscaled them to 4K. I made the images using Flux Dev Krea.

Took about 3-4 minutes per video to generate and another 2-3 to upscale. Images took 20-40s to generate.
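
For anyone wanting to script a similar image-to-video pass, a sketch with diffusers is below; the model ID, prompt, and settings are assumptions, and the upscaling (Topaz / SeedVR2) would be a separate step:

    # Sketch of an image-to-video pass. Model id and settings are illustrative;
    # the commenter's actual workflow (e.g. ComfyUI) may differ.
    import torch
    from diffusers import WanImageToVideoPipeline
    from diffusers.utils import export_to_video, load_image

    pipe = WanImageToVideoPipeline.from_pretrained(
        "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()

    image = load_image("still_from_flux.png")  # a frame generated with an image model
    frames = pipe(
        image=image,
        prompt="slow camera push-in, natural motion",
        num_frames=81,
        guidance_scale=5.0,
    ).frames[0]
    export_to_video(frames, "clip.mp4", fps=16)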

scroogey · 8 days ago
What did you use to upscale them?
CosmicShadow · 7 days ago
One was with Topaz Video, the other was with SeedVR2.
franky47 · 8 days ago
Quick, someone make a UI for this and call it Obi.
tmikaeld · 7 days ago
The Obi for your Wan

ahmedhawas123 · 8 days ago
Are there video generation benchmarks, similar to the benchmarks that exist for LLMs? Reason I ask is that with a lot of these models you have to go through a long setup cycle before you see any output, and they often break on basic tasks requiring physics, state, etc. Would love to see a comparison of models across basic things like that.
cuuupid · 8 days ago
I’ve been using this via Replicate for a while and it’s honestly amazing while being way cheaper. China is definitely leading on open source
danielbln · 8 days ago
*open weights
ProofHouse · 8 days ago
How can they manage that but not the website?
