Readit News
gurtinator commented on Fast-DLLM: Training-Free Acceleration of Diffusion LLM   arxiv.org/abs/2505.22618... · Posted by u/nathan-barry
ProofHouse · 2 months ago
Wait, everything I've read about Diffusion Language Models, and the demos I've seen and tried, says inference is faster than in traditional architectures. They state the opposite. What gives?
gurtinator · 2 months ago
That's because those demos probably use parallel decoding. In principle, dLLM inference is slower, since each diffusion step runs bidirectional generation over the entire generation window. Example: to fill a 128-token window by unmasking one token per step, you need 128 diffusion steps, each a full forward pass over the window.
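A minimal sketch of that loop, assuming a hypothetical bidirectional denoiser `model` that returns per-position logits over the window; the names, the mask id, and the confidence-based unmasking rule are illustrative assumptions, not the paper's implementation:

    # Sketch only: `model`, `mask_id`, and confidence-based selection
    # are assumptions for illustration, not the Fast-DLLM code.
    import torch

    def decode(model, window=128, tokens_per_step=1, mask_id=0):
        seq = torch.full((1, window), mask_id)       # start fully masked
        steps = 0
        while (seq == mask_id).any():
            logits = model(seq)                      # bidirectional pass over the whole window
            conf, pred = logits.softmax(-1).max(-1)  # per-position confidence and argmax token
            conf[seq != mask_id] = -1.0              # skip positions already unmasked
            k = min(tokens_per_step, int((seq == mask_id).sum()))
            idx = conf.topk(k, dim=-1).indices       # unmask the k most confident positions
            seq.scatter_(1, idx, pred.gather(1, idx))
            steps += 1
        return seq, steps

With tokens_per_step=1 this takes 128 full forward passes for a 128-token window; the parallel-decoding demos unmask several tokens per step, cutting the step count accordingly.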
gurtinator commented on Show HN: I invented a new generative model and got accepted to ICLR   discrete-distribution-net... · Posted by u/diyer22
gurtinator · 2 months ago
How did this get accepted without any baseline comparisons? They should have compared it to VQ-VAE, diffusion inpainting, and many more.
