Readit News
egeres commented on     · Posted by u/Sayuj01
vintagedave · 4 days ago
> generated with AI

Is it the website or icons that are generated with AI?

egeres · 4 days ago
To me it's very clear that the icons have that "stable diffusion trying to make pixel art" style. I think this needs an extra layer of code that takes the generated image and turns it into actual pixel art.
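A minimal sketch of the kind of post-processing layer I mean, assuming Pillow is available; the grid size and palette count here are arbitrary placeholder values:

```python
from PIL import Image

def to_pixel_art(path: str, grid: int = 64, colors: int = 16) -> Image.Image:
    """Snap a diffusion output onto a real pixel grid with a reduced palette."""
    img = Image.open(path).convert("RGB")
    # Downscale so every grid cell collapses into a single "pixel".
    small = img.resize((grid, grid), Image.Resampling.NEAREST)
    # Reduce to a small palette, like genuine pixel art.
    small = small.quantize(colors=colors).convert("RGB")
    # Scale back up with nearest-neighbour so the blocks stay crisp.
    return small.resize(img.size, Image.Resampling.NEAREST)
```

A box/bilinear filter on the downscale step would average out the diffusion noise better than nearest-neighbour, but this shows the idea.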
egeres commented on Z-Image: Powerful and highly efficient image generation model with 6B parameters   github.com/Tongyi-MAI/Z-I... · Posted by u/doener
SV_BubbleTime · 13 days ago
Weird, even at 2048 I don’t think it should be using all your 32GB VRAM.
egeres · 13 days ago
It stays around 26 GB at 512x512. I still haven't profiled the execution or looked much into the details of the architecture, but I would assume it trades memory for speed by building caches for each inference step.
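If I do get around to profiling it, a simple first pass would be checking peak allocated memory per call with plain PyTorch (a generic sketch; `fn` stands in for whatever callable runs one generation):

```python
import torch

def report_peak_vram(fn, *args, **kwargs):
    """Run one generation and print the peak VRAM PyTorch allocated for it."""
    torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        out = fn(*args, **kwargs)
    torch.cuda.synchronize()
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"peak VRAM: {peak_gb:.1f} GB")
    return out
```

Note that `max_memory_allocated` only sees PyTorch's caching allocator, so it will read a bit lower than what nvidia-smi shows.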
egeres commented on Z-Image: Powerful and highly efficient image generation model with 6B parameters   github.com/Tongyi-MAI/Z-I... · Posted by u/doener
pawelduda · 13 days ago
Did anyone test it on 5090? I saw some 30xx reports and it seemed very fast
egeres · 13 days ago
Incredibly fast. On my 5090 with CUDA 13 (and the latest diffusers, xformers, transformers, etc.), 9 sampling steps and the "Tongyi-MAI/Z-Image-Turbo" model I get:

- 1.5s to generate an image at 512x512

- 3.5s to generate an image at 1024x1024

- 26s to generate an image at 2048x2048

It uses almost all of the 32 GB of VRAM and close to 100% GPU usage. I'm using the script from the HF post (rough sketch of the run below): https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
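For reference, the timing loop is essentially what the linked model card shows; here's a hedged sketch of that kind of run, assuming the generic `DiffusionPipeline` loader works for this checkpoint (the prompt and arguments are placeholders, not the exact HF script):

```python
import time
import torch
from diffusers import DiffusionPipeline

# Load the Turbo checkpoint; the model card documents the exact pipeline and arguments.
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")

start = time.perf_counter()
image = pipe(
    "a lighthouse at dusk, photorealistic",
    height=1024,
    width=1024,
    num_inference_steps=9,  # the 9 sampling steps mentioned above
).images[0]
torch.cuda.synchronize()
print(f"{time.perf_counter() - start:.1f}s for 1024x1024")
image.save("out.png")
```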

egeres commented on Show HN: Boing   boing.greg.technology/... · Posted by u/gregsadetsky
egeres · 20 days ago
Amazing to see software like this without sign-in requirements or paid subscriptions!
egeres commented on Show HN: Glasses to detect smart-glasses that have cameras   github.com/NullPxl/banray... · Posted by u/nullpxl
egeres · 22 days ago
Super interesting project. At first I thought it would be a naive implementation of YOLO, but I wasn't aware of retro-reflections. The papers he linked in the GH repo discuss very interesting ideas.
egeres commented on HyperRogue – A non-Euclidean roguelike   roguetemple.com/z/hyper/... · Posted by u/stared
egeres · 2 months ago
Imagine this inside miegakure (https://miegakure.com/)
egeres commented on YouTube is taking down videos on performing nonstandard Windows 11 installs   old.reddit.com/r/DataHoar... · Posted by u/jjbinx007
breve · 2 months ago
The solution is to run Linux. KDE is a good desktop environment: https://kde.org/

90% of Windows games run on Linux: https://news.ycombinator.com/item?id=45736925

LibreOffice is an okay office suite (good enough for my purposes): https://www.libreoffice.org/

GIMP is a good image editor: https://www.gimp.org/

VLC is a good media player: https://www.videolan.org/vlc/

egeres · 2 months ago
Unfortunately, that last 10% of games is largely AAA competitive multiplayer titles with a massive user base that still depends on Windows to play them (Battlefield 6, Fortnite, any of the Call of Duty games from the last 8 years, League of Legends, GTA Online, Apex Legends, Rainbow Six Siege...).
egeres commented on Optical diffraction patterns made with a MOPA laser engraving machine [video]   youtube.com/watch?v=RsGHr... · Posted by u/emsign
hnthrowawayacct · 2 months ago
This guy always has an unreal amount of engineering lift for hobby videos. A treat to watch every time.
egeres · 2 months ago
It's on a completely different level. He has my favorite combo: incredibly detailed videos with fresh, complex engineering ideas.
egeres commented on NVIDIA DGX Spark In-Depth Review: A New Standard for Local AI Inference   lmsys.org/blog/2025-10-13... · Posted by u/yvbbrjdr
OliverGuy · 2 months ago
How representative is this platform of the bigger GB200 and GB300 chips?

Could I write code that runs on Spark and effortlessly run it on a big GB300 system with no code changes?

egeres · 2 months ago
All three (GB10, GB200 and GB300) are part of the Blackwell family, which means they have Compute Capability >= 10.X. You could potentially develop kernels to optimize MoE inference (given the large available unified memory, 128 GB, that makes the most sense to me) with CUDA >= 12.9 and then ship the fatbins to the "big boys". As many people have pointed out across the thread, the Spark doesn't really have the best perf/$; it's rather a small, portable platform for experimentation and development.
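To be concrete about the "ship the fatbins" part, here's a hedged sketch using PyTorch's extension loader; `moe_kernels.cu` is a hypothetical file, and the exact `sm_` targets for each chip are assumptions that should be checked against NVIDIA's compute-capability tables:

```python
import torch
from torch.utils.cpp_extension import load

# Sanity check: Blackwell-family parts report a compute capability major version >= 10.
major, minor = torch.cuda.get_device_capability()
print(f"compute capability: {major}.{minor}")

# Build one extension whose fatbin embeds SASS for a Blackwell target plus PTX,
# so the same binary can JIT forward onto other chips in the family.
moe = load(
    name="moe_kernels",
    sources=["moe_kernels.cu"],  # hypothetical kernel file
    extra_cuda_cflags=[
        "-gencode=arch=compute_100,code=sm_100",       # assumed datacenter Blackwell target
        "-gencode=arch=compute_100,code=compute_100",  # PTX fallback for other family members
    ],
    verbose=True,
)
```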
egeres commented on NVIDIA DGX Spark In-Depth Review: A New Standard for Local AI Inference   lmsys.org/blog/2025-10-13... · Posted by u/yvbbrjdr
harias · 2 months ago
Helps with the cooling is my guess. Increased surface area
egeres · 2 months ago
I'm pretty sure they just want to stay consistent with the rest of the DGX lineup, which has that "steel scrubber" finish on the hardware

(photo for reference: https://www.wwt.com/api-new/attachments/5f033e355091b0008017...)
