Weird, even at 2048 I don’t think it should be using all 32 GB of your VRAM.
It stays around 26 GB at 512x512. I still haven't profiled the execution or looked closely at the architecture, but I'd assume it trades memory for speed by building caches for each inference step.
Is it the website or the icons that are generated with AI?