Readit News
tarruda commented on Emacs is my new window manager (2015)   howardism.org/Technical/E... · Posted by u/gpi
tarruda · 7 days ago
A VM is displayed as a window on the host OS and Emacs is the window manager within that VM window. What's the difference from running Emacs directly as an application on the host?
tarruda commented on Z-Image: Powerful and highly efficient image generation model with 6B parameters   github.com/Tongyi-MAI/Z-I... · Posted by u/doener
p-e-w · 8 days ago
The diffusion process is usually compute-bound, while transformer inference is memory-bound.

Apple Silicon is comparable in memory bandwidth to mid-range GPUs, but it’s light years behind on compute.

tarruda · 8 days ago
> but it’s light years behind on compute.

Is that the only factor though? I wonder if pytorch is lacking optimization for the MPS backend.
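A rough roofline check makes the compute-vs-memory distinction concrete. All the numbers below are assumptions pulled from public spec sheets, not measurements, and the arithmetic intensities are ballpark figures:

```python
# Back-of-envelope roofline check: is a workload memory- or compute-bound
# on a given chip? All specs below are rough public figures (assumptions).

def bound(flops_per_byte_needed, peak_tflops, bandwidth_gbs):
    """Return which resource limits throughput at a given arithmetic intensity."""
    # Ridge point of the roofline: FLOPs the chip can execute per byte moved.
    ridge = (peak_tflops * 1e12) / (bandwidth_gbs * 1e9)
    return "compute-bound" if flops_per_byte_needed > ridge else "memory-bound"

# Assumed specs: M1 Ultra ~21 FP16 TFLOPS at ~800 GB/s (ridge ~26 FLOPs/byte).
# Autoregressive decoding streams every weight once per token
# (~1 FLOP/byte at fp16), while a diffusion step is dominated by large
# matmuls that reuse each weight many times (hundreds of FLOPs/byte).
print(bound(1, 21, 800))    # decode step -> memory-bound
print(bound(500, 21, 800))  # diffusion step -> compute-bound
```

Under these assumptions, decoding sits far below the ridge point (bandwidth is what matters, where Apple Silicon is competitive), while diffusion sits far above it (raw compute matters, where it is not), which would make the gap expected even with a well-optimized MPS backend.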

tarruda commented on Z-Image: Powerful and highly efficient image generation model with 6B parameters   github.com/Tongyi-MAI/Z-I... · Posted by u/doener
vunderba · 8 days ago
I've done some preliminary testing with Z-Image Turbo in the past week.

Thoughts

- It's fast (~3 seconds on my RTX 4090)

- Surprisingly capable of maintaining image integrity even at high resolutions (1536x1024, sometimes 2048x2048)

- The adherence is impressive for a 6B parameter model

Some tests (2 / 4 passed):

https://imgpb.com/exMoQ

Personally I find it works better as a refiner model downstream of Qwen-Image 20b which has significantly better prompt understanding but has an unnatural "smoothness" to its generated images.

tarruda · 8 days ago
> It's fast (~3 seconds on my RTX 4090)

It is amazing how far behind Apple Silicon is when it comes to running non-language models.

Using the reference code from Z-image on my M1 ultra, it takes 8 seconds per step. Over a minute for the default of 9 steps.

tarruda commented on Mistral 3 family of models released   mistral.ai/news/mistral-3... · Posted by u/pember
constantcrying · 13 days ago
And implicit in this is that it compares very poorly to SOTA models. Do you disagree with that? Do you think these Models are beating SOTA and they did not include the benchmarks, because they forgot?
tarruda · 13 days ago
> Do you disagree with that?

I think that Qwen3 8B and 4B are SOTA for their size. The GPQA Diamond accuracy chart is weird: both Qwen3 8B and 4B have higher scores, so they used this weird chart where the x axis shows the number of output tokens. I don't see the point of this.

tarruda commented on Mistral 3 family of models released   mistral.ai/news/mistral-3... · Posted by u/pember
constantcrying · 13 days ago
The lack of the comparison (which absolutely was done), tells you exactly what you need to know.
tarruda · 13 days ago
Here's what I understood from the blog post:

- Mistral Large 3 is comparable with the previous Deepseek release.

- Ministral 3 LLMs are comparable with older open LLMs of similar sizes.

tarruda commented on DeepSeek-v3.2: Pushing the frontier of open large language models [pdf]   huggingface.co/deepseek-a... · Posted by u/pretext
hasperdi · 13 days ago
and can be faster if you can get an MOE model of that
tarruda · 13 days ago
Deepseek is already an MoE
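For context, the point of an MoE is that a gate picks only a few experts per token, so the active parameter count is a fraction of the total. A minimal illustrative sketch of top-k routing (not DeepSeek's actual router, which also uses shared experts and load balancing):

```python
# Illustrative Mixture-of-Experts routing sketch: a gate scores all experts
# for a token, but only the top-k actually run, so most expert parameters
# are never read for any given token.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_logits, k=2):
    """Pick the top-k experts and renormalize their gate weights."""
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return {i: probs[i] / total for i in top}

# 8 hypothetical experts, one token's gate logits; only 2 experts fire.
weights = route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2)
print(weights)  # experts 1 and 4 selected; the other 6 do no work
```

This sparsity is why an MoE decodes faster than a dense model of the same total size: per token it only has to read the active experts' weights from memory.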
tarruda commented on DeepSeek-v3.2: Pushing the frontier of open large language models [pdf]   huggingface.co/deepseek-a... · Posted by u/pretext
TIPSIO · 13 days ago
It's awesome that stuff like this is open source, but even if you have a basement rig with 4 NVIDIA GeForce RTX 5090 graphic cards ($15-20k machine), can it even run with any reasonable context window that isn't like a crawling 10/tps?

Frontier models are far exceeding even the most hardcore consumer hobbyist requirements. This is even further

tarruda · 13 days ago
You can run at ~20 tokens/second on a 512GB Mac Studio M3 Ultra: https://youtu.be/ufXZI6aqOU8?si=YGowQ3cSzHDpgv4z&t=197

IIRC the 512GB Mac Studio is about $10k
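That ~20 tok/s figure roughly matches a bandwidth estimate. Treating decode as memory-bound, tokens/second is about bandwidth divided by the bytes of *active* parameters read per token. The specs below are assumptions (M3 Ultra ~819 GB/s, DeepSeek-V3-class MoE ~37B active params), not measurements:

```python
# Rough sanity check: decode speed of a memory-bound MoE is approximately
# memory bandwidth / bytes of active parameters read per token.
# All hardware and model figures here are assumed, not measured.

def est_tps(bandwidth_gbs, active_params_b, bytes_per_param):
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / bytes_per_token

print(round(est_tps(819, 37, 1.0), 1))  # 8-bit weights -> ~22 tok/s
print(round(est_tps(819, 37, 0.5), 1))  # 4-bit weights -> ~44 tok/s
```

The 8-bit estimate lands right around the ~20 tok/s seen in the video, which is consistent with the run being bandwidth-limited rather than compute-limited.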

tarruda commented on Leak confirms OpenAI is preparing ads on ChatGPT for public roll out   bleepingcomputer.com/news... · Posted by u/fleahunter
lenkite · 16 days ago
Only a short matter of time before agentic tools start serving ads too - paying user or not. You want to refactor your codebase ? No issue - taking 30 seconds - please view this ad meanwhile.
tarruda · 16 days ago
Only a matter of time before using coding agents with local LLMs is a viable alternative.
tarruda commented on Gemini CLI tips and tricks for agentic coding   github.com/addyosmani/gem... · Posted by u/ayoisaiah
3578987532688 · 18 days ago
My tip: Move away from Google to an LLM that doesn't respond with "There was a problem getting a response" 90% of the time.
tarruda · 18 days ago
I had a terrible first impression with Gemini CLI a few months ago when it was released because of the constant 409 errors.

With Gemini 3 release I decided to give it another go, and now the error changed to: "You've reached the daily limit with this model", even though I have an API key with billing set up. It wouldn't let me even try Gemini 3 and even after switching to Gemini 2.5 it would still throw this error after a few messages.

Google might have the best LLMs, but its agentic coding experience leaves a lot to be desired.

tarruda commented on Show HN: OCR Arena – A playground for OCR models   ocrarena.ai/battle... · Posted by u/kbyatnal
tarruda · 20 days ago
Interesting that the 8B model of the Qwen3-VL family ranks 9th, above a few proprietary models. This thing can run locally with llama.cpp on modest hardware.

u/tarruda

Karma: 2689 · Cake day: April 10, 2013

About
[ my public key: https://keybase.io/tarruda; my proof: https://keybase.io/tarruda/sigs/LfzoAvuAtqMKfg4heD0NRvBBrY8p1U4AFdWg_LGswnQ ]