Readit News
fotcorn commented on MacBook Pro with M5 Pro and M5 Max   apple.com/newsroom/2026/0... · Posted by u/scrlk
dirk94018 · 11 days ago
On an M4 Max with 128GB we're seeing ~100 tok/s generation on a 30B parameter model in our from-scratch inference engine. Very curious what the "4x faster LLM prompt processing" translates to in practice. Smallish, local 30B-70B inference is genuinely usable territory for real dev workflows, not just demos. It will require staying plugged in, though.
fotcorn · 11 days ago
The memory bandwidth on the M4 Max is 546 GB/s, and on the M5 Max it's 614 GB/s, so not a huge jump.

The new tensor cores, sorry, "Neural Accelerator", only really help with prompt processing, a.k.a. prefill, and not with token generation. Token generation is memory-bound.

Hopefully the Ultra version (if it exists) has a bigger jump in memory bandwidth and maximum RAM.
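The memory-bound argument above can be sketched as a back-of-envelope calculation: each generated token has to stream the active model weights through memory at least once, so bandwidth divided by weight size gives a rough ceiling on decode speed. The function name, the 4-bit quantization assumption, and the example numbers below are illustrative, not measured figures.

```python
def decode_ceiling_tok_s(bandwidth_gb_s: float,
                         active_params_billions: float,
                         bytes_per_param: float) -> float:
    """Rough upper bound on tokens/s when decoding is memory-bandwidth-bound:
    bandwidth divided by the bytes of weights read per token."""
    weight_gb = active_params_billions * bytes_per_param
    return bandwidth_gb_s / weight_gb

# Dense 30B model at 4-bit quantization (~0.5 bytes/param):
m4_max = decode_ceiling_tok_s(546, 30, 0.5)  # ~36 tok/s ceiling
m5_max = decode_ceiling_tok_s(614, 30, 0.5)  # ~41 tok/s ceiling
```

Note this is only a ceiling for a dense model; a figure like ~100 tok/s on a "30B" model would be consistent with a mixture-of-experts model whose active parameter count per token is much smaller than 30B.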

fotcorn commented on Show HN: GitHub "Lines Viewed" extension to keep you sane reviewing long AI PRs   chromewebstore.google.com... · Posted by u/somesortofthing
fotcorn · a month ago
Related to this, how do you get the comments you add in a review back into your agent (Claude Code, Cursor, Codex, etc.)? Everybody talks about AI doing the code review, but I want a solution for the inverse: I review AI code, and it should then go away, fix all the comments, and update the PR.
fotcorn commented on Ask HN: Do you have any evidence that agentic coding works?    · Posted by u/terabytest
sifar · 2 months ago
I am really surprised by this. While I know it can generate correct SIMD code, getting a performant version is non-trivial, especially for RVV, where the instruction choices and the underlying microarchitecture significantly impact performance.

IIRC, depthwise convolution is memory-bound, so the bar might be lower. Perhaps you can try something with higher compute intensity, like a matrix multiply. I have observed that it trips up on the columnar accesses for SIMD.
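The "compute intensity" point can be made concrete with a naive FLOPs-per-byte count (a standard roofline-style estimate; the counting below is a simplification that moves each array once and ignores caches):

```python
def matmul_arith_intensity(n: int, bytes_per_elem: int = 4) -> float:
    """FLOPs per byte for an n x n x n matmul: 2*n^3 FLOPs, with the
    three n x n matrices each moved once. Grows linearly with n."""
    flops = 2 * n ** 3
    bytes_moved = 3 * n * n * bytes_per_elem
    return flops / bytes_moved

def depthwise_arith_intensity(k: int, bytes_per_elem: int = 4) -> float:
    """FLOPs per byte for depthwise conv with a k x k kernel: ~2*k*k
    FLOPs per output element against roughly one input read plus one
    output write. Constant, regardless of problem size."""
    flops = 2 * k * k
    bytes_moved = 2 * bytes_per_elem
    return flops / bytes_moved
```

So a large matmul keeps the vector units busy (intensity ~85 FLOPs/byte at n=512) while a 3x3 depthwise conv stays around 2 FLOPs/byte and hits the memory wall first.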

fotcorn · 2 months ago
I think the ability to actually run the code on the target helped a lot with understanding and optimizing for the specific microarchitecture. Quite a few of the ideas turned out not to be optimal and were discarded.

It's also important to have a few test cases the agent can quickly check against: it will often generate wrong code, but if that is easily detectable, the agent can fix it and continue quickly.

fotcorn commented on Ask HN: Do you have any evidence that agentic coding works?    · Posted by u/terabytest
fotcorn · 2 months ago
I used Claude Opus 4.5 inside Cursor to write RISC-V Vector/SIMD code, specifically depthwise convolution and regular convolution layers for a CNN.

I started out by letting it write a naive C version without intrinsics and validated it against the PyTorch version.

Then I asked it (and two other models, Gemini 3.0 and GPT 5.1) to come up with some ideas on how to make it faster using SIMD vector instructions and write those down as markdown files.

Finally, I started the agent loop by giving Cursor those three markdown files, the naive C code, some more information on how to compile the code, and an SSH command it could use to upload the program and test it.

It then tested a few different variants, ran them on the target (a RISC-V SBC, the Orange Pi RV2) to check whether they improved the runtime, and continued from there. It did this 10 times, until it arrived at the final version.

The final code is very readable, and faster than anything from any other library or compiler that I have found so far. I think the clear guardrails (the output has to exactly match the reference output from PyTorch, and performance must be better than before) make this work very well.
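The accept/reject guardrail from the loop above can be sketched as a small predicate. This is a hypothetical sketch, not the actual harness: the function name and tolerance are mine, and the real check compared against the PyTorch reference output and the previous best runtime.

```python
def accept_variant(candidate_out, reference_out,
                   candidate_ms: float, best_ms: float,
                   tol: float = 1e-5) -> bool:
    """Keep a SIMD variant only if it is both correct (matches the
    reference output within a tolerance) and faster than the current best."""
    if len(candidate_out) != len(reference_out):
        return False
    correct = all(abs(a - b) <= tol
                  for a, b in zip(candidate_out, reference_out))
    return correct and candidate_ms < best_ms
```

A check this cheap is what lets the agent iterate quickly: a wrong or slower variant is rejected immediately, and only variants that pass both guardrails become the new baseline.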

fotcorn commented on Cartographers have been hiding illustrations inside Switzerland’s maps (2020)   eyeondesign.aiga.org/for-... · Posted by u/mhb
fotcorn · 3 months ago
Seems like the hiker at the bottom of the article was introduced in 1997 and removed only in 2017: https://s.geo.admin.ch/be66brq5oby9
fotcorn commented on JetKVM – Control any computer remotely   jetkvm.com/... · Posted by u/elashri
dmitrygr · 4 months ago
I've been satisfied with the NanoKVM Lite. Cheap and does what I want.
fotcorn · 4 months ago
I have the PCIe version of NanoKVM, and I am also happy with it.

The big advantage of the PCIe version is that it does not take up space on the desk, and all the cables for ATX power control are inside the PC case.

Full-sized HDMI is nice; the only limitation here is the 1080p resolution. 1440p or higher would allow mirroring the main monitor's output to the NanoKVM, but this is probably a weird use case anyway.

fotcorn commented on Qwen3-Coder: Agentic coding in the world   qwenlm.github.io/blog/qwe... · Posted by u/danielhanchen
danielhanchen · 8 months ago
Ye the model looks extremely powerful! I think they're also maybe making a small variant as well, but unsure yet!
fotcorn · 8 months ago
It says that there are multiple sizes in the second sentence of the huggingface page: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct

You won't be out of work creating ggufs anytime soon :)

fotcorn commented on Let me pay for Firefox   discourse.mozilla.org/t/l... · Posted by u/csmantle
fiji-flo · 8 months ago
If you want to help fund Firefox, you can for now just pay for a product https://www.mozilla.org/en-US/products/ and not use it (if you live in a country Mozilla accepts money from). Be vocal that you are doing this to support Firefox (e.g. reply in the Discourse thread). I personally recommend MDN for this, as it's currently the closest to Firefox, in that it's part of the Firefox organization within Mozilla. I would hope that down the road we could just pay directly for Firefox, but we need to put our money where our mouth is.
fotcorn · 8 months ago
The VPN product is very good; it's basically a thin wrapper around Mullvad, arguably the best VPN on the planet right now, at least from a privacy standpoint.
fotcorn commented on Absolute Zero: Reinforced Self-Play Reasoning with Zero Data   arxiv.org/abs/2505.03335... · Posted by u/leodriesch
QuadmasterXLII · 10 months ago
For everyone who says “modern incentives forbid publishing negative results,” let this stand as a counterexample!
fotcorn · 10 months ago
Why do you think it's a negative result? The table on page 9 shows great results.

u/fotcorn
Karma: 802 · Cake day: February 6, 2013
Contact: <hnusername>@gmail