Readit News


throwa356262 commented on Bitcoin tumbles below $70K, heavy losses in cryptocurrencies in last three weeks   bloomberg.com/news/articl... · Posted by u/heresie-dabord
throwa356262 · 16 hours ago
Last time I heard Bitcoin was in free fall and doomed, it was at $18K...

(I don't own any bitcoin and believe the world would be a better place without cryptocurrencies.)

throwa356262 commented on Why E cores make Apple silicon fast   eclecticlight.co/2026/02/... · Posted by u/ingve
nerdsniper · 17 hours ago
I do believe Apple is still the fastest single-core (M5, A19 Pro, and M3 Ultra leading), which still matters for a shocking number of my workloads. But only the M5 has any noticeable gap vs Intel (~16%). Also, the rankings are a bit gamed because AMD and Intel put out a LOT of SKUs that are nearly the same product, so whenever they're "winning" on a benchmark they take up a bunch of slots right next to each other even though they're all basically the exact same chip.

Also, nearly all of the top 50 multi-core benchmark slots are taken up by Epyc and Xeon chips. For desktop/laptop chips that aren't Threadripper, Apple still leads with the 32-core M3 Ultra in the multi-core PassMark benchmark. The usual caveats about benchmarks not being representative of any actual workload still apply, of course.

And Apple does lag behind in multi-core benchmarks for laptop chips, since the M3 Ultra is not offered in a laptop form factor - but the M3 Ultra does beat every AMD/Intel laptop chip in multi-core benchmarks as well.

throwa356262 · 16 hours ago
Even at the time of announcement the M5 was not the fastest chip. Not even on single-core benchmarks, where Apple usually shines due to the design choice of having fewer but more powerful cores (AMD, for example, does the opposite). On Geekbench, for instance, the Core i9-14900KS and Core Ultra 9 285K were faster.

The gap was not huge, maybe 3%. You can obviously pick and choose your benchmarks until you find one where "your" CPU happens to be the best.
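
For concreteness, here is a toy calculation of what a ~3% single-core gap looks like; the chip names are the ones above but the scores are made up for illustration, not actual Geekbench results:

    # Hypothetical single-core scores, purely illustrative
    scores = {
        "Core Ultra 9 285K": 3450,
        "Core i9-14900KS": 3420,
        "Apple M5": 3350,
    }
    best = max(scores.values())
    for chip, score in scores.items():
        gap = (best - score) / best * 100
        print(f"{chip}: {score} ({gap:.1f}% behind the leader)")

A ~100-point difference on scores around 3400 is roughly that 3% - visible on a chart, irrelevant in practice.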

throwa356262 commented on Why E cores make Apple silicon fast   eclecticlight.co/2026/02/... · Posted by u/ingve
ksec · 17 hours ago
> First of all, Apple CPUs are not the fastest. In fact, the top 20 fastest CPUs right now are probably an AMD-and-Intel-only affair.

You are comparing a 256-core AMD Zen6c to what? An M4 Max?

When people say CPU they mean a CPU core, and in terms of raw speed, Apple's CPUs hold the fastest single-core benchmarks.

throwa356262 · 17 hours ago
The 16-core M4 Pro is #13 among laptops:

https://www.cpubenchmark.net/laptop.html#cpumark

throwa356262 commented on Why E cores make Apple silicon fast   eclecticlight.co/2026/02/... · Posted by u/ingve
cj · 17 hours ago
For me it's things like boot speed: how long it takes to restart the computer, or to log out and log back in with all my apps opening.

Mac on Intel felt about 2x slower at these basic functions. (I don't have real data points.)

The Intel Mac had lag when opening apps. The Apple silicon Mac is instant and always responsive.

No idea how that compares to Linux.

throwa356262 · 17 hours ago
Some of that can be attributed to faster IO.

Something else to consider: a Chromebook on Arm boots significantly faster than the equivalent Intel one. Yes, nowadays MediaTek's latest CPUs wipe the floor with Intel's N-whatever, but it has been like this since the early days, when the Arm versions were relatively underpowered.

Why? I have no idea.

throwa356262 commented on Why E cores make Apple silicon fast   eclecticlight.co/2026/02/... · Posted by u/ingve
roomey · 17 hours ago
Genuine question: when people talk about Apple silicon being fast, is the comparison to Windows Intel laptops, or to Intel Macs?

Because, running a Linux Intel laptop, even with CrowdStrike and a LOT of corporate-ware, there is no slowness.

When blogs talk about "fast" like this, I always assumed it meant heavy lifting such as video editing or AI stuff, not just day-to-day regular stuff.

I'm confused: is there a speed difference in day-to-day corporate work between new Macs and new Linux laptops?

Thank you

throwa356262 · 17 hours ago
First of all, Apple CPUs are not the fastest. In fact, the top 20 fastest CPUs right now are probably an AMD-and-Intel-only affair.

Apple's CPUs are the most power-efficient, however, due to a bunch of design and manufacturing choices.

But to answer your question: yes, Windows 11 with modern security crap feels 2-3x slower than vanilla Linux on the same hardware.

throwa356262 commented on Why I Joined OpenAI   brendangregg.com/blog/202... · Posted by u/SerCe
selfawareMammal · 2 days ago
> it's not just about saving costs – it's about saving the planet. I have joined OpenAI to work on this challenge directly.

I couldn't go on reading.

throwa356262 · 2 days ago
Reminds me of the TechCrunch episode of the TV show Silicon Valley. Everyone was there to make big bucks, but they all collectively pretended they were doing their work for the good of humankind.

This guy and Rob Pike should have a talk.

throwa356262 commented on Monty: A minimal, secure Python interpreter written in Rust for use by AI   github.com/pydantic/monty... · Posted by u/dmpetrov
throwa356262 · 2 days ago
I really like this!

Claude Code always resorts to running small Python scripts to test ideas when it gets stuck.

Something like this would mean I don't need to approve every single experiment it performs.
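
To illustrate the kind of throwaway experiment I mean (a made-up example, not anything from Monty's docs), it's usually a tiny self-contained script like this:

    # Hypothetical scratch script of the sort an agent writes to test an idea:
    # does this regex actually match ISO-style timestamps?
    import re

    pattern = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}$")
    for sample in ["2026-01-07T12:34:56", "2026-1-7 12:34", "not a date"]:
        print(sample, "->", bool(pattern.match(sample)))

Nothing in a script like that needs the filesystem or network, which is exactly why running it in a locked-down interpreter instead of approving each run by hand is appealing.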

throwa356262 commented on Introducing the Developer Knowledge API and MCP Server   developers.googleblog.com... · Posted by u/gfortaine
throwa356262 · 2 days ago
I need to give this a try, but nowadays I am reluctant to fire up Gemini CLI due to its insane appetite for tokens.

It doesn't matter if your LLM's in/out tokens are a bit cheaper than competitors' when you use 3x as many on every prompt. Maybe Google should focus on addressing that first?
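
Back-of-the-envelope, with made-up prices and token counts just to make the point (none of these numbers are real pricing):

    # Hypothetical: a model that is cheaper per token but burns 3x the tokens
    # per prompt still ends up more expensive per prompt.
    price_cheap = 1.00           # $ per million tokens (assumed)
    price_other = 1.50           # $ per million tokens (assumed)
    tokens_cheap = 3 * 40_000    # 3x token appetite per prompt (assumed)
    tokens_other = 40_000

    print(f"cheaper-per-token model: ${tokens_cheap / 1e6 * price_cheap:.2f} per prompt")
    print(f"pricier-per-token model: ${tokens_other / 1e6 * price_other:.2f} per prompt")

With those assumed numbers the "cheaper" model costs $0.12 per prompt versus $0.06, so the per-token discount is wiped out twice over.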

throwa356262 commented on Evaluating and mitigating the growing risk of LLM-discovered 0-days   red.anthropic.com/2026/ze... · Posted by u/lebovic
throwa356262 · 2 days ago
I just tested this using Claude, and at least with 4.5 this does not seem to be possible. The context grows very quickly and the LLM gets lost and starts hallucinating. Maybe I am missing some key ingredient here?

Of course, if you have a large team of AI and security experts and an unlimited token budget, things can look different.

u/throwa356262

Karma: 45 · Cake day: January 7, 2026