cubefox commented on Why E cores make Apple silicon fast   eclecticlight.co/2026/02/... · Posted by u/ingve
dagmx · 9 hours ago
Which still come out behind in everything other than multi-core, while using substantially more power.

Those Panther Lake comparisons pit the top-end PTL against the base M series. If they were compared against comparable SKUs, they'd be even further behind.

cubefox · 7 hours ago
The article said the M5 has significantly higher single-core CPU performance, while Panther Lake has significantly higher GPU performance. The Panther Lake devices had OLED screens, which consume significantly more power than LCDs, so they were at a disadvantage.

This was all mentioned in the article.

cubefox commented on Why E cores make Apple silicon fast   eclecticlight.co/2026/02/... · Posted by u/ingve
tyleo · 14 hours ago
These processors are good all around. The P cores kick butt too.

I ran a performance test back in October comparing M4 laptops against high-end Windows desktops, and the results showed the M-series chips coming out on top.

https://www.tyleo.com/blog/compiler-performance-on-2025-devi...

cubefox · 14 hours ago
Here is a more recent comparison with Intel's new Panther Lake chips: https://www.tomsguide.com/computing/cpus/panther-lake-is-int...

cubefox commented on OpenClaw is changing my life   reorx.com/blog/openclaw-i... · Posted by u/novoreorx
cubefox · 19 hours ago
From his previous blog post:

> Generally, I believe [Rabbit] R1 has the potential to change the world. This is a thought that seldom comes to my mind, as I have seen numerous new technologies and inventions. However, R1 is different; it’s not just another device to please a certain niche. It’s meticulously designed to serve one significant goal for all people: to improve lifestyle in the digital world.

cubefox commented on The time I didn't meet Jeffrey Epstein   scottaaronson.blog/?p=953... · Posted by u/pfdietz
moralestapia · 3 days ago
Excerpt from one of the related emails (written by JE):

"great proposal„ however, it needs to be more around deception alice -bob. communication. virus hacking, battle between defense and infiltration.. computation is already looked at in various fields. camoflauge , mimickry, signal processing, and its non random nature, misinformation. ( the anti- truth - but right answer for the moment ).. computation does not involve defending against interception, a key area for biological systems, if a predator breaks the code, it usually can accumulate its preys free energy at a discount . self deception, ( necessary to prevent accidental disclosure of inate algorithms. WE need more hackers , also interested in biological hacking , security, etc."

Damn! I once worked with a guy who was exactly like this. It wasn't just his writing; his style of speech IRL was like that too: incoherent, loosely bound ideas around one topic. Ironically, the harder he tried to appear smart, the more idiotic the things that spewed out of his mouth.

We were working with GPUs, trying to find ways to optimize GPU code. He called the team to an informal meeting and told us, dead serious, "Why can't you just like, ..., remove the GPUs from the server, then crack them open, turn them outside out and put them back in to see if they perform better?" :O

I don't know if this has a name; I just thought the guy had schizophrenia. So glad I moved on from that place.

cubefox · 3 days ago
Sounds like he was confused but genuinely interested in cryptology, which contradicts the cynical narrative that he only donated for social reasons.

cubefox commented on Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation   arxiv.org/abs/2602.00294... · Posted by u/fheinsen
korbip · 4 days ago
This was done already here as well: https://arxiv.org/abs/2507.04239
cubefox · 4 days ago
Sounds interesting, but...

> these models dominate both exponential attention and linear attention at long-context training

There is no exponential attention; standard attention is quadratic. Strange mistake.
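
For reference, here's a minimal NumPy sketch (my own illustration, not code from either paper) of where the quadratic cost of standard attention comes from, and of the reassociation trick behind "linear attention". The feature map `phi` is a generic positive placeholder, not the Taylor feature map from the paper:

```python
import numpy as np

def standard_attention(Q, K, V):
    # The score matrix S is (n, n): time and memory grow quadratically in n.
    S = (Q @ K.T) / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))  # row-wise softmax
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V                                   # (n, d)

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Reassociating (phi(Q) @ phi(K).T) @ V as phi(Q) @ (phi(K).T @ V)
    # never materializes an (n, n) matrix: the (d, d) state KV makes the
    # cost linear in n for a fixed head dimension d.
    KV = phi(K).T @ V                              # (d, d)
    Z = phi(Q) @ phi(K).sum(axis=0)                # (n,) normalizers
    return (phi(Q) @ KV) / Z[:, None]
```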

cubefox commented on Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation   arxiv.org/abs/2602.00294... · Posted by u/fheinsen
fheinsen · 4 days ago
As the error from the linear approximation approaches a magnitude similar to the numerical error of the quadratic computation, don't the two become comparable in practice?

I ask because in practice, for inference, attention is typically computed with low-precision (4-bit, 8-bit, 16-bit) floats.

Numerical error may, in fact, be a key factor in why quadratic attention, in practice, exhibits context rot as context gets longer, analogous to an RNN:

https://www.anthropic.com/engineering/effective-context-engi...
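
One toy way to probe that question (my own sketch: random Gaussian Q/K/V, with NumPy emulating fp16 end to end rather than a real fused low-precision kernel) is to measure how far half-precision attention drifts from an fp64 reference as the context length n grows:

```python
import numpy as np

def attention(Q, K, V, dtype=np.float64):
    # Run the whole computation in `dtype`; return fp64 for comparison.
    Q, K, V = (a.astype(dtype) for a in (Q, K, V))
    S = (Q @ K.T) * dtype(1.0 / np.sqrt(Q.shape[-1]))
    P = np.exp(S - S.max(axis=-1, keepdims=True))  # stabilized softmax
    P /= P.sum(axis=-1, keepdims=True)
    return (P @ V).astype(np.float64)

rng = np.random.default_rng(0)
d = 64
for n in (128, 512, 2048):  # growing context lengths
    Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
    ref = attention(Q, K, V)  # fp64 reference
    err = np.abs(attention(Q, K, V, np.float16) - ref).max()
    print(f"n={n:5d}  max |fp16 - fp64| = {err:.2e}")
```

The sketch only measures the drift; whether such numerical error actually drives context rot is the open question.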

cubefox · 4 days ago
That website says nothing about numerical error potentially causing context rot.
