Readit News
marvin-hansen commented on Thoughts on Mechanical Keyboards and the ZSA Moonlander   masteringemacs.org/articl... · Posted by u/TheFreim
marvin-hansen · 3 months ago
I got a Moonlander for programming. The default tenting is indeed a lame joke, so I got the platform, which is made out of steel(!). It's heavy, but rock solid and stable. I mapped commonly used Fn-key combos onto the number row as long presses, i.e. cmd-Fn-4 is a long press on 4. The web UI makes this dead simple to set up and customize. That said, I read a tip from the guy who built the Svalboard: put the keyboard on a tray below the desk. I actually did that and, man, it was a revelation. I have one of those motorized desks with adjustable height, and with the tray the Moonlander now sits roughly level with the armrests of the chair. It reduced the tension in my shoulders noticeably. A vastly improved typing experience.
marvin-hansen commented on Hermes 4   hermes4.nousresearch.com/... · Posted by u/sibellavia
marvin-hansen · 4 months ago
Complete frustration to use. Yes, it's a bit more considerate; that claim is 100% true. They just didn't mention that Hermes has zero ability to add context. Meaning, instead of uploading a relevant PDF or text file, you either copy-paste into the chat box or explain it in dialogue for the next three hours. The thought process takes forever. Complete waste of time.
marvin-hansen commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory
marvin-hansen · 6 months ago
Okay, I read the abstract and the intro. Recently, in the paper

"What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models"

your thesis about AI's lack of capacity to abstract, or at least to extract understanding from noisy data, was largely confirmed experimentally. I am uncertain about the exact mechanics, though, because as they used LLMs, it's not transparent what happened internally that led to the constant failure to abstract the concept despite ample predictive power. One interesting experiment was the introduction of an oracle that literally enabled the LLM to solve a task that was previously impossible without it, which means it's at least possible that LLMs can reconstruct known rules. They just can't find new ones.

On a more fundamental level, I am not so sure why these experiments and mathematical proofs are still being made, since Judea Pearl already established about seven years ago, in "Theoretical Impediments to Machine Learning", that all correlation-based methods are doomed because they fail to understand anything. His point about causality is well placed, but it will not solve the problem either.

The question I have, though: if we ignore all existing methods for a moment, what makes you so sure that AGI is really mathematically impossible? Suppose some advancement in quantum computing allowed reconstructing incomplete information; would your assertion still hold true?

https://arxiv.org/abs/2507.06952
https://arxiv.org/abs/1801.04016
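As a side note on the thread's topic: Kolmogorov complexity K(x) is uncomputable, but any compressor gives an upper bound on it, which is how such arguments are usually made concrete in practice. A minimal sketch using Python's standard `zlib` (the helper name `kc_upper_bound` is mine, for illustration):

```python
import zlib
import random

def kc_upper_bound(data: bytes) -> int:
    """Upper bound on Kolmogorov complexity via compression.

    K(x) itself is uncomputable; len(zlib.compress(x)) + O(1)
    bounds it from above, since the compressed stream plus the
    decompressor is a program that outputs x.
    """
    return len(zlib.compress(data, 9))

random.seed(0)
# Highly regular data: a short program ("repeat 'ab' 500 times") generates it.
structured = b"ab" * 500
# Pseudo-random data: nearly incompressible, bound stays close to raw length.
noisy = bytes(random.getrandbits(8) for _ in range(1000))

print(kc_upper_bound(structured))  # far below 1000
print(kc_upper_bound(noisy))       # close to (or above) 1000
```

The gap between the two bounds is the compressible structure, i.e. exactly the kind of regularity an abstraction-capable learner is supposed to find in noisy data.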

u/marvin-hansen

Karma: 83 · Cake day: May 29, 2017
About
meet.hn/city/-8.6831728,115.2564626/Sanur

Socials: - github.com/marvin-hansen

Interests: AI/ML, Open Source, Programming
