ar-nelson commented on Why are there so many rationalist cults?   asteriskmag.com/issues/11... · Posted by u/glenstein
socalgal2 · 4 months ago
> it assumes that soon LLMs will gain the capability of assisting humans

No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs

It doesn't require AI to be better than humans for AI to take over, because unlike a human, an AI can be cloned. You have 2 AIs, then 4, then 8... then millions. All able to do the same things as humans (the assumption of AGI). Build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.

PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.

ar-nelson · 4 months ago
> It does not assume that progress will be in LLMs

If that's the case then there's not as much reason to assume that this progress will occur now, and not years from now; LLMs are the only major recent development that gives the AI 2027 scenario a reason to exist.

> You have 2 AIs, then 4, then 8... then millions

The most powerful AI we have now is strictly hardware-dependent, which is why only a few big corporations have it. Scaling it up or cloning it is bottlenecked by building more data centers.

Now it's certainly possible that there will be a development soon that makes LLMs significantly more efficient and frees up all of that compute for more copies of them. But there's no evidence that even state-of-the-art LLMs will be any help in finding this development; that kind of novel research is just not something they're any good at. They're good at doing well-understood things quickly and in large volume, with small variations based on user input.

> But the thought experiment doesn't seem indefensible.

The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability in fields like software or research, using better algorithms and data alone.

Take https://ai-2027.com/research/takeoff-forecast as an example: it's the side page of AI 2027 that attempts to deal with these types of objections. It spends hundreds of paragraphs on what the impact of AI reaching a "superhuman coder" level would be on AI research, on the difference in effectiveness between an organization's average and best researchers, and on the impact of an AI closing that gap and matching the research effectiveness of the best humans.

But what goes completely unexamined and unjustified is the idea that AI will be capable of reaching "superhuman coder" level, or developing peak-human-level "research taste", at all, at any point, with any amount of compute or data. It's simply assumed that it will get there because the exponential curve of the recent AI boom will keep going up.

Skills like "research taste" can't be learned at a high level from books and the internet, even if, like ChatGPT, you've read the entire Internet and can see all the connections within it. They require experience, trial and error. Probably the same amount that a human expert would require, but even that assumes we can make an AI that can learn from experience as efficiently as a human, and we're not there yet.

ar-nelson commented on Why are there so many rationalist cults?   asteriskmag.com/issues/11... · Posted by u/glenstein
JohnMakin · 4 months ago
One of a few issues I have with groups like these is that they often confidently and aggressively spew a set of beliefs that on their face logically follow from one another, until you realize they are built on a set of axioms that are either entirely untested or outright nonsense. This is common everywhere, but it feels especially pronounced in communities like this. It also involves quite a bit of navel-gazing that makes me feel a little sick to participate in.

The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.

ar-nelson · 4 months ago
I find Yudkowsky-style rationalists morbidly fascinating in the same way as Scientologists and other cults, probably because they seem to genuinely believe they're living in a sci-fi story. I read a lot of their stuff, probably too much, even though I find it mostly ridiculous.

The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.

But AI being capable of researching or improving itself is not obvious; there are so many assumptions built into it!

- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.

- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?

- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)

Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum without concrete evidence could convince me that fast-takeoff superintelligence is possible.

ar-nelson commented on Show HN: Reactive: A React Book for the Reluctant (written by Claude)   github.com/cloudstreet-de... · Posted by u/DavidCanHelp
ar-nelson · 4 months ago
It's surprisingly funny for AI, but there's just so much of it... It has no sense of pacing. It repeats the same jokes for too long, without including bits of normalcy in between as a breather. Still, it's a lot better than I would have expected from something written 100% by AI, and I'm very curious what the prompt involved.
ar-nelson commented on Show HN: Octelium – FOSS Alternative to Teleport, Cloudflare, Tailscale, Ngrok   github.com/octelium/octel... · Posted by u/geoctl
ar-nelson · 6 months ago
For everyone who's having a hard time parsing what Octelium does, I found this page to be the clearest explanation: https://octelium.com/docs/octelium/latest/overview/how-octel...

It's clearer because, instead of starting with a massive list of everything you could do with Octelium (which is indeed confusing), it starts by explaining the core primitives Octelium is built on, and builds up from there.

And it actually looks pretty cool and useful! From what I can tell, the core functionality is:

- A VPN-like gateway that understands higher-level protocols, like HTTP or PostgreSQL, and can make fine-grained security decisions using the content of those protocols (see the toy sketch at the end of this comment)

- A cluster configuration layer on top of Kubernetes

And these two things combine to make, basically, a personal cloud. So, like any of the big cloud platforms, it does a million things and it's hard to figure out which ones you need at first. But it seems like the kind of system that could be used for a homelab, a small company that wants to keep cloud costs down, or a custom PaaS selling cloud functionality. Neat!
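
To make the first bullet concrete, here's a toy sketch in TypeScript/Node. This is not Octelium's actual API or configuration, just an illustration of the concept: a gateway that terminates an application protocol (HTTP here), checks who is asking for what, and only then forwards to the protected upstream, which a plain L3/L4 VPN can't do.

    // Toy illustration only -- not Octelium's API.
    import http from "node:http";

    // Hypothetical rule: "analyst" identities may only GET under /reports.
    function allowed(user: string, method: string, path: string): boolean {
      if (user === "admin") return true;
      if (user === "analyst") return method === "GET" && path.startsWith("/reports");
      return false;
    }

    http
      .createServer((req, res) => {
        // A real zero-trust gateway derives identity from mTLS or OIDC;
        // a plain header stands in for it here.
        const user = (req.headers["x-user"] as string) ?? "anonymous";
        if (!allowed(user, req.method ?? "", req.url ?? "")) {
          res.writeHead(403, { "content-type": "text/plain" });
          res.end("denied by policy");
          return;
        }
        // Forward the permitted request to the protected upstream service.
        const upstream = http.request(
          { host: "127.0.0.1", port: 8080, path: req.url, method: req.method, headers: req.headers },
          (up) => {
            res.writeHead(up.statusCode ?? 502, up.headers);
            up.pipe(res);
          }
        );
        req.pipe(upstream);
      })
      .listen(9090);

Octelium's real version of this works across protocols (HTTP, PostgreSQL, etc.) and drives the decisions from its policy configuration rather than hand-written code.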

ar-nelson commented on I Switched from Flutter and Rust to Rust and Egui   jdiaz97.github.io/greenbl... · Posted by u/jdiaz97
rossant · 6 months ago
I really like the immediate mode GUI (IMGUI) paradigm. The other day, I looked into whether any web-based IMGUI libraries existed. It seems that HTML and the DOM are designed so differently from IMGUI that such an approach doesn't really make sense, unfortunately, unless everything is rendered manually in a canvas, WebGL, or WebGPU, which brings its own set of challenges.
ar-nelson · 6 months ago
I really like Mithril.js (https://mithril.js.org/), which is, IMO, as close as it gets to web IMGUI. It looks a lot like React, but rendering happens manually, either on each event or with a manual m.redraw() call.
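
For a taste of the model, a minimal sketch against Mithril v2's documented API (m(), m.mount(), m.redraw()):

    import m from "mithril";

    // A component is just an object with a view() function,
    // re-run from scratch on every redraw.
    const Counter = {
      count: 0,
      view: () =>
        m("button", { onclick: () => Counter.count++ }, `Clicks: ${Counter.count}`),
    };

    m.mount(document.body, Counter);

    // Mithril redraws automatically after its own DOM event handlers fire.
    // For state changed outside them (timers, sockets), you trigger the
    // redraw yourself -- the IMGUI-like "render on demand" part:
    setInterval(() => {
      Counter.count++;
      m.redraw();
    }, 1000);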
ar-nelson commented on AGI is not multimodal   thegradient.pub/agi-is-no... · Posted by u/danielmorozoff
chrsw · 7 months ago
Before we try to build something as intelligent as a human maybe we should try to build something as intelligent as a starfish, ant or worm? Are we even close to doing that? What about a single neuron?
ar-nelson · 7 months ago
I find it interesting that this kind of "animal intelligence" is still so far away, while LLMs have become so good at "human intelligence" (language) that they can reliably pass the Turing Test.

I think that the LLMs we have today aren't so much artificial brains as they are artificial brain organs, like the speech center or vision center of a brain. We'd get closer to AGI if we could incorporate them with the rest of a brain, but we still have no idea how to even begin building, say, a motor cortex.

ar-nelson commented on Zod 4   zod.dev/v4... · Posted by u/bpierre
ar-nelson · 7 months ago
Obligatory shameless plug whenever Zod is posted: if you want similar, but much more minimal schema validation at runtime, with a JSON representation, try Spartan Schema: https://github.com/ar-nelson/spartan-schema
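
For context, this is the kind of runtime validation Zod provides (a minimal sketch using Zod's documented API); Spartan Schema targets the same job, but with schemas that are themselves plain JSON:

    import { z } from "zod";

    // Define the schema once; Zod derives both the runtime validator
    // and the static TypeScript type from it.
    const User = z.object({
      name: z.string(),
      age: z.number().int().nonnegative(),
    });
    type User = z.infer<typeof User>;

    // safeParse() validates unknown input without throwing.
    const result = User.safeParse(JSON.parse('{"name":"Ada","age":36}'));
    if (result.success) {
      console.log(result.data.name); // typed as string
    } else {
      console.log(result.error.issues);
    }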
ar-nelson commented on Affordable Wheel Based Refreshable Braille Display   jacquesmattheij.com/refre... · Posted by u/jacquesm
ar-nelson · 2 years ago
A thought I had while reading this: what about putting a flexible membrane above the wheels (or belt) with the dots? This would require the user to press down to feel the dots, but it would remove the issue of fingers or hair getting caught in the wheels.

u/ar-nelson

Karma: 497 · Cake day: August 21, 2014
About: meet.hn/city/us-Northampton

Interests: Open Source, Programming, Remote Work, Web Development, Gaming
