Readit News
abetusk commented on What Is Ruliology?   writings.stephenwolfram.c... · Posted by u/helloplanets
chvid · 2 days ago
Someone mentioned his apparently failed earlier work ANKOS. I had to look that up - it is a 2002 book by Wolfram with seemingly similar ideas:

https://en.wikipedia.org/wiki/A_New_Kind_of_Science

But exactly what is the problem here? Other than perhaps a very mechanical view of the universe (which he shares with many other authors), in which it is hard to explain things like consciousness and other complex behaviors.

abetusk · a day ago
Wolfram has failed to live up to his promise of providing tools to make progress on fundamental questions of science.

From my understanding, there are two ideas that Wolfram has championed: that Rule 110 is Turing machine equivalent (TME), and the principle of computational equivalence (PCE).

Rule 110 was shown to be TME by Cook (hired by Wolfram) [0] and was used by Wolfram as, in my opinion, empirical evidence to support the claim that Turing machine equivalence is the norm, not the exception (PCE).
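
For anyone who hasn't played with it, Rule 110 is just an elementary cellular automaton. A minimal Python sketch (a toy illustration I'm adding here, nothing to do with Cook's universality construction or Wolfram's code) looks like this:

    # Toy Rule 110 run: the 8 bits of the number 110 encode the update for each
    # of the eight possible (left, center, right) neighborhoods.
    RULE = 110

    def step(cells):
        out = []
        n = len(cells)
        for i in range(n):
            neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
            out.append((RULE >> neighborhood) & 1)
        return out

    row = [0] * 31
    row[15] = 1                                  # single "on" cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = step(row)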

At the time of writing of ANKOS, there was a popular idea that "complexity happens at the edge of chaos". PCE pushes back against that, effectively saying the opposite: that you need a conspiracy to prevent Turing machine equivalence. I don't want to overstate the idea but, in my opinion, PCE is important and provides some, potentially deep, insight.

But, as far as I can tell, it stops there. What results has Wolfram proved, or paid others to prove? What physical phenomena has Wolfram explained? Entanglement still remains a mystery, the MOND vs. dark matter debate rages on, and others have made progress on busy beaver numbers, topology, Turing machine lower bounds, relations between run time and space, etc. The worlds of physics, computer science, mathematics, chemistry, biology, and most other fields continue on using classical tools, and newly developed ones independent of Wolfram, that have absolutely nothing to do with cellular automata.

Wolfram is building a "new kind of science" tool but has failed to provide any use cases where the tool would actually help advance science.

[0] https://en.wikipedia.org/wiki/Rule_110

abetusk commented on A programmer's guide to leaving GitHub   lord.io/leaving-github/... · Posted by u/stackptr
abetusk · 5 days ago
> I have four reasons ...

> First, for independent programmers, I think it's incredibly simple and straightforward to move your personal open source projects off of GitHub.

> Second, although you likely don't pay GitHub to host your open-source projects, they still make money from them!

> Third, GitHub's web interface has been in a steepening decline since the Microsoft acquisition ...

> Finally, I think open source communities, with roots in hacker culture from the 80s and 90s, form a particularly fertile soil for this sort of action.

I'm a programmer. I've set up Gogs and run various Git repos, remotely and locally. I understand how simple it is. Simplicity is not the issue.

I host many open source projects on Github, gratis, care of Microsoft. They make money from them? Excuse me while I clutch my pearls.

The web interface is nice enough that it sets the standard by which I judge other front-end GUI wrappers around Git. Is it in decline? I don't know, maybe, but it's still good enough from my perspective. Using GitLab or Sourcehut is painful. I'm glad they both exist, but their UIs, in my opinion, are not as good as GitHub's.

GitHub is, for me, about sociability. I'll go where the people are. I can host my open projects, repos, blog posts, etc. on a server I control, but that's not the point. I want people to see my projects, be able to participate in a meaningful way, and be social with other projects. In theory, all of this can happen on a private server. In practice, the people are what make the platform attractive.

There are decentralized suggestions in the post, which I appreciate, and I'd like to see more information on how to use them and build communities around them, as that's the only real alternative to centralized platforms that I can envision.

abetusk commented on P vs. NP and the Difficulty of Computation: A ruliological approach   writings.stephenwolfram.c... · Posted by u/tzury
soganess · 9 days ago
Can someone tell me what I am missing here?

This seems to suffer from a finite-size effect. Wolfram's machines have a tiny state space (s ≤ 4, k ≤ 3). For some classes of NP problems, this will be insufficient to encode complex algorithms, and is low-dimensional enough that it is unlikely to be able to encode hard ("worst case") instances of the problem class. The solution space simply cannot support them.

In this regime, hard problem classes only present easy instances; think random k-SAT below the satisfiability threshold, where algorithms like FIX (Coja-Oghlan) approximate the decision problem in polynomial time. In random k-SAT, the "hardness" cannot emerge away from the phase transition, and by analogy (watch my hand wave in the wind so free) I can imagine that hard instances would not exist at small scales. Almost like the opposite of the overlap gap property.

Wolfram's implicit counter-claim seems to be that the density of irreducibility among small machines approximates the density in the infinite limit (...or something? Via his "Principle of Computational Equivalence"), but I'm not following that argument. I am sure someone has brought this up to him! I just don't understand his response. Is there some way of characterizing or capturing the complexity floor of a given problem (for an NP-hard problem P, the reduced space needs to be at least as big as some S to, w.h.p., describe a few hard instances)?

abetusk · 9 days ago
I think you have it wrong. Wolfram's claim is that for a wide array of small (s,k) (including s <= 4, k <= 3), there's complex behavior and a profusion of (provably?) Turing machine equivalent (TME) machines. At the end of the article, Wolfram talks about awarding a prize in 2007 for a proof that (s=2,k=3) was TME.

The `s` stands for states and `k` for colors, without talking at all about tape length. One way to say "principle of computational equivalence" is that "if it looks complex, it probably is". That is, TME is the norm, rather than the exception.

If true, this probably means that you can make up for the clunky computational power of a small (s,k) machine by conditioning large swathes of the input tape to overcome the limitation. That is, you have unfettered access to the input tape and, with just a sprinkle of TME, you can eke out computation by fiddling with the input tape to get the (s,k) machine to run how you want.
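
To make the "tiny machine, big prepared tape" point concrete, here's a minimal (s,k) Turing machine simulator sketch. The transition table below is just the classic 2-state, 2-color busy beaver, purely for illustration (it's not one of Wolfram's machines); the control table stays tiny while the input tape can be conditioned however you like:

    def run(table, tape, state=0, head=0, max_steps=50):
        cells = dict(enumerate(tape))            # sparse tape, default color 0
        for _ in range(max_steps):
            color = cells.get(head, 0)
            write, move, state = table[(state, color)]
            cells[head] = write
            head += move
            if state is None:                    # halt state reached
                break
        lo, hi = min(cells), max(cells)
        return [cells.get(i, 0) for i in range(lo, hi + 1)]

    # (state, read color) -> (write color, head move, next state); None = halt.
    table = {
        (0, 0): (1, +1, 1), (0, 1): (1, -1, 1),
        (1, 0): (1, -1, 0), (1, 1): (1, +1, None),
    }
    print(run(table, [0, 0, 0, 0]))              # prints [1, 1, 1, 1]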

So, if finite-size scaling effects were actually in play, they would only work in Wolfram's favor. If there's a profusion of small TME (s,k) machines, one would probably expect computation to only get easier as (s,k) increases.

I think you also have the random k-SAT business wrong. There's this idea that "complexity happens at the edge of chaos" and I think this is pretty much clearly wrong.

Random k-SAT is, from what I understand, effectively almost surely polynomial-time solvable. Below the critical threshold, when an instance is almost surely satisfiable, I think something as simple as WalkSAT will find a solution. Above the threshold, when it's almost surely unsatisfiable, it's easy to determine in the negative that the instance is unsolvable (I'm not sure if DPLL works, but I think something does?). Near, or even "at", the threshold, my understanding is that something like survey propagation effectively solves this [0].
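
For what it's worth, a bare-bones WalkSAT-style local search looks something like this (my own sketch, nothing to do with the tuned solvers or the survey propagation work in [0]); clauses are lists of nonzero ints, with a negative sign meaning negation:

    import random

    def walksat(clauses, n_vars, max_flips=100000, p=0.5):
        assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
        satisfied = lambda lit: assign[abs(lit)] == (lit > 0)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(satisfied(l) for l in c)]
            if not unsat:
                return assign                        # satisfying assignment found
            clause = random.choice(unsat)
            if random.random() < p:
                v = abs(random.choice(clause))       # random-walk move
            else:
                # greedy move: flip whichever variable in the clause leaves
                # the fewest unsatisfied clauses overall
                def after_flip(v):
                    assign[v] = not assign[v]
                    count = sum(not any(satisfied(l) for l in c) for c in clauses)
                    assign[v] = not assign[v]
                    return count
                v = min((abs(l) for l in clause), key=after_flip)
            assign[v] = not assign[v]
        return None                                  # gave up; proves nothing either way

    # (x1 or not x2) and (not x1 or x3) and (x2 or x3)
    print(walksat([[1, -2], [-1, 3], [2, 3]], 3))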

k-SAT is a little clunky to work in, so you might take issue with my take on it being solvable, but consider something like Hamiltonian cycle on (Erdos-Renyi) random graphs: Hamiltonicity has a phase transition, just like k-SAT (and a host of other NP-Complete problems), yet it provably has an almost-sure polynomial-time algorithm to determine Hamiltonicity, even at the critical threshold [1].

There's some recent work on trying to choose "random" k-SAT instances from different distributions, and I think that's more promising for finding difficult random instances, but I'm not sure there's actually been a lot of work in that area [2].

[0] https://arxiv.org/abs/cs/0212002

[1] https://www.math.cmu.edu/~af1p/Texfiles/AFFHCIRG.pdf

[2] https://arxiv.org/abs/1706.08431

abetusk commented on P vs. NP and the Difficulty of Computation: A ruliological approach   writings.stephenwolfram.c... · Posted by u/tzury
abetusk · 9 days ago
To me, this reads like a profusion of empirical experiments without any cohesive direction or desire towards deeper understanding.
abetusk commented on Show HN: I'm building an AI-proof writing tool. How would you defeat it?   auth-auth.vercel.app/... · Posted by u/callmeed
ephou7 · 11 days ago
I used this:

    #!/usr/bin/python3
    import subprocess
    import time
    import random

    with open("/tmp/x") as f:
        t = f.read()

    for c in t:
        subprocess.call(["xdotool", "type", c])
        time.sleep(abs(random.gauss(0, 0.07)))

And pasted a random Hacker News comment:

Authenticity Score: 81 (Highly Authentic)

Words per minute: 162
Keystroke variance: 52ms
Paste attempts: 0
Window/tab switches: 4
Pauses (≥10s): 0
DOM manipulations: 0

You failed.

abetusk · 10 days ago
`xdotool` is awesome and this is the first I'm hearing of it. Thanks.

Do you have any other command line tool recommendations?

abetusk commented on Show HN: I'm building an AI-proof writing tool. How would you defeat it?   auth-auth.vercel.app/... · Posted by u/callmeed
callmeed · 10 days ago
That's cool, thanks for sharing.

Is there a way to detect this approach?

abetusk · 10 days ago
I think your approach is pretty much fundamentally flawed.

Put it this way: let's say someone recorded themselves typing in the paragraph that you presented, saving the keystrokes, pauses, etc. Now they replay it back, with all the pauses and keystrokes, maybe with `xdotool` as above. How could you possibly know the difference?
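
Concretely, the replay could be as simple as this sketch (assuming the recording was saved as one "<char><TAB><seconds of pause>" pair per line at /tmp/keylog.tsv; both the path and the log format are made up for illustration):

    import subprocess
    import time

    # Replay a hypothetical keystroke log, reproducing the recorded timing exactly.
    with open("/tmp/keylog.tsv") as f:
        for line in f:
            ch, delay = line.rstrip("\n").split("\t")
            time.sleep(float(delay))
            subprocess.call(["xdotool", "type", ch])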

Your method is playing a statistical game of key presses, pauses, etc. Anyone who understands your method will probably not only be able to create a distribution that matches what you expect, but could, in theory, also create something that looks completely inhuman yet still sneaks past your statistical tests.

abetusk commented on Two Twisty Shapes Resolve a Centuries-Old Topology Puzzle   quantamagazine.org/two-tw... · Posted by u/tzury
abetusk · 12 days ago
I'm no expert but, from what I understand, the idea is that they found two shapes (2D surfaces sitting in 3D space) that have the same mean curvature and metric but are geometrically distinct (not congruent, and not mirror images of each other). This is the first (non-trivial) pair of compact (finite) shapes that has been found.

In other words, if you're an ant on one of these surfaces and are using mean curvature and the metric to determine what the shape is, you won't be able to differentiate between them.
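
If I have the terminology right (this is my paraphrase of the standard "Bonnet pair" notion, not the paper's exact statement), the property can be written as:

    f_1, f_2 \colon \Sigma \to \mathbb{R}^3, \qquad
    I_{f_1} = I_{f_2}, \qquad H_{f_1} = H_{f_2}, \qquad
    f_2 \neq A \circ f_1 \;\text{ for every rigid motion } A \text{ of } \mathbb{R}^3

That is, the two immersions of the same surface induce the same metric and the same mean curvature, yet no rigid motion of space carries one onto the other.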

The paper has some more pictures of the surfaces [0]. Wikipedia's been updated even though the result is from Oct 2025 [1].

[0] https://link.springer.com/article/10.1007/s10240-025-00159-z

[1] https://en.wikipedia.org/wiki/Bonnet_theorem

u/abetusk

Karma: 4968 · Cake day: July 24, 2014

About: https://abetusk.github.io