https://en.wikipedia.org/wiki/A_New_Kind_of_Science
But what exactly is the problem here, other than perhaps a very mechanical view of the universe (which he shares with many other authors), one in which it is hard to explain things like consciousness and other complex behaviors?
From my understanding, there are two ideas that Wolfram has championed: that Rule 110 is Turing machine equivalent (TME), and the principle of computational equivalence (PCE).
Rule 110 was shown to be TME by Cook (hired by Wolfram) [0], and Wolfram used it as, in my opinion, empirical evidence for the claim that Turing machine equivalence is the norm, not the exception (PCE).
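For readers who have not seen it, here is a minimal sketch (the function names are mine, not from ANKOS or Cook's paper) of what Rule 110 actually is: each cell's next state depends only on itself and its two neighbours, yet this tiny update rule is what Cook showed can emulate a universal Turing machine.

```python
RULE = 110  # Wolfram's numbering: bit k of 110 is the output for 3-cell neighbourhood k

def step(cells):
    """One synchronous update of a row of 0/1 cells (wrapping at the edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        k = (left << 2) | (centre << 1) | right  # neighbourhood encoded as a 3-bit index
        out.append((RULE >> k) & 1)
    return out

# Start from a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```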
At the time ANKOS was written, there was a popular idea that "complexity happens at the edge of chaos". PCE pushes back against that, effectively saying the opposite: you need a conspiracy to prevent Turing machine equivalence. I don't want to overstate the idea, but in my opinion PCE is important and provides some potentially deep insight.
But, as far as I can tell, it stops there. What results has Wolfram proved, or paid others to prove? What physical phenomena has Wolfram explained? Entanglement remains a mystery, the MOND vs. dark matter debate rages on, and others have made progress on busy beaver numbers, topology, Turing machine lower bounds, relations between run time and space, etc. The world of physics, computer science, mathematics, chemistry, biology, and most other fields continues on, using classical tools and newly developed ones independent of Wolfram, that have absolutely nothing to do with cellular automata.
Wolfram is building a "new kind of science" tool but has failed to provide any use cases in which the tool would actually help advance science.
like, the part where they get a_i log p_i — well, the sum of this over i gives the number, but it seemed like they were treating this as… a_i being a random variable associated to p_i, or something? I wasn't really clear on what they were doing with that.
Take an $n$, chosen from $[N,2N]$. Take its prime factorization $n = \prod_{j=1}^{k} q_j^{a_j}$. Take the logarithm: $\log(n) = \sum_{j=1}^{k} a_j \log(q_j)$.
Divide by $\log(n)$ to make the sum equal to $1$, and define the weights $w_j = a_j \log(q_j)/\log(n)$.
Think of $w_j$ as "probabilities". We can define an entropy of sorts as $H_{factor}(n) = - \sum_j w_j \log(w_j)$.
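To make the definition concrete, here is a small self-contained Python sketch (the names are mine, purely illustrative) that computes $H_{factor}(n)$ by trial division:

```python
from math import log

def prime_factorization(n):
    """Trial-division factorization of n >= 2: returns {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def factor_entropy(n):
    """H_factor(n) = -sum_j w_j log(w_j), with w_j = a_j log(q_j) / log(n), for n >= 2."""
    logn = log(n)
    weights = [a * log(q) / logn for q, a in prime_factorization(n).items()]
    return -sum(w * log(w) for w in weights)

print(factor_entropy(2**10))          # prime power: all weight on one prime, entropy 0
print(factor_entropy(2 * 3 * 5 * 7))  # several comparable factors: entropy approaches log(4)
```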
The mean entropy is, apparently:
$$ E_{n \in [N,2N]}\left[ H_{factor}(n) \right] = E_{n\in[N,2N]}\left[ - \sum_j w_j(n) \log(w_j(n)) \right] $$
Heuristics (such as Poisson-Dirichlet) suggest this converges to 1 as $N \to \infty$.
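If you want to sanity-check that heuristic numerically, a crude Monte Carlo estimate over $[N,2N]$ could look like the sketch below (it reuses factor_entropy from above; the sample size and the particular values of N are arbitrary choices of mine, and since any convergence would be on a log-log scale, modest N will only show the trend, not the limit):

```python
import random

def mean_factor_entropy(N, samples=2000, seed=0):
    """Monte Carlo estimate of E_{n in [N, 2N]}[H_factor(n)]."""
    rng = random.Random(seed)
    vals = [factor_entropy(rng.randint(N, 2 * N)) for _ in range(samples)]
    return sum(vals) / len(vals)

for N in (10**4, 10**6, 10**8):
    print(N, mean_factor_entropy(N))
```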
OpenAI tells me that the reason this might be interesting is that it gives information on whether a typical integer is built from one or a few dominant primes or from many smaller ones. A mean entropy of 1 is saying (apparently) that there is a dominant prime factor, but not an overwhelming one. (I guess) a mean tending to 0 would mean a single dominant prime, a mean tending to infinity would mean many small comparable factors (?), and oscillation would mean no stable structure.