My favorite quotes:
when provided with some of the responses from other physicists regarding his work, Wolfram is singularly unenthused. “I’m disappointed by the naivete of the questions that you’re communicating,” he grumbles. “I deserve better.”
“There’s a tradition of scientists approaching senility to come up with grand, improbable theories,” the late physicist Freeman Dyson told Newsweek back in 2002. “Wolfram is unusual in that he’s doing this in his 40s.”
> “There’s a tradition of scientists approaching senility to come up with grand, improbable theories,” the late physicist Freeman Dyson told Newsweek back in 2002. “Wolfram is unusual in that he’s doing this in his 40s.”
That is a brutal takedown. Did Dyson and Wolfram have a math beef going or something?
I think it's more that Wolfram has stepped on enough toes that he earns takes like this.
He's an interesting character, and rare in that he is both obviously very intelligent and yet not nearly as intelligent as he thinks he is.
I suspect he's the sort of person who can't stand the idea that he is not the smartest guy in the room - in perception or reality. He may well have constructed his career as an "outsider" to reduce the occurrences of this, perhaps not intentionally.
See, for example: A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity (2002) http://bactra.org/reviews/wolfram/
I met Wolfram some 20 years before that review, when he was on a world tour promoting the earliest iterations of Mathematica, the first iteration after his symbolic differentiation work.
This was the period when cellular automata, Mandelbrot sets, and symbolic math were pretty hot topics around math departments - computer-assisted proofs on monster groups in symbolic algebra were recent, Cayley (the first iteration of Magma) was being written at Sydney University, etc.
Even then he had many of the traits that Cosma Shalizi described in the linked review above and was already dismissing various people for their 'poor ideas' and later claiming those ideas as his own.
He's a smart guy. He swam in waters filled with smart people, some smarter. He was never, IMHO, as smart as his own legend, as authored by himself.
I like Freeman Dyson, but it seems like he really didn't pull punches much. He seems very laid back, and is to a degree, but if you watch a lot of interviews with him, you'll notice he throws around quite a lot of shade.
So, story time. I once interviewed Stephen Wolfram for IEEE's Software Engineering Radio, and I had a lot of fun doing it and he did too.
We ended up running way overtime because he was having fun showing me things with Mathematica. He is a fascinating person. I successfully kept him off his math / physics theories and on the idea of a programming language leading to better thinking and more breakthroughs.
I left the discussion pretty impressed by him, and he did voice some vague worries that maybe he'd gotten so focused on the idea of a notation for science in Mathematica that he neglected the actual work that sent him on this path. But he wasn't sure the notation wasn't more valuable in itself.
Notebooks, like Jupyter, clearly came from his work. The other thing he seems to have invented, which hasn't reached the mainstream, is having data sort of embedded in the programming language, in standard libraries, where it's easy to get the number of calories in the moon if it were made of cheese or whatever.
> Notebooks, like Jupyter, clearly came from his work
While I often hear this claim from Wolfram and his supporters, I have never seen any evidence that it was his innovation. MathCAD was the first software released with a notebook interface, and there was research using those ideas prior to the release of the first Mathematica notebook. Maybe his particular take was an improvement on the others, but the claim that it was entirely his idea seems to me to be 100% incorrect.
https://en.wikipedia.org/wiki/Notebook_interface#History
> having data sort of embedded in the programming language, in standard libraries, where it's easy to get the number of calories in the moon if it were made of cheese or whatever.
I predict that within 100 years, computers will be twice as powerful, ten thousand times larger, and so expensive that only the five richest kings of Europe will own them. -- Professor John Frink
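Frink jokes aside, the cheese-moon query is really just curated data plus unit arithmetic. A back-of-the-envelope sketch in Python, where every constant is my own rough assumption for illustration rather than anything a real units library ships:

```python
import math

# Rough, hand-rolled version of the "calories in a moon made of cheese" query.
# All figures are assumptions: the moon's mean radius (~1,737 km), a
# cheddar-like density (~1,100 kg/m^3), and ~4 kcal per gram of cheese.
MOON_RADIUS_M = 1.7374e6
CHEESE_DENSITY_KG_M3 = 1100.0
KCAL_PER_GRAM = 4.0

volume_m3 = (4.0 / 3.0) * math.pi * MOON_RADIUS_M ** 3  # sphere volume
mass_g = volume_m3 * CHEESE_DENSITY_KG_M3 * 1000.0       # kg -> g
kcal = mass_g * KCAL_PER_GRAM

print(f"{kcal:.2e} kcal")  # on the order of 1e26 kcal
```

The point of the built-in data approach is that the density and radius lookups happen inside the language instead of being hand-typed constants like these.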
The units file itself is a worthy read just for the commentary.
https://frinklang.org/frinkdata/units.txt
Ha! That's marvelous. The rants on `mol`, `hertz` and `candela` are astonishing. Well worth the price.
// WARNING: Use of "Hz" will cause communication problems, errors, and make
// one party or another look insane in the eyes of the other.
//
// In other words, if you use the Hz in the way it's currently defined by the
// SI, as equivalent to 1 radian/s, you can point to the SI definitions and
// prove that you follow their definitions precisely. And your physics
// teacher will *still* fail you and your clients will think you're completely
// incompetent because 1 Hz = 2 pi radians/s. And it has for centuries.
// You are both simultaneously both right and both wrong.
// You cannot win.
// You are perfectly right. You are perfectly wrong. You look dumb and
// unreasonable. The person arguing the opposite looks dumb and unreasonable.
//
// Hz == YOU CANNOT WIN
//
// (Insert "IT'S A TRAP" image here.)
I'm surprised Stephen didn't claim he invented saving files. I used Macsyma before Wolfram and we had front ends that had worksheets. He definitely wasn't the first.
Agree with most everything; your description resonates (I've also been a Mathematica person since 1988).
Theo Gray is the one who came up with the notebook that iPython and later Jupyter were a multi-language shout-out to, and they cite it as such. Other UIUC professors wrote significant parts of Mathematica originally and were paid for it.
https://en.wikipedia.org/wiki/Theodore_Gray
Mathematica has no notation, and that's the worst thing about it.
Mathematica has M-Expressions, like S-Expressions, which are extremely powerful and human-like for reasoning in multiple logical (not geometric) dimensions (using Lisp-style macro expansion).
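For readers unfamiliar with the terminology, the head[arg1, arg2] shape of an M-expression maps one-for-one onto the (head arg1 arg2) shape of an S-expression. A toy sketch in Python using nested tuples as the shared expression tree (my own representation, not Mathematica's internals):

```python
# An expression is either an atom (string/number) or a tuple (head, *args).
def to_m(e):
    """Render as an M-expression, e.g. Plus[x, Times[2, y]]."""
    if isinstance(e, tuple):
        head, *args = e
        return f"{head}[{', '.join(to_m(a) for a in args)}]"
    return str(e)

def to_s(e):
    """Render the same tree as an S-expression, e.g. (Plus x (Times 2 y))."""
    if isinstance(e, tuple):
        return "(" + " ".join([e[0], *[to_s(a) for a in e[1:]]]) + ")"
    return str(e)

expr = ("Plus", "x", ("Times", 2, "y"))
print(to_m(expr))  # Plus[x, Times[2, y]]
print(to_s(expr))  # (Plus x (Times 2 y))
```

Both notations name the same tree; the dispute is only over which surface syntax is easier to reason in.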
It's been a long time since the discussion, but I think he was getting at the Sapir-Whorf hypothesis: if you could express certain ideas easily, it would enable thinking about certain things more deeply and intelligently.
He is bright. He has important things to say. Learning what he has to say takes way too much time. Until he gets a merciless editor, I won't listen to him.
I remember buying his A New Kind of Science book before it was available for free. It was interesting to read, and it was good enough to impress a college kid like me back then. But now, looking back, I wonder which fields of science it has actually advanced. It's been more than 20 years already, and with a title like that, you'd expect completely upturned physics, biology, and other disciplines based on it.
After pondering it for a while, the problem is that CAs are too chaotic to use, and there is simply no way to overcome this, nor is there even particularly a reason to try.
Anything that is Turing Complete is going to exhibit at least some degree of what I call "Turing Chaos": the sort of chaos you face in trying to understand what a given Turing Complete system will do, given that such a system will include some equivalent of "if (something) { run this program } else { run that program }". That means there is inevitably going to be uncertainty amplification in any attempt to understand a program. By "uncertainty amplification" I mean exactly what anyone who has ever tried to understand a code base has been through: your uncertainty about what the "something" value is gets amplified into the question of which entire program is being run, and that can iterate for quite a while. It's very chaotic.
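That "uncertainty amplification" can be sketched in a few lines: one unknown input bit selects which entire program runs, so one bit of uncertainty becomes uncertainty about everything downstream (toy programs, obviously):

```python
def program_a(xs):
    return sorted(xs)            # one possible downstream behavior

def program_b(xs):
    return [x * x for x in xs]   # a completely different one

def dispatch(bit, xs):
    # The "if (something) { run this program } else { run that program }"
    # shape described above: the single bit picks the whole program.
    return program_a(xs) if bit else program_b(xs)

data = [3, 1, 2]
print(dispatch(True, data))   # [1, 2, 3]
print(dispatch(False, data))  # [9, 1, 4]
```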
However, for all that, and despite the famous way in which changing a single bit of a program may completely change how it operates, in practice with real human programs changing a single random bit is statistically most likely to have no user-visible impact. We spend a lot of time constraining our system's chaos. We have to. We can't work with systems in which literally every bit change completely changes the program.
However, CAs tend to work that way. A single bit flip will spread out at the relevant "speed of light" and change everything.
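That light-cone spread is easy to demonstrate. A quick rule 30 sketch in Python, comparing a run against the same run with one cell flipped (ring boundary and toy parameters are my own choices):

```python
import random

# Elementary CA rule 30: new cell = left XOR (center OR right), on a ring.
def step(cells):
    n = len(cells)
    return [cells[i - 1] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

random.seed(0)
n, flip_at, steps = 101, 50, 30
a = [random.randint(0, 1) for _ in range(n)]
b = a[:]
b[flip_at] ^= 1  # the single-bit perturbation

for _ in range(steps):
    a, b = step(a), step(b)

diff = [i for i in range(n) if a[i] != b[i]]
# The difference front moves right at exactly one cell per step (the XOR
# with the left neighbor propagates any flip deterministically), so the
# perturbation fills out a light cone instead of dying away.
print(len(diff), min(diff), max(diff))
```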
As a result, while they may be some of the simplest Turing Complete things, they are humanly useless. They are not useful for modeling processes; you have to be too precise with the initial states, and the thing you are modeling has to be too precise in its usage of the CA rules. They are not useful for engineering, which is precisely why we don't use them.
Or, to put it in a nutshell, while A New Kind Of Science is full of pretty pictures and legitimately interesting ideas... it's also in essence, comprehensively wrong. Not a "not even wrong"; it rises to the level of "real" wrongness. But it's comprehensively, from top to bottom, wrong about practical utility or any future practical utility.
(You can sit down and try to strip this characteristic from a sufficiently well-designed CA, but getting the precise balance of just the right amount of chaos is going to be difficult, and getting it to be also somehow useful afterwards raises the bar even higher. In the meantime, I've got von Neumann machines right here for people who want to do real work, and the lambda calculus for people who want to work directly in mathematical abstractions without going insane, so... why?)
> A New Kind Of Science is full of pretty pictures and legitimately interesting ideas... it's also in essence, comprehensively wrong. Not a "not even wrong"; it rises to the level of "real" wrongness. But it's comprehensively, from top to bottom, wrong about practical utility or any future practical utility.
Yeah, after 20-some years, that has to be the answer. At its basic level I think it just exploited the fact that people, including me, like to see interesting or complicated patterns, especially arising out of simple iterative rules like https://en.wikipedia.org/wiki/Rule_30 or https://en.wikipedia.org/wiki/Rule_110.
Of course, I can see how snail shell patterns or other natural patterns might be generated by a similar process, but it's nowhere near revolutionizing any science the way the title claimed.
But Wolfram, being Wolfram, doesn't give up. There is the https://www.wolframinstitute.org and there is some activity there. I periodically drop by to see what's happening.
> They are not useful for engineering, which is precisely why we don't use them.
Exactly, you'd think by now there'd be some AI super-chip or something tangible based on the cellular automata thing discovered by Wolfram.
He's full of himself but has interesting things to say.
WolframAlpha is a gem in its own right. Yeah, we have Gemini, GPT, Mixtral, but when it comes to actual compositional compute, WolframAlpha gets you the right answer and shows you the math.
In the four-hour conversation on the topic, he says, "if you want to know the weights of various dinosaurs, you can ask WolframAlpha and it will tell you". So I asked it "what's the weight of a stegosaurus", and it gave me some number. Then I asked it "what's the total weight of all the stegosauruses that ever lived" and it gave me some nonsense about the average [don't remember] for the population of the US. It didn't even understand the question. Calling it compute is overestimating it by as much as Wolfram overestimates himself.
Here's a question. Why isn't Wolfram Research considered a sexy employer? They have cool research and technical problems and cool software. I looked a bit into it but I could only see them hiring in 3rd world countries, and only contractors. So are they a bad employer or what gives?
I had a friend who worked there. A very smart Ivy League-educated math PhD. He was very critical of the work culture at Wolfram Research. Apparently in his experience Stephen Wolfram treated people badly.
Are you able/allowed to provide more details?
- They pay like shit (compared to "sexy employers")
- Headquartered in Champaign, Illinois (not a bad town, but not sexy)
- As "cool" as their software is, not a lot of people use it. Python is eating their lunch, ESPECIALLY outside of academia. Although, they're losing ground in academia as well
- Stephen Wolfram isn't a charismatic leader who is fun to work for. There's no shortage of stories of him short circuiting in meetings and treating employees disrespectfully.
- They're not doing quite as much cutting edge stuff (that matters, at least) these days. Their AI/ML suite isn't that interesting, numpy/scipy does a lot of numerical stuff better, Matlab does a lot of stuff (like digital signal processing, for example) better. And Python, being free and open source, is a better prototyping language for most stuff. Symbolic computing is probably the one place it is actually a leader in... but for so many applications in the real world (engineering, r&d, real-time algorithms, etc) symbolic computing simply isn't needed.
As you hint at, they can attract some talent because there are opportunities to work on some niche stuff that's hard to work on elsewhere. But that's a minority of roles at the company.
Source: Used to work there.
How do they pay compared to other midwestern employers?
Thanks for the insight.
I used to be a mathematician and looked a bit into working as a numerical mathematician on numerical or optimization software. What I noticed is that salaries do tend to be significantly lower than FAANG. Maybe that's because it is a niche and there aren't lots of employers around doing that sort of work.
I think they're very traditional – I don't think software or engineering is their edge, and I suspect they outsource a lot of that to cheaper locations – they see themselves as being more research focused.
Their Glassdoor reviews used to be a bit dodgy too, nothing you wouldn't be able to guess after watching 10 minutes of any video of you-know-who though.
They are also just small, only a few hundred people if I remember correctly.
He's also quite a bit less self-aggrandizing.
OTOH, they do seem to like each other, which in my book does cast some doubt on Taleb's ability to judge people...