I was surprised at how difficult I found math. Now, I was never really great at math; logic and mental calculation I could do fairly well (above average), but foundational knowledge was hard and mathematical theory even harder. But now I had trouble even with integration and differentiation, and even with understanding a problem well enough to put it down as a formula. I am far from being the youngest anymore, but I was surprised at how shockingly bad I have become over the last 25-odd years. So I decided to change this in the coming months. I think in a way computers have actually made our brains worse; many problems can be auto-solved (Python with NumPy, SymPy, etc.), and computers work better than hand-held calculators, but math is actually surprisingly difficult without a computer. (Here I also include algorithms, by the way, or rather the theory behind algorithms. And of course I have also forgotten a lot of the mathematical notation - somehow programming is a lot easier than higher math.)
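To make the auto-solving point concrete, here is a minimal SymPy sketch (the function is a toy I picked for illustration, not any particular problem): a derivative, an antiderivative, and a definite integral each come back symbolically in one call.

```python
# Toy illustration: calculus that is tedious by hand is a one-liner in SymPy.
import sympy as sp

x = sp.symbols('x')
f = x**2 * sp.exp(-x)

derivative = sp.diff(f, x)                  # 2*x*exp(-x) - x**2*exp(-x)
antiderivative = sp.integrate(f, x)         # (-x**2 - 2*x - 2)*exp(-x)
definite = sp.integrate(f, (x, 0, sp.oo))   # 2

print(derivative)
print(antiderivative)
print(definite)
```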
Mathematicians actually do the same thing as scientists: hypothesis building through extensive investigation of examples - looking for examples that probe the boundary of established knowledge, trying to break existing assumptions, and so on. The difference comes after that, in the nature of the concluding argument. A scientist performs experiments to validate or refute the hypothesis, establishing scientific proof (a kind of conditional or statistical truth required only to hold up under certain conditions, those under which the claim was tested). A mathematician finds and writes a proof, or constructs a counterexample.
The failure of logical positivism and the rise of Popperian philosophy make it clear that we can't approach that concluding step in the natural sciences the way we do in maths, but in practice the distinction between the subjects is not so clear.
This is all without mentioning the much tighter coupling between the two modes of investigation at the boundary between maths and science, in subjects like theoretical physics. There the line blurs almost completely, and a major tool used by working physicists is literally pursuing mathematical consistency in their theories. This has been done to tremendous success (GR, Yang-Mills, the weak force) and with some difficulty (string theory).
————
Einstein understood all this:
> If, then, it is true that the axiomatic basis of theoretical physics cannot be extracted from experience but must be freely invented, can we ever hope to find the right way? Nay, more, has this right way any existence outside our illusions? Can we hope to be guided safely by experience at all when there exist theories (such as classical mechanics) which to a large extent do justice to experience, without getting to the root of the matter? I answer without hesitation that there is, in my opinion, a right way, and that we are capable of finding it. Our experience hitherto justifies us in believing that nature is the realisation of the simplest conceivable mathematical ideas. I am convinced that we can discover by means of purely mathematical constructions the concepts and the laws connecting them with each other, which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deduced from it. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed. - Albert Einstein
I'm not saying it can't be done; clearly it can be done, otherwise this article wouldn't exist. But it is not quite as easy as pointing a magic wand (aka an antenna) at a high-rise and saying '14th floor, apartment on the north-west corner', though that would obviously make for good cinema.
I am baffled by seriously intelligent people attributing almost magical, never-to-be-replicated powers to something that - in my mind - is just a biological robot driven by an SNN with a bunch of hardwired stuff. Let alone attributing "human intelligence" to a single individual, when it's clearly distributed between biological evolution, social processes, and individuals.
>something that - in my mind - is just MatMul with interspersed nonlinearities
Processes in all huge models (not necessarily LLMs) can be described using very different formalisms, just like Newtonian and Lagrangian mechanics describe the same physics. You can say that an autoregressive model is a stochastic parrot that learned the input distribution, or a next-token predictor, or that it does progressive pathfinding in a hugely multidimensional space, or pattern matching, or implicit planning, or, or, or... All of these descriptions are true, but only some are useful for predicting its behavior.
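To ground the "MatMul with interspersed nonlinearities" framing, here is a deliberately tiny sketch (random weights, made-up sizes, no training and no attention - nothing like a production LLM) of what a single next-token step boils down to computationally:

```python
# Minimal sketch: a next-token step as matrix multiplications with a
# nonlinearity in between, ending in a softmax over the vocabulary.
# Weights are random and sizes are made up; this is an illustration only.
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, d_hidden = 100, 32, 64

E  = rng.normal(size=(vocab, d_model))     # token embedding table
W1 = rng.normal(size=(d_model, d_hidden))  # first projection
W2 = rng.normal(size=(d_hidden, vocab))    # projection back to vocab logits

def next_token_distribution(token_id: int) -> np.ndarray:
    h = E[token_id]                    # embedding lookup
    h = np.maximum(h @ W1, 0.0)        # MatMul + ReLU nonlinearity
    logits = h @ W2                    # MatMul to vocabulary logits
    z = np.exp(logits - logits.max())  # numerically stable softmax
    return z / z.sum()

p = next_token_distribution(42)
print(p.shape, round(p.sum(), 6))      # (100,) 1.0 - a distribution over next tokens
```

Whether you then call sampling from that distribution parroting, pathfinding, or planning is a choice of description, not a property of the arithmetic.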
Given all that, I see absolutely no problem with anthropomorphizing an LLM to a certain degree, if it makes it easier to convey the meaning, and I do not understand the nitpicking. Yeah, it's not an exact copy of a single Homo sapiens specimen. Who cares.
AI image generators frequently refuse to create illustrations featuring the character; everybody is afraid of Disney.
Similarly, Disney’s Winnie the Pooh just looks like Margarete Steiff’s plush bear with a red shirt.
Very often, those who claim to have created an original work themselves just produced derivatives; at the very least, those should not be protected to the detriment of humankind.
Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, when before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help: even if it's not always right, it's at least more reliable, and you don't have to bother some grey beard who probably hates you anyway.