Remote: Yes
Willing to relocate: Yes
Technologies: C++, Linux
Resume: jtag.bearblog.dev/resume
Email: jakob.koblinsky@gmail.com
Hi. I'm a recent Computer Engineering graduate looking for my first role.
This is such a bizarre take.
The relation associating each human with the list of all words they will ever say is obviously a function.
> almost magical human-like powers to something that - in my mind - is just MatMul with interspersed nonlinearities.
There's a rich family of universal approximation theorems [0]. Intuitively, stacking linear maps with nonlinear cutoffs between them can approximate any continuous function, and this intuition can be made rigorous.
The reason LLMs are big now is that transformers and large amounts of data made it economical to compute a family of reasonably good approximations.
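To make the "just MatMul with interspersed nonlinearities" point concrete, here's a minimal sketch (all names and hyperparameters are made up for illustration): a one-hidden-layer ReLU network fit to sin(x) with hand-written gradient descent. Two matrix multiplies with one nonlinearity in between are enough to approximate this nonlinear function well.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

H = 64                               # hidden width (arbitrary choice)
W1 = rng.normal(0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))
b2 = np.zeros(1)

lr = 1e-2
for step in range(5000):
    h = np.maximum(x @ W1 + b1, 0.0)   # MatMul, then ReLU nonlinearity
    pred = h @ W2 + b2                 # second MatMul
    err = pred - y
    loss = np.mean(err ** 2)

    # Backpropagation written out by hand.
    g_pred = 2 * err / len(x)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_h[h <= 0] = 0.0                  # ReLU gradient mask
    g_W1 = x.T @ g_h
    g_b1 = g_h.sum(0)

    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final MSE: {loss:.4f}")
```

The universal approximation theorems say a wide enough hidden layer can drive this error arbitrarily low on a compact domain; the training loop is just one way of actually finding such weights.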
> The following is uncomfortably philosophical, but: In my worldview, humans are dramatically different things than a function . For hundreds of millions of years, nature generated new versions, and only a small number of these versions survived.
This is just a way of generating certain kinds of functions.
Think of it this way: do you believe there's anything about humans that exists outside the mathematical laws of physics? If so, that's essentially a religious position (or, more literally, a belief in the supernatural). If not, then functions and approximations to functions are what the human experience boils down to.
[0] https://en.wikipedia.org/wiki/Universal_approximation_theore...
Let Q · P = 0, which means Q is perpendicular to P.
To choose between R = Q and R = -Q, require R · (-p) > 0: since -p points from the line containing p and S(-p) toward the origin, one more dot product settles the sign.
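A hypothetical 2D sketch of the two dot-product checks above (the thread's S(-p) is context we don't have, so this only illustrates the perpendicularity test and the sign choice; the vectors P, p are made up):

```python
import numpy as np

P = np.array([1.0, 0.0])           # direction of the line
p = np.array([2.0, 3.0])           # a point on the line: {p + t*P}
Q = np.array([-P[1], P[0]])        # rotate P by 90 degrees, so Q . P = 0
assert np.isclose(Q @ P, 0.0)

# Pick R = Q or -Q so that R . (-p) > 0, i.e. R points from the
# line toward the origin rather than away from it.
R = Q if Q @ (-p) > 0 else -Q
assert R @ (-p) > 0
```

Here the line is horizontal through y = 3, so the chosen R is (0, -1), the perpendicular that points down toward the origin.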
It's not very credible. There are individual fragments that make sense but it's not consistent when taken together.
For example, by making reference to Gödelian problems and his overall mistrust of infinitary structures, he's implicitly endorsing ultrafinitism (not just finitism, because e.g. PRA, the usual theory for finitary proofs, also falls prey to Gödel's incompleteness theorems). But this is inconsistent with his expressed support for CASes, which very happily manipulate structures that are meant to be infinitary.
He tries to justify this on the grounds that CASes only perform a finite number of symbol manipulations to arrive at an answer, but the same is true of Lean, otherwise typechecking would never terminate. Indeed, it is true of any formal system you could run on a computer.
Leaving aside his inconsistent set of arguments for CAS over Lean (and there isn't really a strong distinction between the two honestly; you could argue that Lean and other dependently typed proof assistants are just another kind of CAS), his implicit support of ultrafinitism would already require a huge amount of work to make applicable to a computer system. There isn't a consensus on the logical foundations of ultrafinitism yet, so building a proof checker that satisfies ultrafinitistic demands isn't even well-defined and would require a huge amount of theory crafting.
And just for clarity, finitism is the notion that unboundedness is okay but actual infinities are suspect. E.g. it's okay to say "there are an infinite number of natural numbers" which is understood to be shorthand for "there is no bound on natural numbers" but it's not okay to treat the infinitary object N of all natural numbers as a real thing. So e.g. some finitists are okay with PA over PRA.
Ultrafinitists, on the other hand, deny even unboundedness and say that sufficiently large natural numbers simply do not exist. The most common operationalization is that whether a number exists is a matter of a computation that scales with the size of the number; if that computation has not completed, we cannot have confidence the number exists, and hence sufficiently large numbers, whose relevant computations have not been completed, do not exist. This means that quantification, i.e. statements of the form "for all natural numbers...", is very dangerous, and there is not yet a complete consensus on the correct ultrafinitistic formalization of such statements (or on whether they would ever be considered coherent).