The space shuttle situation, though, is a disaster.
What does this mean? The 800,000 previously published articles will stay paywalled and only the new stuff will be open? Or will stuff be open to individuals while institutions have to keep paying? Or what?
https://youtu.be/wo84LFzx5nI?t=823
He mentions Alan Kay about a dozen times and uses quotes and dates to create a specific narrative about Smalltalk. That narrative is demonstrably false.
As far as the narrative, probably the clearest expression of Casey's thesis is at https://youtu.be/wo84LFzx5nI?t=6187 "Alan Kay had a degree in molecular biology. ... [he was] thinking of little tiny cells that communicate back and forth but which do not reach across into each other's domain to do different things. And so [he was certain that] that was the future of how we will engineer things. They're going to be like microorganisms where they're little things that we instance, and they'll just talk to each other. So everything will be built that way from the ground up." AFAICT the gist of this is true: Kay was indeed inspired by biological cells, and that is why he emphasized message-passing so heavily. His undergraduate degree was in math + bio, not just bio, but close enough.
As far as specific discussion, Casey says, regarding a quote on inheritance: https://youtu.be/wo84LFzx5nI?t=843 "that's a little bit weird. I don't know. Maybe Alan Kay... will come to tell us what he actually was trying to say there exactly." So yeah, Casey has already admitted he has no understanding of Alan Kay's writings. I don't know what else you want.
It was pretty clear, even 20 years ago, that OOP had major problems in terms of what Casey Muratori now calls "hierarchical encapsulation" of problems.
One thing that really jumped out at me was his quote [0]:
> I think when you're designing new things, you should focus on the hardest stuff. ... we can always then take that and scale it down ... but it's almost impossible to take something that solves simple problems and scale it up into something that solves hard [problems]
I understand the context but this, in general, is abysmally bad advice. I'm not sure about language design or system architecture but this is almost universally not true for any mathematical or algorithmic pursuit.
I would say, if you have to design a good consensus algorithm, PBFT is a much better starting point, and can indeed be scaled down. If you have to run something tomorrow, the majority-vote code probably runs as-is, but doesn't help you with the literature at all. It's essentially the iron triangle - good vs. cheap. In the talk the speaker was clearly aiming for quality above all else.
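To make the contrast concrete, here is roughly what I mean by "the majority-vote code" - a toy sketch (the function name and structure are mine, not from the talk) that assumes honest, live nodes and gives you none of PBFT's Byzantine guarantees, which is exactly why it scales down to a few lines but not up:

```python
from collections import Counter

def majority_decide(votes):
    """Pick the value proposed by a strict majority of nodes, else None.

    No fault tolerance, no view changes, no signatures - the
    'runs tomorrow' end of the spectrum, not PBFT."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count > len(votes) // 2 else None

# Five nodes, one dissenting vote: a majority exists.
assert majority_decide(["commit", "commit", "commit", "abort", "commit"]) == "commit"
# Three-way split: no majority, so the naive version just gives up.
assert majority_decide(["a", "b", "c"]) is None
```

Everything PBFT adds (pre-prepare/prepare/commit phases, view changes, the 3f+1 replica bound) exists precisely because the assumptions above don't hold in adversarial settings - which is the "can't scale the simple thing up" half of the argument.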
https://www.youtube.com/watch?v=QjJaFG63Hlo
Also, earlier versions of Smalltalk did not have inheritance. Kay talks about this in his 1993 article on the history of the language:
https://worrydream.com/EarlyHistoryOfSmalltalk/
Dismissing all of this as insignificant quibbles is ludicrous.
But Python/Julia/Lua are by no means the most natural languages - what is natural is what people write before the LLM, the stuff the LLM translates into Python. It is hard to get a good look at these "raw prompts" because the LLM companies keep those datasets closely guarded, but judging from HumanEval, MBPP+, YouTube videos of people vibe coding, and so on, it is clear that it is mostly English prose, with occasional formulas and code snippets thrown in, and also that it is not "ugly" text but generally pre-processed through an LLM. So from my perspective the next step is to switch from Python as the source language to prompts as the source language - integrating LLMs into the compilation pipeline is a logical step. But, currently, they are too expensive to use consistently, so this is blocked by hardware development economics.
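For what it's worth, a minimal sketch of what "prompts as the source language" could look like today, assuming the OpenAI Python client - the model name, prompts, and function are placeholders of mine, not any existing tool:

```python
# Toy "prompt compiler": treat an English spec as the source file and ask
# an LLM to lower it to Python. Assumes `pip install openai` and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def compile_prompt(spec: str) -> str:
    """Lower an English spec to Python source. A real pipeline would pin
    the model version, strip stray markdown fences, cache by spec hash,
    and test the output before ever executing it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "Translate the user's spec into a single Python "
                        "module. Output only code, no prose."},
            {"role": "user", "content": spec},
        ],
    )
    return response.choices[0].message.content

print(compile_prompt("Read numbers from stdin, one per line, "
                     "and print their median."))
```

Note that every "compile" here is a paid API round-trip, which is exactly the economics problem: you can't re-lower the whole source tree on every build the way a normal compiler does.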
Now as far as how fine-tuning affects model performance, it is pretty simple: it improves fit on the fine-tuning data and decreases fit on the original training corpus. Beyond that, yeah, it is hard to say whether fine-tuning will help you solve your problem. My experience has been that it always hurts generalization, so if you aren't getting reasonable results with a base or chat-tuned model, fine-tuning further will not help; but if you are getting results, fine-tuning will make them more consistent.
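One cheap way to see this tradeoff for yourself is to track perplexity on two held-out sets - one from the fine-tuning distribution, one from general text - before and after tuning. A sketch assuming Hugging Face transformers; the checkpoint path and the example lists are placeholders of mine:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, texts):
    """Average perplexity of a causal LM over a list of strings."""
    model.eval()
    losses = []
    with torch.no_grad():
        for text in texts:
            ids = tokenizer(text, return_tensors="pt").input_ids
            # With labels=ids, the model returns mean next-token cross-entropy.
            losses.append(model(ids, labels=ids).loss)
    return torch.exp(torch.stack(losses).mean()).item()

tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder base model
base = AutoModelForCausalLM.from_pretrained("gpt2")
tuned = AutoModelForCausalLM.from_pretrained("./my-finetune")  # hypothetical checkpoint

domain = ["...held-out fine-tuning examples..."]   # fill with your data
general = ["...held-out general-corpus text..."]

for name, m in [("base", base), ("tuned", tuned)]:
    print(name, perplexity(m, tok, domain), perplexity(m, tok, general))
```

The pattern described above shows up as: the tuned model wins on `domain` and loses ground on `general`. If it isn't winning on `domain` either, more fine-tuning is unlikely to save you.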
“We” as in the select few countries that have the launch capability and the space tech.
Again, a public good is being commoditized and sold to the highest bidder.