The architecture has to keep gradient descent viable as a training strategy, which means no hard branching (routing is bolted on).
And the training data has to exist: you can't find millions of pages recording every thought a person went through before writing something. Such data can't exist anyway, because most thoughts aren't even language.
Reinforcement learning may seem like the answer here: brute-force the thinking into existence. But it's grossly sample-inefficient when trained with gradient descent, which is why it's only used for fine-tuning.
LLMs are autoregressive models, and the configuration that was chosen, where every token can only look back, allows for very sample-efficient training (one sentence yields dozens of samples).
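A minimal sketch of why that is (the sentence and tokens below are made up): with a causal mask, every prefix of a sequence becomes its own (context, next token) training example, so one pass over a single sentence supervises many predictions at once.

    # Toy illustration: one tokenized sentence yields many next-token training pairs.
    # Token strings are made up; a real LLM uses subword IDs and a causal attention
    # mask so all of these predictions are trained in a single forward pass.
    tokens = ["The", "cat", "sat", "on", "the", "mat", "."]

    training_pairs = [
        (tokens[:i], tokens[i])   # (everything seen so far, token to predict next)
        for i in range(1, len(tokens))
    ]

    for context, target in training_pairs:
        print(f"{' '.join(context):<26} -> {target}")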
It would be interesting if in the very distant future, it becomes viable to use advanced brain scans as training data for AI systems. That might be a more realistic intermediate between the speculations into AGI and Uploaded Intelligence.
</scifi>
I have a 2008 Acer, a 2018 Thinkpad, a 2019 HP, a 2024 Framework, and a 2024 MacBook.
I can't stand 1080p for personal use anymore, and never in my life on Windows or Linux have I gotten more than 4 hours out of a battery.
Framework competes on repairability, price and OS choice. Pound for pound, MacBook is a much better piece of hardware.
Now I’m not sure whether to install Linux on it (I’ve used Linux as my main OS before the Mac), or try to downgrade to Catalina again and just build whatever software I need from source.
Please note that even GNU themselves require you to do this; see e.g. GNU Emacs, which requires copyright assignment to the FSF when you submit patches. So there are legitimate reasons to do this other than being able to close the source later.
You can just use the AGENTS.md file as an index pointing to other doc files.
This example does that -
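As a purely hypothetical illustration (this is not the example linked here, and the file names are made up), an index-style AGENTS.md can be as small as:

    # AGENTS.md
    Read the linked docs before making changes:
    - docs/architecture.md  -- how the code is laid out
    - docs/conventions.md   -- naming, formatting, and testing rules
    - docs/deploy.md        -- how releases are built and shipped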
Having an in-your-face file that links to a hidden file serves no purpose.
(I’m currently using Org-mode, but the approach is the same.)
Nim fixes many of the issues I had with Python. First, I can now make games with Nim because it's super fast and easily interfaces with all of the high-performance OS and graphics APIs. Second, typos no longer cause crashes in production, because the compiler checks everything: if it compiles, it runs. Finally, refactors are easy, because the compiler practically guides you through them. The cross-compiling story is great: you can compile to JS for the front end. You can use PyTorch and NumPy from Nim. You can write CUDA kernels in Nim. It can do everything.
See: https://www.reddit.com/r/RedditEng/comments/yvbt4h/why_i_enj...