1) Why wasn't OpenAI doing it themselves?
2) This would mean we've reached the technological singularity, if AI models can truly improve themselves (as in producing a smarter model, not just compressing existing ones like Deepseek).
If we see a real-world application that a business actually uses, or that people want to use, that's great. But why announce the prototype with only lab demos? It's premature. Better to wait until you have a good, working real-life use case to brag about.
Lol, you need to drive up hype and convince investors you're not falling behind. I'm not even being cynical here; I think it's a good idea from a business perspective.
People say this often, and I don't get it. Sanderson is a terrible writer, and his works and themes are nothing like ASOIAF. The only merit he has is that he finishes books quickly. He's not even a fan of the books; he says they're "too dark" for him. He's the worst possible choice!
But I don't want anyone to finish ASOIAF if George dies before it's over. It's his story and no one else can do it right. We did this already and everyone hated it!
Key corrections:
Ollama GPU usage - I was wrong. It IS using the GPU (verified at 96% utilization), so my "CPU-optimized backend" claim was incorrect. See the utilization-polling sketch after this list.
FP16 vs BF16 - enum caught the critical gap: I trained with BF16, tested inference with FP16 (broken), but never tested BF16 inference. "GPU inference fundamentally broken" was overclaimed; it should read "FP16 has issues, BF16 is untested (and likely works)." See the BF16/FP16 comparison sketch after this list.
llama.cpp - the official benchmark link veber-alex shared proves it works. My issues were likely version-specific, not representative.
ARM64+CUDA maturity - bradfa was right about the Jetson history: ARM64+CUDA is mature. What's new is the Blackwell+ARM64 combination, not ARM64+CUDA itself.
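If you want to reproduce the GPU check from the first correction, here is a minimal sketch using the NVIDIA management library bindings (pynvml). The device index, sample count, and 2-second interval are my assumptions, not details from the original setup; run it in one terminal while the model is generating in another.

```python
# Minimal GPU-utilization poller using pynvml (pip install nvidia-ml-py).
# Device index 0 and the sampling interval are assumptions; adjust as needed.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    for _ in range(15):  # sample for ~30 seconds
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU util: {util.gpu:3d}%  VRAM used: {mem.used / 2**30:.1f} GiB")
        time.sleep(2)
finally:
    pynvml.nvmlShutdown()
```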
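And here is roughly how I'd close the BF16 gap from the second correction: load the same checkpoint in bfloat16 and in float16 and compare greedy outputs. This is only a sketch against the Hugging Face transformers API; the model ID, prompt, and generation settings are placeholders, not the exact setup from the article.

```python
# Sketch: compare BF16 vs FP16 inference on the same checkpoint.
# MODEL_ID and PROMPT are placeholders -- substitute your own fine-tuned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/your-finetuned-model"  # placeholder
PROMPT = "Explain what BF16 is in one sentence."

def generate(dtype: torch.dtype) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=dtype).to("cuda")
    inputs = tokenizer(PROMPT, return_tensors="pt").to("cuda")
    with torch.no_grad():
        # Greedy decoding so the two precision runs are directly comparable.
        output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print("BF16:", generate(torch.bfloat16))
print("FP16:", generate(torch.float16))
```

If the BF16 run produces sensible text while the FP16 run degrades, that would support the narrower "FP16 has issues" claim rather than "GPU inference is broken."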
The HN community caught my incomplete testing, overclaimed conclusions, and factual errors.
Ship early, iterate publicly, accept criticism gracefully.
Thanks especially to enum, veber-alex, bradfa, furyofantares, stuckinhell, jasonjmcghee, eadwu, and renaudr. The article is significantly better now.