> llama.cpp has been just fine for me.
Of course, so you really shouldn't use Ollama then.
Ollama isn't a hobby project anymore; they were the only ones at the table with OpenAI many months before the release of GPT-OSS. I honestly don't think they care one bit about the community drama at this point. We don't have to like it, but I guess now they get to shape the narrative. That's their stance, and likely the stance of their industry partners too. I'm just the messenger.
In the spirit of TFA:
This isn't true, at all. I don't know where the idea comes from.
You've been repeating this claim frequently, and you were corrected on it 2 hours ago. llama.cpp had early access to it just as well.
It's bizarre for several reasons:
1. It is a fantasy that engineering involves seats at tables and bands of brothers growing from a hobby into a ???. It's a fantasy I find appealing and romantic, but a fantasy nonetheless. Besides, no one mentioned or implied anything about it being a hobby or unserious.
2. Even if it weren't a fantasy, it's definitely not what happened here. That's what TFA is about, ffs.
No heroics. They got the ultimate embarrassing thing that can happen to a project piggybacking on FOSS: Ollama can't work with the materials OpenAI put out to help Ollama users, because llama.cpp and Ollama had separate day-1 landings of code, and Ollama has zero path to forking literally the entire community over to their format. They were working so loosely with OpenAI that OpenAI assumed they were being sane and weren't attempting to use the launch as an excuse to force a community fork of GGUF, and no one realized until after it shipped.
3. I've seen multiple comments from you this afternoon spinning out odd narratives about Ollama and llama.cpp that don't make sense on their face from the perspective of someone who also depends on llama.cpp. AFAICT you understood the GGML fork as some halcyon moment of freedom / non-hobbyness for a project you root for. That's fine. Unfortunately, reality is intruding, hence TFA. Given that you're aware of this, your humbleness re: knowing what's going on here sounds very fake, especially when it precedes another rush of false claims.
4. I think at some point you owe it, if not to the community then to yourself, to take a step back and slow down on the misleading claims. I'm seeing more of a Gish gallop than an attempt to recalibrate your technical understanding.
It's been almost 2 hours since you claimed you were sure there were multiple huge breakages due to bad code quality in llama.cpp, and here we see you reframe that claim as a much weaker one that someone else vaguely made to you.
Maybe a good first step toward avoiding information pollution here would be to take the time you've been spending repeating other people's technical claims you didn't understand, and instead find some of those breakages you know for sure happened, as promised previously.
In general, I sense a passionate but youthful spirit, not an astroturfer, and this isn't a group of professionals being disrespected because people still think they're a hobby project. Again, that's what the article is about.
It’s only when you need to apply it to domains outside of code, or to a domain where it needs to actually reason, that it becomes an issue.