He (almost) single-handedly brought LLMs to the masses.
With the latest news of some AI engineers' compensation reaching up to a billion dollars, it feels a bit unfair that Georgi isn't getting a much larger slice of the pie.
Ollama does not use llama.cpp anymore; we still keep it and occasionally update it to remain compatible with older models from when we did use it. The team is great; we just have features we want to build, and we want to implement the models directly in Ollama. (We do use GGML and ask partners to help with it. It's the project that also powers llama.cpp and is maintained by that same team.)
1. Set up your dozens of /\.?claude.*\.(json|md)/i dotfiles? 2. Give insanely detailed prompts that took longer to write than the code itself? 3. Turn on auto-accept so that you can only review the code in one giant diff, preventing you from halting any bad design or errors on the first shot?"
> ...easy to hit the 5 hour window limit in just 2 hours
I've had this experience. It sucks especially when you're working in a monorepo, because the client and server both need to stay in context.
I love this. It’s no surprise that OSS projects need the occasional backlog grooming.
> But I've found this page to be downright helpful in most cases.
Perhaps you meant to say “UNhelpful”?
Yes, thanks for pointing it out!