lolinder · 7 months ago
So many here are trashing on Ollama, saying it's "just" nice porcelain around llama.cpp and it's not doing anything complicated. Okay. Let's stipulate that.

So where's the non-sketchy, non-for-profit equivalent? Where's the nice frontend for llama.cpp that makes it trivial for anyone who wants to play around with local LLMs without having to know much about their internals? If Ollama isn't doing anything difficult, why isn't llama.cpp as easy to use?

Making local LLMs accessible to the masses is an essential job right now—it's important to normalize owning your data as much as it can be normalized. For all of its faults, Ollama does that, and it does it far better than any alternative. Maybe wait to trash it for being "just" a wrapper until someone actually creates a viable alternative.

chown · 7 months ago
I totally agree with this. I wanted to make it really easy for non-technical users with an app that hid all the complexities. I basically just wanted to embed the engine without making users open their terminal, let alone configure anything. I started with llama.cpp and almost gave up on the idea before I stumbled upon Ollama, which made the app happen[1]

There are many flaws in Ollama but it makes many things much easier, esp. if you don’t want to bother with building and configuring. They do take a long time to merge any PRs though. One of my PRs has been waiting for 8 months, and there was another PR about KV cache quantization that took them 6 months to merge.

[1]: https://msty.app

zozbot234 · 7 months ago
> They do take a long time to merge any PRs though.

I guess you have a point there, seeing as after many months of waiting we finally have a comment on this PR from someone with real involvement in Ollama - see https://github.com/ollama/ollama/pull/5059#issuecomment-2628... . Of course this is very welcome news.

smcleod · 7 months ago
That qkv PR was mine! Small world.
washadjeffmad · 7 months ago
>So where's the non-sketchy, non-for-profit equivalent

llama.cpp, kobold.cpp, oobabooga, llmstudio, etc. There are dozens at this point.

And while many chalk the attachment to ollama up to a "skill issue", that's really just venting frustration that all a project has to do to win the popularity contest is repackage something else and market it as an "app".

I prefer first-party tools, I'm comfortable managing a build environment and calling models using pytorch, and ollama doesn't really cover my use cases, so I'm not its audience. I still recommend it to people who might want the training wheels while they figure out how not-scary local inference actually is.

woadwarrior01 · 7 months ago
> llmstudio

ICYMI, you might want to read their terms of use:

https://lmstudio.ai/terms

evilduck · 7 months ago
> llama.cpp, kobold.cpp, oobabooga

None of these three are remotely as easy to install or use. They could be, but none of them are even trying.

> lmstudio

This is a closed source app with a non-free license from a business not making money. Enshittification is just a matter of when.

Aurornis · 7 months ago
It’s so hard to decipher the complaints about ollama in this comment section. I keep reading comments from people saying they don’t trust it, but then they don’t explain why they don’t trust it and don’t answer any follow up questions.

As someone who doesn’t follow this space, it’s hard to tell if there’s actually something sketchy going on with ollama or if it’s the usual reactionary negativity that happens when a tool comes along and makes someone’s niche hobby easier and more accessible to a broad audience.

bloomingkales · 7 months ago
> they don’t explain why they don’t trust it

We need to know a few things:

1) Show me the lines of code that log things and how it handles temp files and storage.

2) No remote calls at all.

3) No telemetry at all.

This is the feature list I would want to begin trusting. I use this stuff, but I also don’t trust it.
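
For the "no remote calls" and "no telemetry" points, one rough spot check (not a substitute for reading the source) is to watch what sockets the running daemon actually opens during plain local inference. A minimal sketch, assuming psutil is installed and the process is literally named "ollama"; note that pulling a model legitimately needs the network, so the interesting question is what gets contacted when you're only generating text:

    import psutil

    # List any non-localhost connections held by a process named "ollama".
    # May need elevated privileges to inspect another user's processes.
    for proc in psutil.process_iter(["pid", "name"]):
        if proc.info["name"] != "ollama":
            continue
        for conn in proc.connections(kind="inet"):
            # conn.raddr is empty for listening sockets; flag anything remote.
            if conn.raddr and conn.raddr.ip not in ("127.0.0.1", "::1"):
                print(proc.info["pid"], "->", f"{conn.raddr.ip}:{conn.raddr.port}", conn.status)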

traverseda · 7 months ago
>So where's the non-sketchy, non-for-profit equivalent?

Serving models is currently expensive. I'd argue that some big cloud providers have conspired to make egress bandwidth expensive.

That, coupled with the increasing scale of the internet, makes it harder and harder for smaller groups to do these kinds of things. At least until we get a good content-addressed distributed storage system.

woadwarrior01 · 7 months ago
> Serving models is currently expensive. I'd argue that some big cloud providers have conspired to make egress bandwidth expensive.

Cloudflare R2 has unlimited egress, and AFAIK, that's what ollama uses for hosting quantized model weights.

buyucu · 7 months ago
Supporting Vulkan will help Ollama reach the masses who don't have dedicated GPUs from Nvidia.

This is such low-hanging fruit that it's silly how they are acting.

lolinder · 7 months ago
As has been pointed out in this thread in a comment that you replied to (so I know you saw it) [0], Ollama goes through a lot of contortions to support multiple llama.cpp backends. Yes, their solution is a bit of a hack, but it means that the effort of adding a new backend is substantial.

And again, they're doing those contortions to make it easy for people. Making it easy involves trade-offs.

Yes, Ollama has flaws. They could communicate better about why they're ignoring PRs. All I'm saying is let's not pretend they're not doing anything complicated or difficult when no one has been able to recreate what they're doing.

[0] https://news.ycombinator.com/item?id=42886933

bestcoder69 · 7 months ago
lolinder · 7 months ago
Llamafile is great but solves a slightly different problem very well: how do I easily download and run a single model without having any infrastructure in place first?

Ollama solves the problem of how I run many models without having to deal with many instances of infrastructure.
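
To make that concrete: with Ollama, the model is just a field on each request to the local API, and the single daemon loads and unloads weights behind the scenes. A minimal sketch of that workflow, assuming Ollama is running on its default port and the named models have already been pulled:

    import requests

    def ask(model: str, prompt: str) -> str:
        # One long-running daemon on localhost:11434; switching models is just
        # a different value in the request body, no restart needed.
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        r.raise_for_status()
        return r.json()["response"]

    print(ask("llama3.2", "Say hi in five words."))
    print(ask("qwen2.5-coder", "Write a haiku about GPUs."))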

homebrewer · 7 months ago
It's actually more difficult to use on linux (compared to ollama) because of the weird binfmt contortions you have to go through.
axegon_ · 7 months ago
I think you are missing the point. To get things straight: llama.cpp is not hard to set up and get running. It was a bit of a hassle in 2023, but even then it was not catastrophically complicated if you were willing to read the errors you were getting.

People are dissatisfied for two very valid reasons. First, ollama gives little to no credit to llama.cpp. The second one is the point of the post: a PR that has been open for over 6 months, and not a huge PR at that, has been completely ignored. Perhaps the ollama maintainers personally have no use for it so they shrugged it off, but this is the equivalent of "it works on my computer". Imagine if all kernel devs used Intel CPUs and ignored every non-Intel CPU-related PR. I am not saying that the kernel mailing list is not a large-scale version of a countryside pub on a Friday night - it is. But the maintainers do acknowledge the efforts of people making PRs and do a decent job of addressing them. While small, the PR here is not trivial and should have been, at the very least, discussed.

Yes, the workstation/server I use for running models uses two Nvidia GPUs. But my desktop computer uses an Intel Arc, and in some scenarios, hypothetically, this PR might have been useful.
lolinder · 7 months ago
> To get things straight: llama.cpp is not hard to setup and get running. It was a bit of a hassle in 2023 but even then it was not catastrophically complicated if you were willing to read the errors you were getting.

It's made a lot of progress in that the README [0] now at least has instructions for how to download pre-built releases or docker images, but that requires actually reading the section entitled "Building the Project" to realize that it provides more than just building instructions. That is not accessible to the masses, and it's hard for me to not see that placement and prioritization as an intentional choice to be inaccessible (which is a perfectly valid choice for them!)

And that's aside from the fact that Ollama provides a ton of convenience features that are simply missing, starting with the fact that it looks like with llama.cpp I still have to pick a model at startup time, which means switching models requires SSHing into my server and restarting it.
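
For contrast, a rough sketch of the llama.cpp side, assuming a stock llama-server launched against a single GGUF (the "model" field in the request is effectively cosmetic here, since the server only has the one model it was started with):

    # Server started once with a fixed model, e.g.:
    #   ./llama-server -m ./models/some-model-q4_k_m.gguf --port 8080
    # Requests go to its OpenAI-compatible endpoint; swapping models means
    # restarting the process with a different -m argument.
    import requests

    r = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "whatever",  # the loaded GGUF answers regardless
            "messages": [{"role": "user", "content": "Hello!"}],
        },
        timeout=300,
    )
    print(r.json()["choices"][0]["message"]["content"])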

None of this is meant to disparage llama.cpp: what they're doing is great and they have chosen to not prioritize user convenience as their primary goal. That's a perfectly valid choice. And I'm also not defending Ollama's lack of acknowledgment. I'm responding to a very specific set of ideas that have been prevalent in this thread: that not only does Ollama not give credit, they're not even really doing very much "real work". To me that is patently nonsense—the last mile to package something in a way that is user friendly is often at least as much work, it's just not the kind of work that hackers who hang out on forums like this appreciate.

[0] https://github.com/ggerganov/llama.cpp

portaouflop · 7 months ago
llama.cpp is hard to set up - I develop software for a living and it wasn’t trivial for me. Ollama I can give to my non-technical family members and they know how to use it.

As for not merging the PR - why are you entitled to have a PR merged? This attitude of entitlement around contributions is very disheartening as an OSS maintainer - it’s usually more work to review/merge/maintain a feature etc. than to open a PR. Also, no one is entitled to comments/discussion or literally one second of my time as an OSS maintainer. This is imo the cancer that is eating open source.

pepijndevos · 7 months ago
ramalama seems to be trying; it's a Docker-based approach.
airstrike · 7 months ago
lolinder · 7 months ago
> No pre-built binaries yet! Use cargo to try it out

Not an equivalent yet, sorry.

buyucu · 7 months ago
llama.cpp has supported Vulkan for more than a year now. For more than 6 months there has been an open PR to add a Vulkan backend to Ollama. However, the Ollama team has not even looked at it or commented on it.

Vulkan backends are existential for running LLMs on consumer hardware (iGPUs especially). It's sad to see Ollama miss this opportunity.

Kubuxu · 7 months ago
Don’t be sad for a commercial entity that is not a good player: https://github.com/ggerganov/llama.cpp/pull/11016#issuecomme...
andy_ppp · 7 months ago
This is great, I did not know about RamaLama. I'll be using and recommending it from now on, and if I see people using Ollama in instructions I'll recommend they move to RamaLama. Cheers.
bearjaws · 7 months ago
It's hilarious that the Docker guys are trying to take another OSS project and monetize it. Hey, if it worked once?...
buyucu · 7 months ago
I was not aware of this context, thanks!
n144q · 7 months ago
Thanks, just yesterday I discovered that Ollama could not use the iGPU on my AMD machine, and was going through a long issue looking for solutions/workarounds (https://github.com/ollama/ollama/issues/2637). Existing instructions are based on Linux, and some people found it utterly surprising that anyone wants to run LLMs on Windows (really?). While I would have no trouble installing Linux and compiling from source, I wasn't ready to do that to my main, daily-use computer.

Great to see this.

PS. Have you got feedback on whether this works on Windows? If not, I can try to create a build today.

zozbot234 · 7 months ago
The PR has been legitimately out-of-date and unmergeable for many months. It was forward-ported a few weeks ago, and is now still awaiting formal review and merging. (To be sure, Vulkan support in Ollama will likely stay experimental for some time even if the existing PR is merged, and many setups will need manual adjustment of the number of GPU layers and such. It's far from 100% foolproof even in the best-case scenario!)

For that matter, some people are still having issues building and running it, as seen from the latest comments on the linked GitHub page. It's not clear that it's even in a fully reviewable state just yet.

buyucu · 7 months ago
This PR was made reviewable multiple times and rebased multiple times, all because the ollama team kept ignoring it. It has been open for almost 7 months now without a single comment from the ollama folks.
ecurtin · 7 months ago
It gets out of date with conflicts, etc., because it's ignored. If this had been opened against Ollama's upstream project, llama.cpp, the maintainers would have got it merged months ago.
9cb14c1ec0 · 7 months ago
The PR at issue here blocks iGPUs. My fork of the PR removes that restriction:

https://github.com/9cb14c1ec0/ollama-vulkan

I successfully ran Phi4 on my AMD Ryzen 7 PRO 5850U iGPU with it.

buyucu · 7 months ago
This is great! I think pufferfish is taking PRs on his fork as well.
Havoc · 7 months ago
ollama was good initially in that it made LLMs more accessible for non-technical people while everyone was figuring things out.

Lately they seem to be contributing mostly confusion to the conversation.

The #1 model the entire world is talking about is literally mislabeled on their side. There is no such thing as R1-1.5b. Quantizing without telling users also confuses noobs as to what is possible. Setting up an API different from the thing they're wrapping adds chaos. And claiming each feature added to llama.cpp as something "ollama now supports" is exceedingly questionable, especially when combined with the very sparse acknowledgement that it's a wrapper at all.

Whole thing just doesn't have good vibes
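
On the quantization point: one way to see what you were actually served is to ask the local API for the model's details. A hedged sketch (field names as in recent Ollama API docs; older builds used "name" instead of "model" in the request body):

    import requests

    # Ask the local Ollama daemon what the bare "deepseek-r1" tag resolves to.
    r = requests.post(
        "http://localhost:11434/api/show",
        json={"model": "deepseek-r1"},
        timeout=30,
    )
    details = r.json().get("details", {})
    # Typically reports a Qwen-family distill at a few billion parameters with a
    # 4-bit quantization level -- i.e. not the full 671b R1.
    print(details.get("family"), details.get("parameter_size"), details.get("quantization_level"))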

dingocat · 7 months ago
What do you mean there is no such thing as R1-1.5b? DeepSeek released a distilled version based on a 1.5B Qwen model with the full name DeepSeek-R1-Distill-Qwen-1.5B, see chapter 3.2 on page 14 of their research article [0].

[0] https://arxiv.org/abs/2501.12948

trissi1996 · 7 months ago
Which is not the same model; it's not R1, it's R1-Distill-Qwen-1.5B...
Havoc · 7 months ago
ollama labels the qwen models R1, while the "R1" moniker standing on its own in deepseek world means the full model that has nothing to do with qwen.

https://ollama.com/library/deepseek-r1

That might have been OK if it were just the same model at different sizes, but they're completely different things here, and it's created confusion out of thin air for absolutely no reason other than ollama being careless.

the_mitsuhiko · 7 months ago
Ollama needs competition. I’m not sure what drives the people that maintain it but some of their actions imply that there are ulterior motives at play that do not have the benefit of their users in mind.

However such projects require a lot of time and effort and it’s not clear if this project can be forked and kept alive.

Deathmax · 7 months ago
The most recent one off the top of my head is their horrendous aliasing of DeepSeek R1 on their model hub, misleading users into thinking they are running the full model, when really anything but the 671b alias is one of the distilled models. This has already led to lots of people claiming that they are running R1 locally when they are not.
TeMPOraL · 7 months ago
The whole DeepSeek-R1 situation gets extra confusing because:

- The distilled models are also provided by DeepSeek;

- There's also dynamic quants of (non-distilled) R1 - see [0]. Those, as I understand it, are more "real R1" than the distilled models, and you can get as low as ~140GB file size with the 1.58-bit quant.

I actually managed to get the 1.58-bit dynamic quant running on my personal PC, with 32GB RAM, at about 0.11 tokens per second. That is, roughly six tokens per minute. That was with llama.cpp via LM Studio; using Vulkan for GPU offload (up to 4 layers for my RTX 4070 Ti with 12GB VRAM :/) actually slowed things down relative to running purely on the CPU, but either way, it's too slow to be useful with such specs.

--

[0] - https://unsloth.ai/blog/deepseekr1-dynamic

adastra22 · 7 months ago
I'm not sure that's fair, given that the distilled models are almost as good. Do you really think Deepseek's web interface is giving you access to 671b? They're going to be running distilled models there too.
blixt · 7 months ago
LM Studio has been around for a long time and does a lot of similar things, but with a more UI-based approach. I used to use it before Ollama, and it seems it's still going strong. https://lmstudio.ai/
buyucu · 7 months ago
Isn't LM Studio closed source?
7thpower · 7 months ago
Can you please explain why you think they may be operating in bad faith?
diggan · 7 months ago
Not parent, but same feeling.

First I got the feeling because of how they store things on disk and try to get all models rehosted in their own closed library.

The second time I got the feeling was when I realized it's not obvious at all what their motives are, or that it's a for-profit venture.

The third time was trying to discuss things in their Discord, where the moderators constantly shut down conversation citing "Misinformation" and rewrite your messages. You can ask an honest question, it gets deleted, and you get blocked for a day.

Just today I asked why the R1 models they're shipping, which are the distilled ones, don't have "distilled" in the name, or even any way of knowing which tag is which model, and got the answer "if you don't like how things are done on Ollama, you can run your own object registry", which doesn't exactly inspire confidence.

Another thing I noticed after a while is that there are a bunch of people with zero knowledge of terminals who want to run Ollama, even though Ollama is a project for developers (since you do need to know how to use a terminal). Just making the messaging clearer would help a lot in this regard, but somehow the Ollama team thinks that's gatekeeping and that it's better to teach people basic terminal operations.

justinmayer · 7 months ago
Benefiting users is definitely not Ollama’s first priority, as seen when this pull request was summarily closed: https://github.com/jmorganca/ollama/pull/395

Those README changes only served to provide greater transparency to would-be users.

Ulterior motives, indeed.

prabir · 7 months ago
There is https://cortex.so/ that I’m looking forward to.
adastra22 · 7 months ago
Hey thanks, I didn't know about cortex and this looks perfect.
imtringued · 7 months ago
Ollama doesn't really need competition. Llama.cpp just needs a few usability updates to the gguf format so that you can specify a hugging face repository like you can do in vLLM already.
buyucu · 7 months ago
I totally agree that ollama needs competition. They have been doing very sketchy things lately. I wish llama.cpp had an alternative wrapper client like ollama.
Liquix · 7 months ago
Agreed. But what's wrong with Jan? Does ollama utilize resources / run models more efficiently under the hood? (Sorry for the naivete.)
benxh · 7 months ago
My biggest gripe with Ollama is the badly named models, e.g. under deepseek-r1, it defaults to the distill models.
buyucu · 7 months ago
I agree they should rename them.

But defaulting to a 671b model is also evil.

rfoo · 7 months ago
No. If you can't run it (and most people can never run the model on their laptop), that's fine; let people know that fact instead of giving them an illusion.
trash_cat · 7 months ago
I use Ollama because I am a casual user and can't be bothered to read the docs on how to set up llama.cpp. I just want to run a simple LLM locally.

Why would I care about Vulkan?

buyucu · 7 months ago
With Vulkan it runs much, much faster on consumer hardware, especially on iGPUs like Intel or AMD.
zozbot234 · 7 months ago
Well, it definitely runs faster on external dGPUs. With iGPUs and possibly future NPUs, the pre-processing/"thinking" phase is much faster (because that one is compute-bound), but text generation tends to be faster on the CPU because it makes better use of available memory bandwidth (which is the relevant constraint there). iGPUs and NPUs will still be a win wrt. energy use, however.
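
A back-of-envelope way to see the bandwidth argument: each generated token has to stream roughly the whole set of active weights through memory once, so memory bandwidth divided by model size gives a rough upper bound on generation speed no matter how much compute the iGPU adds. Purely illustrative numbers, not measurements:

    # Rough upper bound on decode speed when generation is memory-bandwidth-bound.
    model_bytes = 8e9   # assume an ~8 GB quantized model
    bandwidth = 60e9    # assume ~60 GB/s of system RAM bandwidth shared by CPU and iGPU
    print(f"decode upper bound ~ {bandwidth / model_bytes:.1f} tokens/s")  # ~7.5 tokens/s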
bdhcuidbebe · 7 months ago
For Intel, OpenVINO should be the preferred route. I don't follow AMD, but Vulkan is just the common denominator here.
sebazzz · 7 months ago
How is the performance of Vulkan vs ROCm on AMD iGPUs? Ollama can be persuaded to run on iGPUs with ROCm.
a12k · 7 months ago
Ollama is sketchy enough that I run it in a VM. Which is odd, because it would probably take less effort to just run llama.cpp directly, but VMs are pretty easy so I just went that route.

When I see people bring up the sketchiness, most of the time the creator responds with the equivalent of a shrug, which imo increases the sketchiness.

nialv7 · 7 months ago
It's fully open source. I mean yes it uses llama.cpp without giving it credit. But why run it in a VM?
a12k · 7 months ago
It severely over-permissions itself on my Mac.
instagary · 7 months ago
Isn't there a clause in the MIT license that says you're required to give credit? Also, I didn't know it was started by a YC company: https://www.ycombinator.com/companies/ollama.
krowek · 7 months ago
> But why run it in a VM?

Because you don't execute untrusted code on your machine without containerization/virtualization. Right?

n144q · 7 months ago
Care to elaborate what "sketchy" refers to here?
nicce · 7 months ago
> but VMs are pretty easy so just went that route.

Don’t you need at least 2 GPUs in that case, plus kernel-level passthrough?

a12k · 7 months ago
I don’t use a GPU. Works fine, but the large Mixtral models are slow.
bdhcuidbebe · 7 months ago
I pass through my dGPU to the VM and use the iGPU for the desktop.
buyucu · 7 months ago
ollama advertising llama.cpp features as their own is very dishonest in my opinion.
portaouflop · 7 months ago
That’s the curse and blessing of open source, I guess? I have billion-dollar companies running my OSS software without giving me anything - but do I gripe about it in public forums? Yeah, maybe sometimes, but it never helps to improve the situation.
adastra22 · 7 months ago
Welcome to open source.
mschwaig · 7 months ago
Ollama tries to appeal to a lowest-common-denominator user base that does not want to worry about stuff like configuration and quants, or which binary to download.

I think they want their project to be smart enough to just 'figure out what to do' on behalf of the user.

That appeals to a lot of people, but I think stuffing all backends into one binary and auto-detecting at runtime which one to use is actually a step too far towards simplicity.

What they did to support both CUDA and ROCm using the same binary looked quite cursed last time I checked (because they needed to link or invoke two different builds of llama.cpp of course).

I have only glanced at that PR, but I'm guessing that this plays a role in how many backends they can reasonably try to support.

In nixpkgs it's a huge pain: we quite deliberately configure what we want Ollama to do at build time, and then Ollama runs off and does whatever it wants anyway. Every time they update their heuristics for detecting ROCm, users have to look at log output and performance regressions to know what it's actually doing. It's brittle as hell.

buyucu · 7 months ago
I disagree with this, but it's a reasonable argument. The problem is that the Ollama team has basically ignored the PR instead of engaging the community. The least they could do is explain their reasoning.

This PR is #1 on their repo by multiple metrics (comments, iterations, what have you).