

ngxson commented on Vision Now Available in Llama.cpp   github.com/ggml-org/llama... · Posted by u/redman25
danielhanchen · 9 months ago
WAIT so GPU offloading is on by DEFAULT? Oh my fantastic! For now I have to "guess" via a Python script - i.e. I sum up the file sizes of all the .gguf split files, then detect CUDA memory usage, and specify approximately how many GPUs, i.e. --device CUDA0,CUDA1 etc.
ngxson · 9 months ago
Ahhh no sorry, I forgot that the actual code controlling this is inside llama-model.cpp; sorry for the misinfo. -ngl is only set to max by default if you're using the Metal backend.

(See the code inside llama_model_default_params())
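The guessing script danielhanchen describes can be sketched roughly like this (a hypothetical helper, not his actual code; the 1.2 overhead factor for KV cache and activations is an assumption):

```python
# Estimate how many GPUs are needed to hold a model split across .gguf shards,
# then build a --device string like "CUDA0,CUDA1". All sizes are in bytes.

def pick_devices(shard_sizes, vram_per_gpu, overhead=1.2):
    """Sum the shard file sizes, pad by an overhead factor for KV cache and
    activations, and return the minimal CUDA device list that fits."""
    total = sum(shard_sizes) * overhead
    n_gpus = max(1, -(-int(total) // vram_per_gpu))  # ceiling division
    return ",".join(f"CUDA{i}" for i in range(n_gpus))

# Example: 4 GiB shards on GPUs with 16 GiB of VRAM each.
GiB = 1024 ** 3
print(pick_devices([4 * GiB] * 3, 16 * GiB))  # CUDA0
print(pick_devices([4 * GiB] * 8, 16 * GiB))  # CUDA0,CUDA1,CUDA2
```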

ngxson commented on Vision Now Available in Llama.cpp   github.com/ggml-org/llama... · Posted by u/redman25
danielhanchen · 9 months ago
:)) I did have to update the chat template for Mistral - I did see your PR in llama.cpp for it - confusingly, the tokenizer_config.json file doesn't have a chat_template; it's rather in chat_template.jinja - I had to move the chat template into tokenizer_config.json, but I guess now with your fix it's fine :)
ngxson · 9 months ago
Ohhh nice to know! I was pretty sure that someone had already tried to fix the chat template haha, but because we also allow users to freely create their quants via the GGUF-my-repo space, I have to fix the quants produced from that source
ngxson commented on Vision Now Available in Llama.cpp   github.com/ggml-org/llama... · Posted by u/redman25
ngxson · 9 months ago
We also support the SmolVLM series, which delivers light-speed responses thanks to its mini size!

This is perfect for a real-time home video surveillance system. That's one of the ideas for my next hobby project!

    llama-server -hf ggml-org/SmolVLM-Instruct-GGUF
    llama-server -hf ggml-org/SmolVLM-256M-Instruct-GGUF
    llama-server -hf ggml-org/SmolVLM-500M-Instruct-GGUF
    llama-server -hf ggml-org/SmolVLM2-2.2B-Instruct-GGUF
    llama-server -hf ggml-org/SmolVLM2-256M-Video-Instruct-GGUF
    llama-server -hf ggml-org/SmolVLM2-500M-Video-Instruct-GGUF
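Once one of these servers is up, it listens on localhost:8080 by default and accepts OpenAI-style chat requests with inline images. A minimal sketch of building such a request body (the helper name and the JPEG assumption are mine, not part of llama.cpp):

```python
import base64

def vision_request(prompt, image_path):
    """Build an OpenAI-style chat request with an inline base64 image,
    as accepted by llama-server's /v1/chat/completions endpoint."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }]
    }
```

POST the returned dict as JSON to http://localhost:8080/v1/chat/completions with any HTTP client.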

ngxson commented on Vision Now Available in Llama.cpp   github.com/ggml-org/llama... · Posted by u/redman25
danielhanchen · 9 months ago
OH WHAT! So just -ngl? Oh also, do you know if it's possible to automatically do 1 GPU then the next (i.e. sequential)? I have to manually set --device CUDA0 for smallish models, and probably distributing it amongst, say, all GPUs causes communication overhead!
ngxson · 9 months ago
Ah no, I mean we can omit the whole "-ngl N" argument now, as it is internally set to -1 by default in the C++ code (instead of 0 traditionally), and -1 means offload everything to the GPU

I have no idea how to specify custom layer specs with multi GPU, but that is interesting!
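The -ngl semantics described above can be sketched like this (a simplification in Python, not the actual C++ logic in llama_model_default_params()):

```python
def layers_to_offload(n_gpu_layers, n_model_layers):
    """-1 (the new default) means offload everything; otherwise offload
    at most the requested number of layers."""
    if n_gpu_layers < 0:
        return n_model_layers
    return min(n_gpu_layers, n_model_layers)

print(layers_to_offload(-1, 32))  # 32: the whole model goes to the GPU
print(layers_to_offload(99, 32))  # 32: -ngl 99 also covers the whole model
print(layers_to_offload(10, 32))  # 10: partial offload
```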

ngxson commented on Vision Now Available in Llama.cpp   github.com/ggml-org/llama... · Posted by u/redman25
simonw · 9 months ago
llama.cpp offers compiled releases for multiple platforms. This release has the new vision features: https://github.com/ggml-org/llama.cpp/releases/tag/b5332

On macOS I downloaded the llama-b5332-bin-macos-arm64.zip file and then had to run this to get it to work:

  unzip llama-b5332-bin-macos-arm64.zip
  cd build/bin
  sudo xattr -rd com.apple.quarantine llama-server llama-mtmd-cli *.dylib
Then I could run the interactive terminal (with a 3.2GB model download) like this (borrowing from https://news.ycombinator.com/item?id=43943370)

  ./llama-mtmd-cli -hf unsloth/gemma-3-4b-it-GGUF:Q4_K_XL -ngl 99
Or start the localhost 8080 web server (with a UI and API) like this:

  ./llama-server -hf unsloth/gemma-3-4b-it-GGUF:Q4_K_XL -ngl 99
I wrote up some more detailed notes here: https://simonwillison.net/2025/May/10/llama-cpp-vision/

ngxson · 9 months ago
And btw, -ngl is automatically set to the max value now, so you don't need -ngl 99 anymore!

Edit: sorry this is only true on Metal. For CUDA or other GPU backends, you still need to manually specify -ngl

ngxson commented on Vision Now Available in Llama.cpp   github.com/ggml-org/llama... · Posted by u/redman25
danielhanchen · 9 months ago
By the way - fantastic work again on llama.cpp vision support - keep it up!!
ngxson · 9 months ago
Thanks Daniel! Kudos on your great work on quantization; I use the Mistral Small IQ2_M from unsloth during development and it works very well!!
ngxson commented on Vision Now Available in Llama.cpp   github.com/ggml-org/llama... · Posted by u/redman25

ngxson · 9 months ago
For brew users, you can specify --HEAD when installing the package. This way, brew will automatically build the latest master branch.

Btw, the brew version will be updated in the next few hours, so after that you will be able to simply run "brew upgrade llama.cpp" and you will be good to go!

ngxson commented on Vision Now Available in Llama.cpp   github.com/ggml-org/llama... · Posted by u/redman25
nico · 9 months ago
How does this compare to using a multimodal model like gemma3 via ollama?

Any benefit on a Mac with apple silicon? Any experiences someone could share?

ngxson · 9 months ago
Two things:

1. Because the support in llama.cpp is horizontally integrated within the ggml ecosystem, we can optimize it to run even faster than ollama.

For example, the pixtral/mistral small 3.1 models use a 2D-RoPE trick that needs less memory than ollama's implementation. The same goes for flash attention (which will be added very soon): it will allow the vision encoder to run faster while using less memory.

2. llama.cpp simply supports more models than ollama. For example, ollama supports neither pixtral nor smolvlm
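As a toy illustration of the 2D-RoPE idea mentioned in point 1 (rotating by the patch's row and column positions instead of a single sequence position), not llama.cpp's actual kernel:

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    """Standard 1D RoPE: rotate consecutive pairs of `vec` by angles
    that depend on the position `pos` and the pair index."""
    out = []
    d = len(vec)
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

def rope_2d(vec, row, col):
    """2D RoPE for image patches: the first half of the head dimension is
    rotated by the patch's row index, the second half by its column index."""
    h = len(vec) // 2
    return rope_rotate(vec[:h], row) + rope_rotate(vec[h:], col)

# Rotations preserve the vector's norm, so no extra scale state is needed.
print(rope_2d([1.0, 2.0, 3.0, 4.0], 0, 0))  # identity at position (0, 0)
```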

ngxson commented on Promising results from DeepSeek R1 for code   simonwillison.net/2025/Ja... · Posted by u/k__
ngxson · a year ago
Hi I'm Xuan-Son,

Small correction: I'm not just asking it to convert ARM NEON to SIMD; for the function handling q6_K_q8_K, I asked it to invent a new approach (without giving it any prior examples). The reason I did that was that it had failed to write this function 4 times so far.

And a bit of context here: I was doing this during my Sunday, and the time budget was 2 days to finish.

I wanted to optimize wllama (a wasm wrapper for llama.cpp that I maintain) to run the deepseek distill 1.5B faster. Wllama is entirely a weekend project and I can never spend more than 2 consecutive days on it.

Between the 2 choices: (1) take the time to do it myself and maybe give up, or (2) try prompting the LLM to do it and maybe give up (at worst, it just gives me a hallucinated answer), I chose the second option since I was quite sleepy.

So yeah, it turned out to be a great success in the given context. It just does its job and saved my weekend.

Some of you may ask: why not try ChatGPT or Claude in the first place? Well, the short answer is: my input was too long, and these platforms straight up refused to give me an answer :)
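For readers unfamiliar with these kernels, here is a much-simplified picture of the kind of block-quantized dot product a q6_K × q8_K function computes: integer math inside a block, a single scale multiply at the end. Real k-quant blocks use 256 elements, sub-block scales, and 6-bit packing that this sketch omits:

```python
def quantize_q8(block):
    """Symmetric 8-bit quantization of one block: store one float scale
    plus small integer values, mirroring the general shape of k-quant blocks."""
    amax = max(abs(x) for x in block) or 1.0
    scale = amax / 127.0
    return scale, [round(x / scale) for x in block]

def qdot(a_block, b_block):
    """Dot product of two quantized blocks: one integer accumulation
    followed by a single multiply by both scales."""
    sa, qa = quantize_q8(a_block)
    sb, qb = quantize_q8(b_block)
    return sa * sb * sum(x * y for x, y in zip(qa, qb))

a = [0.1, -0.5, 0.25, 0.9]
b = [1.0, 0.5, -0.25, 0.125]
exact = sum(x * y for x, y in zip(a, b))
print(exact, qdot(a, b))  # the quantized result tracks the exact one closely
```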
