Also, we've open-sourced RCLI, the fastest end-to-end voice AI pipeline on Apple Silicon. Mic to spoken response, entirely on-device. No cloud, no API keys.
To get started:
brew tap RunanywhereAI/rcli https://github.com/RunanywhereAI/RCLI.git
brew install rcli
rcli setup # downloads ~1 GB of models
rcli # interactive mode with push-to-talk
Or: curl -fsSL https://raw.githubusercontent.com/RunanywhereAI/RCLI/main/install.sh | bash
The numbers (M4 Max, 64 GB, reproducible via `rcli bench`):
LLM decode – 1.67x faster than llama.cpp, 1.19x faster than Apple MLX (same model files):
- Qwen3-0.6B: 658 tok/s (vs mlx-lm 552, llama.cpp 295)
- Qwen3-4B: 186 tok/s (vs mlx-lm 170, llama.cpp 87)
- LFM2.5-1.2B: 570 tok/s (vs mlx-lm 509, llama.cpp 372)
- Time-to-first-token: 6.6 ms
STT – 70 seconds of audio transcribed in *101 ms*. That's 714x real-time. 4.6x faster than mlx-whisper.
TTS – 178 ms synthesis. 2.8x faster than mlx-audio and sherpa-onnx.
We built this because demoing on-device AI is easy but shipping it is brutal. Voice is the hardest test: you're chaining STT, LLM, and TTS sequentially, and if any stage is slow, the user feels it. Most teams fall back to cloud APIs not because local models are bad, but because local inference infrastructure is.
The thing that's hard to solve is latency compounding. In a voice pipeline, you're stacking three models in sequence. If each adds 200ms, you're at 600ms before the user hears a word, and that feels broken. You can't optimize one stage and call it done. Every stage needs to be fast, on one device, with no network round-trip to hide behind.
We went straight to Metal. Custom GPU compute shaders, all memory pre-allocated at init (zero allocations during inference), and one unified engine for all three modalities instead of stitching separate runtimes together.
MetalRT is the first engine to handle all three modalities natively on Apple Silicon. Full methodology:
LLM benchmarks: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...
Speech benchmarks: https://www.runanywhere.ai/blog/metalrt-speech-fastest-stt-t...
How: Most inference engines add layers between you and the GPU: graph schedulers, runtime dispatchers, memory managers. MetalRT skips all of it. Custom Metal compute shaders for quantized matmul, attention, and activation - compiled ahead of time, dispatched directly.
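To make that concrete, here's a minimal Swift sketch of the pattern, illustrative only and not MetalRT's code: the `q4_matmul` kernel name, buffer sizes, and thread counts are made-up placeholders. The point is that the pipeline state is compiled and every buffer allocated once at init, so a decode step is nothing but encode and dispatch.

    import Metal

    // Illustrative only: compile the pipeline state ahead of time, allocate every
    // buffer at init, then a decode step is just encode + dispatch.
    final class DecodeKernel {
        let queue: MTLCommandQueue
        let pipeline: MTLComputePipelineState
        let weights: MTLBuffer      // allocated once at init, never reallocated
        let activations: MTLBuffer  // reused every token, zero allocations at runtime

        init(device: MTLDevice, library: MTLLibrary) throws {
            queue = device.makeCommandQueue()!
            let fn = library.makeFunction(name: "q4_matmul")!   // hypothetical kernel name
            pipeline = try device.makeComputePipelineState(function: fn)
            weights = device.makeBuffer(length: 256 << 20, options: .storageModeShared)!
            activations = device.makeBuffer(length: 4 << 20, options: .storageModeShared)!
        }

        // One decode step: no graph scheduler or runtime dispatcher in between.
        func step() {
            let cmd = queue.makeCommandBuffer()!
            let enc = cmd.makeComputeCommandEncoder()!
            enc.setComputePipelineState(pipeline)
            enc.setBuffer(weights, offset: 0, index: 0)
            enc.setBuffer(activations, offset: 0, index: 1)
            enc.dispatchThreadgroups(MTLSize(width: 64, height: 1, depth: 1),
                                     threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
            enc.endEncoding()
            cmd.commit()
            cmd.waitUntilCompleted()
        }
    }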
Voice pipeline optimization details: https://www.runanywhere.ai/blog/fastvoice-on-device-voice-ai...
RAG optimizations: https://www.runanywhere.ai/blog/fastvoice-rag-on-device-retr...
RCLI is the open-source voice pipeline (MIT) built on MetalRT: three concurrent threads with lock-free ring buffers, double-buffered TTS, 38 macOS actions by voice, local RAG (~4 ms over 5K+ chunks), 20 hot-swappable models, and a full-screen TUI with per-op latency readouts. Falls back to llama.cpp when MetalRT isn't installed.
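For a feel of how the stages hand data off without blocking each other, here's an illustrative single-producer/single-consumer ring buffer in Swift. This is a sketch, not RCLI's implementation; it assumes the apple/swift-atomics package and a power-of-two capacity, and a production version would use unsafe raw storage instead of a Swift Array.

    import Atomics   // apple/swift-atomics package
    import Foundation

    // Illustrative SPSC ring buffer: one thread pushes, one thread pops, no locks.
    final class SPSCRingBuffer<T> {
        private var slots: [T?]
        private let mask: Int
        private let head = ManagedAtomic<Int>(0)   // advanced only by the consumer
        private let tail = ManagedAtomic<Int>(0)   // advanced only by the producer

        init(capacityPowerOfTwo capacity: Int) {
            precondition(capacity > 0 && capacity & (capacity - 1) == 0)
            slots = Array(repeating: nil, count: capacity)
            mask = capacity - 1
        }

        // Producer thread only.
        func push(_ value: T) -> Bool {
            let t = tail.load(ordering: .relaxed)
            if t - head.load(ordering: .acquiring) == slots.count { return false }  // full
            slots[t & mask] = value
            tail.store(t + 1, ordering: .releasing)
            return true
        }

        // Consumer thread only.
        func pop() -> T? {
            let h = head.load(ordering: .relaxed)
            if h == tail.load(ordering: .acquiring) { return nil }  // empty
            let value = slots[h & mask]
            slots[h & mask] = nil
            head.store(h + 1, ordering: .releasing)
            return value
        }
    }

Each stage owns one end of a buffer like this: STT pushes transcripts the LLM thread pops, the LLM pushes text the TTS thread pops, so no stage ever waits on a mutex held by another.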
Source: https://github.com/RunanywhereAI/RCLI (MIT)
Demo: https://www.youtube.com/watch?v=eTYwkgNoaKg
What would you build if on-device AI were genuinely as fast as cloud?
How does the RAG fit in? Voice-to-RAG seems a bit random as a feature.
I don’t mean to come across as dismissive, I’m genuinely confused as to what you’re offering.
Right now, our focus is Apple Silicon.
Today there are two parts:
MetalRT - our proprietary inference engine for Apple Silicon. It speeds up local LLM, speech-to-text, and text-to-speech workloads. We’re expanding model coverage over time, with more modalities and broader support coming next.
RCLI - our open-source CLI that shows this in practice. You can talk to your Mac, query local docs, and trigger actions, all fully on-device.
So the simplest way to think about us is: we’re building the runtime / infrastructure layer for on-device AI, and RCLI is one example of what that enables.
Longer term, we want to bring the same approach to more chips and device types, not just Apple Silicon.
For people asking whether the speedups are real, we've published our benchmark methodology and results here:
LLM: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...
Speech: https://www.runanywhere.ai/blog/metalrt-speech-fastest-stt-t...
These 0.6B-4B models are, frankly, amusing curiosities, commonly regarded as too error-prone for any non-demo work.
The reason people are buying Apple Silicon today is that the unified memory lets them run larger models that are cost-prohibitive otherwise (usually requiring Nvidia server GPUs). It would be much more interesting to see benchmarks for things like Qwen3.5-122B-A10B, GLM-5, or any dense model in the 20B+ range. Thanks.
RunAnywhere is an inference company. We build the runtime layer for on-device AI.
There are two pieces:
MetalRT, a proprietary GPU inference engine for Apple Silicon. It runs LLMs, speech-to-text, and text-to-speech faster than anything else available (benchmarks: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...). This is our core product.
RCLI, an open-source CLI (MIT) that demonstrates what MetalRT enables. It wires STT + LLM + TTS into a real voice pipeline with 43 macOS actions, local RAG, and a TUI. Think of it as the reference application built on top of the engine.
On RAG specifically: voice + document Q&A is a natural pairing for on-device use cases. You have sensitive documents you don't want to upload to the cloud; you ingest them locally and then ask questions by voice. The retrieval runs at ~4 ms over 5K+ chunks, so it feels instant in the voice pipeline. It's not random; it's one of the strongest privacy arguments for running everything locally.
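To see why ~4 ms is plausible, here's an illustrative brute-force pass in Swift; it's not RCLI's code, and the 384-dim embeddings and top-4 cutoff are assumptions. 5K chunks at 384 dims is only about 2M multiply-adds, well within a few milliseconds even on CPU.

    // Illustrative brute-force retrieval over pre-embedded chunks.
    struct Chunk {
        let text: String
        let embedding: [Float]   // e.g. 384-dim, L2-normalized at ingest time
    }

    func topK(query: [Float], chunks: [Chunk], k: Int = 4) -> [Chunk] {
        // Cosine similarity reduces to a dot product on normalized vectors.
        func dot(_ a: [Float], _ b: [Float]) -> Float {
            var s: Float = 0
            for i in 0..<a.count { s += a[i] * b[i] }
            return s
        }
        return chunks
            .map { (score: dot(query, $0.embedding), chunk: $0) }
            .sorted { $0.score > $1.score }
            .prefix(k)
            .map { $0.chunk }
    }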
The longer-term vision is bringing MetalRT to more chips and platforms, so any developer can get cloud-competitive inference on-device with minimal integration effort.
Seems pretty clear. You can supply documents to the model as input and then verbally ask questions about them.
Quick request: unsloth quants; bit-for-bit they're usually better. Or, more generally, a UI for Hugging Face model selection. I understand you won't be able to serve everything, but I want to mix and match!
Also - grounding:
"open safari" (safari opens, voice says: "I opened safari") "navigate to google.com in safari" (nothing happens, voice says: "I navigated to google.com")
Anyway, really fun.
On unsloth quants: agreed, they're consistently better bit-for-bit. Adding broader quantization format support (including unsloth's approach) is on the roadmap. Right now MetalRT works with MLX 4-bit files and GGUF Q4_K_M, we want to expand that.
On the grounding issue ("navigate to google.com" not actually navigating): you're right, that's a gap. The "open_url" action exists but the LLM doesn't always route to it correctly, especially with compound commands. Small models (0.6B-1.2B) have limited tool-calling accuracy; upgrading to Qwen3.5 4B via rcli upgrade-llm helps significantly. We're also improving the action-routing prompts.
Appreciate the detailed feedback, this is exactly what we need.
So you're describing a core broken feature: the application breaks at the easiest test.
This is a known limitation with small LLMs (0.6B-1.2B) doing tool calling. They sometimes confuse "I know what you want" with "I did it." Upgrading to a larger model improves tool-calling accuracy significantly.
We're also working on verification: having the pipeline confirm the action actually succeeded before reporting back. That's a fair expectation and we should meet it.
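As a sketch of what that verification could look like (hypothetical, not what ships in RCLI today), an app-launch action could be checked against the OS before the model is allowed to claim success; the bundle ID and timeout below are just example values.

    import AppKit

    // Illustrative post-action check for an "open app" tool call.
    func verifyAppOpened(bundleID: String, timeout: TimeInterval = 3.0) -> Bool {
        let deadline = Date().addingTimeInterval(timeout)
        while Date() < deadline {
            // Ask the OS whether the target app is actually running now.
            if NSWorkspace.shared.runningApplications.contains(where: { $0.bundleIdentifier == bundleID }) {
                return true
            }
            Thread.sleep(forTimeInterval: 0.1)
        }
        return false   // report the failure instead of saying "I opened it"
    }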
Feel free to open a PR or issue on the RCLI repo and we'll coordinate.
What about MetalRT's relationship to llama.cpp, onnx, MLX, transformers, etc? Is MetalRT a replacement for those? Designed to be compatible with a wide variety of model formats? Or are you just providing an abstraction on top of these?
I think this has to be the future for AI tools to really be truly useful. The things that are truly powerful are not general purpose models that have to run in the cloud, but specialized models that can run locally and on constrained hardware, so they can be embedded.
I'd love to see this added in-path as an audio passthrough device, so you can get native on-device transcription in any application that handles audio, such as video conferencing apps.
MetalRT's STT numbers make this feasible: 70 seconds of audio transcribed in 101ms means you could process audio chunks in real-time with massive headroom. The latency would be imperceptible.
We haven't built this yet but it's a compelling use case. CoreAudio supports virtual audio devices (aggregate devices) that could pipe audio through the pipeline. If anyone in this thread has experience building macOS audio HAL plugins and wants to collaborate, we're very open to contributions; RCLI is MIT.
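As a rough starting point for the capture side (illustrative only, nothing we ship; `transcribe` is a placeholder for whatever STT call you'd wire in), AVAudioEngine can tap an input device and hand fixed-size chunks to the transcriber. The virtual-device half is the harder part.

    import AVFoundation

    // Illustrative capture loop: tap an audio input and hand chunks to a transcriber.
    func startCapture(transcribe: @escaping ([Float]) -> Void) throws {
        let engine = AVAudioEngine()
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            let samples = Array(UnsafeBufferPointer(start: channel,
                                                    count: Int(buffer.frameLength)))
            transcribe(samples)   // in practice you'd accumulate into larger windows
        }
        try engine.start()
    }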
I have never written audio drivers on macOS, but it may be worth exploring to see if I can make this happen. I really appreciate high-quality AI transcripts in my meetings, but right now only Webex has good transcription, and a lot of meetings use other services like MS Teams, Zoom, Meet, et al.
Before I install, is there any telemetry enabled here or is this entirely local by default?
Expanding to larger models (7B, 14B, 32B) on machines with more unified memory is on the roadmap. The Mac Studio with 192GB would be an interesting target: a 32B model at 4-bit is roughly 18 GB of weights, so it would fit comfortably, and MetalRT's architectural advantages (fused kernels, minimal dispatch overhead) should scale well.
What model / use case are you thinking about? That helps us prioritize.
Either way, this is a tremendous achievement and it's extremely relevant in the OpenClaw world where I might not want to have sensitive information leave my computer.
MetalRT is built on the public Metal API. The performance comes from how we use the GPU, not from accessing anything Apple doesn't document.
We specifically chose to stay on public APIs so that MetalRT works on any Apple Silicon Mac without special entitlements or SIP workarounds. This also means it's App Store compatible for future macOS/iOS distribution.
The results speak for themselves: 1.1-1.19x faster than Apple's own MLX on identical model files, 4.6x faster on STT, 2.8x faster on TTS. Full methodology published here: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...
Appreciate the kind words, the "OpenClaw world" framing is exactly why we built this.