Check it out here: https://models.hathora.dev/model/qwen3-omni
Was it obvious to you from the article that this is closed-weight? Trying to understand why I was confused. I hadn't seen the "Flash" designation before.
Also, can a 30B model really beat a semi-recent 235B with just some additional training?
For the evals, it was probably just trained on a lot more benchmark-adjacent data than the 235B model. A similar thing happened with another model today: https://x.com/NousResearch/status/1998536543565127968 (a 30B model trained specifically to do well at maths gets near-SOTA scores).
I've seen it in their online materials too but can't seem to find it now.
Their benchmark table shows it beating Qwen3-235B-A22B
Does "Flash" in the name of a Qwen model indicate a model-as-a-service and not open weights?
Are there any open-weight models that do? Not talking about a speech-to-text -> LLM -> text-to-speech pipeline btw, I mean a real voice <-> language model (a sketch of that cascaded pipeline is below, for contrast).
edit:
It does support real-time conversation! Has anybody here gotten that to work on local hardware? I'm particularly curious whether anybody has run it on a non-NVIDIA setup.
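For contrast, the cascaded approach being ruled out looks roughly like this (toy sketch; the model names are just examples I picked, not anything Qwen ships). A native voice <-> language model replaces all three stages with one network that consumes and emits audio directly, which is why it can keep prosody and respond with low enough latency for real-time conversation.

```python
# Toy cascaded pipeline (STT -> LLM -> TTS), for contrast with a native
# speech<->language model. Model names are illustrative examples only.
from transformers import pipeline
import scipy.io.wavfile as wavfile

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
llm = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
tts = pipeline("text-to-speech", model="suno/bark-small")

def cascaded_turn(wav_path: str, out_path: str = "reply.wav") -> str:
    text_in = asr(wav_path)["text"]        # audio -> text (drops prosody, emotion, timing)
    reply = llm(text_in, max_new_tokens=128, return_full_text=False)[0]["generated_text"]
    speech = tts(reply)                    # text -> audio (no awareness of the input voice)
    wavfile.write(out_path, speech["sampling_rate"], speech["audio"].squeeze())
    return reply

# print(cascaded_turn("question.wav"))
```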
* Hybrid MoE: 2-3x faster than pure MoE transformers (toy sketch of the idea after this list)
* 1M context length
* Trained in NVFP4
* Open source! Pretraining, mid-training, SFT, and RL datasets released (SFT HF link is a 404...)
* Open model training recipe (coming soon)
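To make "hybrid MoE" concrete, here's how I read it (toy sketch, not NVIDIA's actual architecture): most layers use a cheap linear-time token mixer in place of quadratic self-attention, and every layer's FFN is a sparsely routed top-k mixture of experts.

```python
# Toy hybrid-MoE stack: only some blocks pay for full self-attention, the rest use a
# linear-time mixer (stand-in for Mamba/SSM); every block uses a top-k routed MoE FFN.
import torch
import torch.nn as nn

class MoEFFN(nn.Module):
    def __init__(self, d, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d, n_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
             for _ in range(n_experts)]
        )
        self.top_k = top_k

    def forward(self, x):                          # x: (batch, seq, d)
        scores = self.router(x)                    # (B, T, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                # send each token to its k-th chosen expert
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

class HybridBlock(nn.Module):
    def __init__(self, d, use_attention):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.use_attention = use_attention
        # Depthwise conv as a cheap O(T) stand-in for an SSM/Mamba mixer in this toy.
        self.mixer = (nn.MultiheadAttention(d, 4, batch_first=True)
                      if use_attention else nn.Conv1d(d, d, 3, padding=1, groups=d))
        self.ffn = MoEFFN(d)

    def forward(self, x):
        h = self.norm1(x)
        if self.use_attention:
            h, _ = self.mixer(h, h, h, need_weights=False)
        else:
            h = self.mixer(h.transpose(1, 2)).transpose(1, 2)
        x = x + h
        return x + self.ffn(self.norm2(x))

# Only 1 in 4 blocks pays the quadratic attention cost; the rest scale linearly in seq length.
model = nn.Sequential(*[HybridBlock(256, use_attention=(i % 4 == 0)) for i in range(8)])
tokens = torch.randn(2, 128, 256)
print(model(tokens).shape)   # torch.Size([2, 128, 256])
```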
Really appreciate Nvidia being the most open lab, but they really should make sure all the links/data are available on day 0.
Also interesting that the model is trained in NVFP4 but the inference weights are FP8.
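Rough illustration of the gap between the two formats (toy code based on my understanding of NVFP4, not NVIDIA's kernels): NVFP4 stores each weight as a 4-bit E2M1 value plus a shared scale per small block, so there are only 16 representable levels within a block, whereas FP8 E4M3 has far more levels. Releasing FP8 inference weights from an NVFP4 training run therefore means shipping a wider format than the one training used.

```python
# Toy NVFP4-like quantizer: round each block of weights onto the E2M1 (FP4) grid after
# per-block scaling. Real NVFP4 stores FP8 scale factors per 16-element block; this is
# only an illustration of how coarse the 4-bit grid is.
import numpy as np

# The 8 non-negative values representable in E2M1 (1 sign, 2 exponent, 1 mantissa bit).
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
GRID = np.concatenate([-E2M1[::-1], E2M1])      # full signed grid

def quantize_nvfp4_like(w, block=16):
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / E2M1.max()   # toy per-block scale
    scaled = w / np.where(scale == 0, 1, scale)
    q = GRID[np.abs(scaled[..., None] - GRID).argmin(axis=-1)]  # round-to-nearest on the grid
    return (q * scale).ravel()

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=4096)
err = np.abs(quantize_nvfp4_like(w) - w)
print(f"mean abs quantization error: {err.mean():.2e} for weights ~ N(0, 0.02)")
```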