For instance, they'd take a prisoner out, film the prologue to an execution video, unsheathe a knife or rack a shotgun, and then... they wouldn't perform the execution itself. "It's just for show," they'd tell their target as they returned him to his cell. Then, one day, after calmly filming yet another prologue, they'd swiftly and gruesomely execute their prisoner, who had been lulled into a certain docility and wasn't expecting them to actually go through with it.
As far as I know, all of the astronauts were military at that time, so they probably would have been covered by this program. There could be any number of nuances I'm not aware of, though.
[1] https://benefits.va.gov/benefits/infographics/pdfs/timeline_...
- Complete Whisper ecosystem (99+ languages, word timestamps, any audio format)
- 23 embedding models across 13 families (E5, ModernBERT, Arctic, etc.)
- Mistral Small 24B with vision capabilities
- OpenAI-compatible API that's actually faster than Ollama on Apple Silicon
The goal was simple: I wanted to use my Mac Mini/Studio as proper inference servers without the complexity of managing Python environments or paying for cloud APIs, while keeping data local. It's packaged as a native macOS app (no Python install needed) with a beautiful web GUI for model management. The API is drop-in compatible with OpenAI, so existing apps like Jan.ai work immediately. 900+ lines of tests ensure production reliability. GNU GPL v3 licensed and actively maintained.

GitHub: https://github.com/RamboRogers/mlx-gui
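To illustrate what "drop-in compatible with OpenAI" means in practice, here's a minimal sketch of talking to the server with nothing but the Python standard library. The host, port, and model name are assumptions for illustration (check the app for the actual address and your loaded model names); the request/response shapes follow the standard OpenAI chat completions format.

```python
import json
from urllib import request

# Assumed local endpoint -- mlx-gui exposes an OpenAI-compatible API,
# but the host/port here are placeholders; check the app for the real ones.
BASE_URL = "http://localhost:8000/v1"

def chat_payload(model: str, prompt: str) -> bytes:
    """Build an OpenAI-style /chat/completions request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()

def chat(model: str, prompt: str) -> str:
    """POST the payload and pull the reply text out of the
    standard OpenAI response shape: choices[0].message.content."""
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=chat_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the wire format matches OpenAI's, existing clients only need their base URL pointed at the local server; no other code changes are required.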
Would love feedback from the community - especially on the embedding pipeline and audio processing!
https://dl.acm.org/doi/10.1145/1456625.1456635