This is a rather small user base when it comes to enterprise.
Especially because Swift will never be as versatile as Python or as efficient as Rust.
And then there are also Go, C#, and Kotlin, with much better tooling.
I've built a macOS assistant too (a more advanced one, though), with a focus on privacy and ease of use (https://getfluid.app). I'd love to open-source it, but I'm not sure about the sustainability of such a business model. Right now I'm experimenting with fully private paid Llama hosting (for the GPU-poor).
Good luck :)
The roadmap is as follows:
- October - private remote AI (for when you need a smarter AI than your machine can handle, but don't want your data logged or stored anywhere)
- November - web search capabilities (the AI will be able to search the web out of the box)
- December - PDF, docs, and code embedding
- 2025 - tighter macOS integration with context awareness
https://future.mozilla.org/builders/news_insights/introducin...
https://github.com/Mozilla-Ocho/llamafile
They even have whisperfile now, which is the same thing but for whisper.cpp, i.e. real-time voice transcription.
You can also take this a step further and use this exact setup for a local-only co-pilot style code autocomplete and chat using Twinny. I use this every day. It's free, private, and offline.
https://github.com/twinnydotdev/twinny
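For anyone curious what talking to that local setup looks like in code: llamafile exposes an OpenAI-compatible HTTP API when run as a server. A minimal sketch below, assuming the default endpoint at http://localhost:8080/v1 (the port, the `"local"` model name, and the `chat` helper are assumptions for illustration, not from this thread):

```python
import json
import urllib.request

# Assumed default: a llamafile started locally serves an
# OpenAI-compatible API at this base URL.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(prompt, model="local", temperature=0.7):
    """Build the JSON body for a /chat/completions call."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt):
    """Send a prompt to the local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Usage (with a llamafile server running):
#   reply = chat("Summarize llamafile in one sentence.")
```

Everything stays on localhost, which is the whole appeal: no tokens, no logging, no network dependency once the model file is downloaded.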
Local LLMs are the only future worth living in.
Sorry for the blatant ad, though I do hope it's useful for some people reading this thread: https://getfluid.app