Readit News
olokobayusuf commented on Kubetorch – For RL and ML on Kubernetes   run.house/blog/announcing... · Posted by u/py_py
olokobayusuf · 4 months ago
Congrats on the launch!
olokobayusuf commented on Launch HN: LlamaFarm (YC W22) – Open-source framework for distributed AI   github.com/llama-farm/lla... · Posted by u/mhamann
mhamann · 4 months ago
Oh! Muna looks cool as well! I've just barely glanced at your docs page so far, but I'm definitely going to explore further. One of the biggest issues in the back of our minds is getting models running on a variety of hardware and platforms. Right now, we're just using Ollama with support for Lemonade coming soon. But both of these will likely require some manual setup before deploying LlamaFarm.
olokobayusuf · 4 months ago
We should collab! We prefer to be the underlying infrastructure behind the scenes, and have a pretty holistic approach towards hardware coverage and performance optimization.

Read more:

- https://blog.codingconfessions.com/p/compiling-python-to-run...

- https://docs.muna.ai/predictors/ai#inference-backends

olokobayusuf commented on Launch HN: LlamaFarm (YC W22) – Open-source framework for distributed AI   github.com/llama-farm/lla... · Posted by u/mhamann
singlepaynews · 4 months ago
Very cool. I jumped in here thinking it was gonna be something else though: a packaged service for distributing on-prem model running across multiple GPUs.

I'm basically imagining a vast.ai-type deployment of an on-prem GPT: assuming most infra is consumer GPUs on consumer devices, the idea is to run the "company cluster" as the combined compute of the company's machines.

olokobayusuf · 4 months ago
We're building something closer to this at Muna: https://docs.muna.ai . Check us out and let me know what you think!
olokobayusuf commented on Launch HN: LlamaFarm (YC W22) – Open-source framework for distributed AI   github.com/llama-farm/lla... · Posted by u/mhamann
olokobayusuf · 4 months ago
This is super interesting! I'm the founder of Muna (https://docs.muna.ai) with much of the same underlying philosophy, but a different approach:

We're building a general purpose compiler for Python. Once compiled, developers can deploy across Android, iOS, Linux, macOS, Web (wasm), and Windows in as little as two lines of code.

Congrats on the launch!

olokobayusuf commented on Python developers are embracing type hints   pyrefly.org/blog/why-type... · Posted by u/ocamoss
franktankbank · 5 months ago
I understood some of it. Sounds reasonable if your market already is running a limited subset of the language, but I guess there is a lot of custom bullshit you actually wind up maintaining.
olokobayusuf · 5 months ago
Yup that's true. We do benefit from massive efficiencies though, thanks to LLM codegen.
olokobayusuf commented on Python developers are embracing type hints   pyrefly.org/blog/why-type... · Posted by u/ocamoss
franktankbank · 5 months ago
Have you talked to anyone about where this flat out will not work? Obviously it will work in simple cases but someone with good language understanding will probably be able to point out cases where it just won't. I didn't read your blog so apologies if this is covered. How does this compiler fit into your company business plan?
olokobayusuf · 5 months ago
Our primary use case is cross-platform AI inference (unsurprisingly), and for that we're already used in production by customers ranging from startups to larger co's.

It's kind of funny: our compiler currently doesn't support classes, but we support many kinds of AI models (vision, text generation, TTS). This is mainly because math, tensor, and AI libraries are almost always written with a functional paradigm.
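
For illustration, a toy example of that functional style (not our actual code): an inference entrypoint is just a function over arrays, so a compiler that handles functions but not classes still covers it.

    import numpy as np
    from numpy.typing import NDArray

    def classify(logits: NDArray[np.float32]) -> int:
        """Return the argmax class from raw model logits (a pure function, no classes)."""
        exp = np.exp(logits - logits.max())  # numerically stable softmax
        probs = exp / exp.sum()
        return int(np.argmax(probs))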

Business plan is simple: we charge per endpoint that downloads and executes the compiled binary. In the AI world, this removes a large multiplier in cost structure (paying per token). Beyond that, we help co's find, eval, deploy, and optimize models (more enterprise-y).

olokobayusuf commented on Python developers are embracing type hints   pyrefly.org/blog/why-type... · Posted by u/ocamoss
olokobayusuf · 5 months ago
I'm founding a company that is building an AOT compiler for Python (Python -> C++ -> object code). It works by propagating type information through a Python function, and that propagation is seeded by the type hints on the function being compiled:

https://blog.codingconfessions.com/i/174257095/lowering-to-c...
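
A toy illustration of the idea (heavily simplified): only the compiled entrypoint needs annotations, and types flow from its call sites into untyped helpers.

    def scale(x, factor):                      # no hints needed here...
        return x * factor                      # ...types are inferred from the call site

    def fahrenheit(celsius: float) -> float:   # hints here seed the propagation
        return scale(celsius, 1.8) + 32.0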

olokobayusuf commented on Ask HN: Who is hiring? (June 2025)    · Posted by u/whoishiring
olokobayusuf · 8 months ago
Function (https://fxn.ai) | Remote (US)

We're building native code generation for AI developers. We generate high-performance C++/Rust to power open-source and on-device AI for our customers. We have customers ranging from early stage startups to the Fortune 1000.

You'll be:

1. Writing open-source Python functions that run popular vision models and LLMs (a rough sketch follows this list); or

2. Writing high-performance C++ and Rust code that targets different accelerators (CUDA, Metal, etc); or

3. Writing parts of our Python-to-C++ compiler in support of (1) and (2); or

4. Some combination thereof.
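
To give a feel for (1), here's a rough, illustrative sketch (the model choice is an example, not a requirement) of the kind of function we mean:

    from PIL import Image
    from transformers import pipeline

    # Illustrative only: expose a popular open-source vision model as a plain Python function.
    classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

    def classify_image(path: str) -> str:
        """Return the top predicted label for the image at `path`."""
        return classifier(Image.open(path))[0]["label"]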

Join the party: Email us at stdin@fxn.ai or apply at https://app.dover.com/jobs/fxn.

No recruiters; no visa sponsorship (yet). We prize demonstrated curiosity and impact over everything else.
