This is an admirably ambitious project. I'm not sure why it needs to be an OS, when the three components don't seem to have much to do with each other. You might get more traction pitching this as 3 separate projects.
That's fair feedback. They actually did start as separate experiments. The reason I bundled them into one OS is that the real value comes from the integration.
But you raise a good point; I should probably document the integration story better. Thanks.
Not LaTeX. Flux has its own grammar. It tokenizes Unicode math symbols like ² directly into AST nodes.
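For readers curious what that means in practice, here's a rough sketch of tokenizing a superscript character straight into a power-expression node. The node names (`Var`, `Pow`) and the superscript map are purely illustrative, not Flux's actual grammar:

```python
from dataclasses import dataclass

# Illustrative only: map Unicode superscript digits to integer exponents.
SUPERSCRIPTS = {"²": 2, "³": 3}  # U+00B2, U+00B3

@dataclass
class Var:
    name: str

@dataclass
class Pow:
    base: object
    exp: int

def parse(expr: str):
    """Toy parser: a superscript char wraps the preceding node in Pow."""
    node = None
    for ch in expr:
        if ch in SUPERSCRIPTS:
            node = Pow(node, SUPERSCRIPTS[ch])
        elif ch.isalpha():
            node = Var(ch)
    return node

print(parse("x²"))  # Pow(base=Var(name='x'), exp=2)
```

The point is just that `²` becomes an AST node in one step, with no `^` or `**` surface syntax involved.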
The shell doesn't talk to the LLM directly. They're separate processes. Alexitha monitors system state via cgroup events and adjusts scheduler weights. Flux is just the user-facing shell. They're connected through Tenet (the scheduler), not through a direct pipe.
Yes, the LLM is swappable. Alexitha is currently a fine-tuned 7B model, but the interface is not model-specific. Any model that can read a cgroup event stream and output a scheduling decision can be slotted in. I'm planning to test with smaller models (1-3B) to reduce boot overhead.
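To make the "swappable" claim concrete, here's a minimal sketch of what such a model-agnostic contract could look like. All names (`CgroupEvent`, `Decision`, `SchedulerModel`, `ThresholdModel`) and fields are my assumptions for illustration, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class CgroupEvent:
    cgroup: str
    cpu_pressure: float  # hypothetical input, e.g. a PSI-style pressure value

@dataclass
class Decision:
    cgroup: str
    weight: int  # hypothetical output: a new scheduler weight

class SchedulerModel(Protocol):
    """Anything that maps cgroup events to scheduling decisions qualifies."""
    def decide(self, events: Iterable[CgroupEvent]) -> list[Decision]: ...

class ThresholdModel:
    """Trivial stand-in; a 7B or 1-3B LLM wrapper could implement the same Protocol."""
    def decide(self, events):
        return [Decision(e.cgroup, 50 if e.cpu_pressure > 0.5 else 100)
                for e in events]

model: SchedulerModel = ThresholdModel()
print(model.decide([CgroupEvent("user.slice", 0.7)]))
```

Under a contract like this, swapping models is just swapping the class behind the interface; the scheduler never needs to know what produced the decision.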
How easy do you find Unicode input? Isn't "x^2" or "x**2" (Python) much easier to type than "x²"? In the latter case, I have to look up the character code for ², which happens to be U+00B2 ("Superscript Two").