Readit News
cheesyFish commented on Workflows 1.0: A Lightweight Framework for Agentic systems   llamaindex.ai/blog/announ... · Posted by u/cheesyFishes
lyjackal · 2 months ago
I notice that the Python and TypeScript versions are pretty different. Python is sort of class-based, with Python magic decorators:

    class MyWorkflow(Workflow):
        @step
        async def start(self, ctx: Context, ev: StartEvent) -> MyEvent:
            num_runs = await ctx.get("num_runs", default=0)
whereas TS is sort of builder/function-based:

    import { createWorkflow, workflowEvent } from "@llamaindex/workflow-core";
    
    const startEvent = workflowEvent<string>();
    const convertEvent = workflowEvent<number>();
    
    const workflow = createWorkflow();
    
    workflow.handle([startEvent], (start) => {
      return convertEvent.with(Number.parseInt(start.data, 10));
    });

Is there a reason for this?

cheesyFish · 2 months ago
Yea, good callout -- Python workflows came first, and while we could have directly translated them, the ergonomics of classes in Python are not exactly what JS/TS devs expect.

So instead, the goal was to capture the spirit of event-driven workflows and implement them in a more TS-native way, improving the dev UX for those developers. This means it might be harder to jump between the two, but I'd argue most people are not doing that anyway.
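The event-driven model both APIs share can be sketched in plain Python. This is a hypothetical minimal runner for illustration only (`MiniWorkflow`, the event classes, and the handlers are all made-up names, not the actual llamaindex implementation): steps are handlers keyed by the event type they accept, and the runner dispatches events until a stop event appears.

```python
import asyncio
from dataclasses import dataclass

# Hypothetical event types, mirroring the examples above.
@dataclass
class StartEvent:
    data: str

@dataclass
class ConvertEvent:
    value: int

@dataclass
class StopEvent:
    result: int

class MiniWorkflow:
    """Toy event-driven runner: dispatch each event to a handler by type."""

    def __init__(self):
        self._handlers = {}  # event type -> async handler

    def handle(self, event_type, fn):
        self._handlers[event_type] = fn

    async def run(self, event):
        # Keep dispatching until a handler returns a StopEvent.
        while not isinstance(event, StopEvent):
            event = await self._handlers[type(event)](event)
        return event.result

async def convert(ev: StartEvent) -> ConvertEvent:
    return ConvertEvent(value=int(ev.data))

async def finish(ev: ConvertEvent) -> StopEvent:
    return StopEvent(result=ev.value * 2)

wf = MiniWorkflow()
wf.handle(StartEvent, convert)
wf.handle(ConvertEvent, finish)
print(asyncio.run(wf.run(StartEvent(data="21"))))  # prints 42
```

Whether the handlers are registered via a class decorator (Python) or a builder call (TS) is surface syntax; the dispatch loop underneath is the same.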

cheesyFish commented on Python Tooling at Scale: LlamaIndex’s Monorepo Overhaul   llamaindex.ai/blog/python... · Posted by u/cheesyFish
esafak · 3 months ago
Was bazel an option?
cheesyFish · 3 months ago
We used Pants initially (which I believe is similar to Bazel), and the dependency graphing it does was indeed very helpful. But other parts of the tool motivated us to create something more bespoke and debuggable (we were only using 20% or less of the features Pants offers).
cheesyFish commented on Python Tooling at Scale: LlamaIndex’s Monorepo Overhaul   llamaindex.ai/blog/python... · Posted by u/cheesyFish
esafak · 3 months ago
I use GitHub Actions triggers to pass flags to a monorepo Dagger script that builds and tests the affected components. For example, if a commit touches both the front and back ends, rebuild both; if it only touches the front end, run integration tests against the latest backend without rebuilding it.

edit: spell out GHA

cheesyFish · 3 months ago
Yea, this definitely makes sense for smaller monorepos. For us, we ended up writing our own dependency graph parser to figure out which tests to run (which is easy enough with a single language like Python, honestly).
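The core of such a "which tests to run" computation can be sketched as a reverse-dependency walk (the function and the toy package names here are hypothetical; the real parser is more involved): given the set of changed packages, test every package that depends on them, transitively.

```python
from collections import defaultdict, deque

def affected_packages(deps: dict[str, set[str]], changed: set[str]) -> set[str]:
    """Return the changed packages plus everything that transitively
    depends on them, i.e. the packages whose tests need to run."""
    # Invert the graph: package -> packages that depend on it.
    rdeps = defaultdict(set)
    for pkg, requires in deps.items():
        for dep in requires:
            rdeps[dep].add(pkg)

    affected = set(changed)
    queue = deque(changed)
    while queue:
        pkg = queue.popleft()
        for dependent in rdeps[pkg]:
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# Toy graph: core <- llms-openai <- agent; readers is independent.
deps = {
    "core": set(),
    "llms-openai": {"core"},
    "agent": {"llms-openai"},
    "readers": set(),
}
print(sorted(affected_packages(deps, {"core"})))
# prints ['agent', 'core', 'llms-openai']
```

A change to `core` fans out to everything downstream, while a change to `readers` triggers only its own tests.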
cheesyFish commented on Python Tooling at Scale: LlamaIndex’s Monorepo Overhaul   llamaindex.ai/blog/python... · Posted by u/cheesyFish
tuanacelik · 3 months ago
So just to get this straight: does this new setup aim to make it easier to contribute to llamaindex submodules specifically?
cheesyFish · 3 months ago
Yes! For example, previously with Pants, users would hit a lot of weird errors, since how tests run under Pants is different from running them locally with pytest.

We did not expect users to learn Pants, but this often meant a lot of back and forth with maintainers to get PR tests passing.

Should be much easier now!

cheesyFish commented on Python Tooling at Scale: LlamaIndex’s Monorepo Overhaul   llamaindex.ai/blog/python... · Posted by u/cheesyFish
lyjackal · 3 months ago
I recently did something similar: using uv workspaces, I used the uv CLI's dependency graph to analyze the dependency tree and then conditionally trigger CI workflows for affected projects. I wish there were a better way to access the uv dependency tree other than parsing the `tree`-like output.
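For what it's worth, indented tree output can be turned into edges with a small parser. This is a sketch against a simplified `tree`-style format (the real `uv tree` output includes branch characters, version pins, and dedup markers that would need extra handling):

```python
def parse_tree(text: str, indent: int = 4) -> list[tuple[str, str]]:
    """Parse indented tree-style output into (parent, child) edges."""
    edges = []
    stack = []  # stack[d] holds the most recent node seen at depth d
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip())) // indent
        name = line.strip()
        stack[depth:] = [name]  # drop deeper levels, record current node
        if depth > 0:
            edges.append((stack[depth - 1], name))
    return edges

sample = """\
my-app
    requests
        urllib3
        certifi
    click
"""
print(parse_tree(sample))
# prints [('my-app', 'requests'), ('requests', 'urllib3'),
#         ('requests', 'certifi'), ('my-app', 'click')]
```

The depth stack means siblings and backtracking fall out naturally: when `click` appears at depth 1, everything deeper is discarded and its parent is whatever currently sits at depth 0.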
cheesyFish · 3 months ago
I agree! I hope uv introduces more tools for monorepos or refines the workspaces concept.

I saw that workspaces require all dependencies to agree with each other, which isn't quite possible in our repo.

cheesyFish commented on Llama-agents: an async-first framework for building production ready agents   github.com/run-llama/llam... · Posted by u/pierre
williamdclt · a year ago
I must be missing something: isn’t this just describing a queue? The fact that the workload is a LLM seems irrelevant, it’s just async processing of jobs?
cheesyFish · a year ago
It being a queue is one part of it, yes. But the key is trying to provide tight integrations and take advantage of agentic features -- things like the orchestrator, having an external service to execute tools, etc.
cheesyFish commented on Llama-agents: an async-first framework for building production ready agents   github.com/run-llama/llam... · Posted by u/pierre
ldjkfkdsjnv · a year ago
These types of frameworks will become abundant. I personally feel that integrating the user into the flow will be so critical that a purely decoupled backend will struggle to encompass the full problem. I view the future of LLM application development as being closer to:

https://sdk.vercel.ai/

Which is essentially a Next.js app where SSR is used to communicate with the LLMs/agents. Personally, I used to hate Next.js, but its application architecture is uniquely suited to UX with LLMs.

Clearly the asynchronous tasks taken by agents shouldn't run on the Next.js server side, but the integration between the user and agent will need to be so tight that it's hard to imagine the value in some purely asynchronous system. A huge portion of the system/state will need to be synchronously available to the user.

LLMs are not good enough to run purely on their own, and probably won't be for at least another year.

If I were to guess, agent systems like this will run on serverless AWS/cloud architectures.

cheesyFish · a year ago
I agree on the importance of giving the user access to state! Right now there is actually an option for human-in-the-loop. Additionally, I'd love to expand the monitor app a bit more to allow pausing, stepping, rewinding, etc.
cheesyFish commented on Llama-agents: an async-first framework for building production ready agents   github.com/run-llama/llam... · Posted by u/pierre
k__ · a year ago
I have yet to see a production ready agent.
cheesyFish · a year ago
It's definitely tough today, but it's just a matter of a) using a smart LLM and b) scoping down individual agents to a manageable set of actions.

As more LLMs come from companies and open source, their reasoning abilities are only going to improve, imo.

cheesyFish commented on Llama-agents: an async-first framework for building production ready agents   github.com/run-llama/llam... · Posted by u/pierre
ramon156 · a year ago
Ah yes, AAAS
cheesyFish · a year ago
Maybe agent micro-services is a better way to frame it, ha
