These types of frameworks will become abundant. I personally feel that the integration of the user into the flow will be so critical that a purely decoupled backend will struggle to encompass the full problem. I view the future of LLM application development to be more similar to: https://sdk.vercel.ai/
Which is essentially a Next.js app where SSR is used to communicate with the LLMs/agents. Personally, I used to hate Next.js, but its application architecture is uniquely suited to UX with LLMs.
Clearly the asynchronous tasks taken by agents shouldn't run on the Next.js server side, but the integration between the user and the agent will need to be so tight that it's hard to imagine the value in a purely asynchronous system. A huge portion of the system/state will need to be synchronously available to the user.
LLMs are not good enough to run purely on their own, and probably won't be for at least another year.
If I were to guess, agent systems like this will run on serverless AWS/cloud architectures.
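To make the SSR point concrete, here is roughly the shape I have in mind: a minimal sketch of a Next.js route handler that calls an LLM server-side via the Vercel AI SDK and streams the result back to the user. The package names, model id, and exact method names are assumptions and vary across SDK versions, so treat it as illustrative only.

```ts
// app/api/chat/route.ts -- illustrative sketch only; imports and method names
// depend on the AI SDK version you are using.
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // The LLM call runs on the server side, keeping keys and agent state there,
  // while tokens stream straight back to the UI the user is looking at.
  const result = await streamText({
    model: openai('gpt-4o-mini'), // model id is just an example
    messages,
  });

  // Recent SDK versions expose toDataStreamResponse(); older ones name it differently.
  return result.toDataStreamResponse();
}
```

The point is that the user-facing request/response path and the LLM call live in the same app, which is what makes the tight user/agent integration feel natural.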
Hard agree. The user being part of the flow is still very much needed. I have also had a great experience using Vercel's AI SDK on Next.js to build an LLM-based application.
I agree on the importance of letting the user have access to state! Right now there is actually the option for human-in-the-loop. Additionally, I'd love to expand the monitor app a bit more to allow pausing, stepwise execution, rewinding, etc.
Hey guys, Logan here! I've been busy building this for the past three weeks with the llama-index team. While it's still early days, I really think the agents-as-a-service vision is something worth building for.
We have a solid set of things to improve, and now is the best time to contribute and shape the project.
I must be missing something: isn’t this just describing a queue? The fact that the workload is an LLM seems irrelevant; it’s just async processing of jobs?
It being a queue is one part of it, yes. But the key is trying to provide tight integrations and take advantage of agentic features: things like the orchestrator, having an external service to execute tools, etc.
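To make the distinction concrete, here is a purely illustrative sketch (hypothetical types, not our actual API): a bare queue just pops jobs and runs them, while the orchestration layer also decides which agent service handles a task and hands any tool calls to a separate tool-execution service.

```ts
// Purely illustrative -- hypothetical types, not the real llama-agents API.
type Task = { id: string; input: string };

// "Just a queue": pop a job, run the LLM on it, done.
async function plainQueueWorker(
  queue: Task[],
  runLLM: (prompt: string) => Promise<string>,
) {
  const task = queue.shift();
  if (task) await runLLM(task.input);
}

// Agentic layer on top of the queue: route each task to an agent service,
// and execute any tool calls in an external tool service rather than inline.
interface AgentService {
  name: string;
  handle(task: Task): Promise<{ output: string; toolCall?: { tool: string; args: unknown } }>;
}
interface ToolService {
  execute(tool: string, args: unknown): Promise<unknown>;
}

async function orchestrate(task: Task, agents: AgentService[], tools: ToolService) {
  // Naive routing for illustration; a real orchestrator would make a smarter decision.
  const agent = agents.find((a) => task.input.includes(a.name)) ?? agents[0];
  const result = await agent.handle(task);
  if (result.toolCall) {
    // Tool execution is delegated to a separate service, so it can be scaled
    // and monitored independently of the agent itself.
    await tools.execute(result.toolCall.tool, result.toolCall.args);
  }
}
```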
Feel free to ask me anything!
https://engageusers.ai/ecosystem.pdf
We’re building this — do you think it’s worthwhile, and what advice would you give?
RIP in peace, VC money
https://chatgpt.com/share/f287f9aa-d5c8-4866-a5f0-65499079d5...
As more LLMs come out of companies and the open-source community, their reasoning abilities are only going to improve, imo.
Right now the products I see are just junior-level software with an LLM behind them.
That sounds like a verb, "to production". That must mean... something, right?