_QrE · 6 months ago
I'm not sure how valid most of these points are. A lot of the latency in an agentic system is going to be the calls to the LLM(s).

From the article: """ Agents typically have a number of shared characteristics when they start to scale (read: have actual users):

    They are long-running — anywhere from seconds to minutes to hours.
    Each execution is expensive — not just the LLM calls, but the nature of the agent is to replace something that would typically require a human operator. Development environments, browser infrastructure, large document processing — these all cost $$$.
    They often involve input from a user (or another agent!) at some point in their execution cycle.
    They spend a lot of time awaiting i/o or a human.
"""

No. 1 doesn't really point to one language over another, and all the rest show that execution speed and server-side efficiency are not very relevant. People ask agents a question and do something else while the agent works. If the agent takes a couple of seconds longer because you've written it in Python, I doubt anyone would care (in the majority of cases, at least).

I'd argue Python is a better fit for agents, mostly because of the mountain of AI-related libraries and support that it has.

> Contrast this with Python: library developers need to think about asyncio, multithreading, multiprocessing, eventlet, gevent, and some other patterns...

Agents aren't that hard to make work, and you can get most of the use (and paying users) without optimizing every last thing. And besides, the mountain of support you have for whatever workflow you're building means that someone has probably already tried building at least part of what you're working on, so you don't have to go in blind.

tptacek · 6 months ago
That's true from a performance perspective but, in building an agent in Go, I was thankful that I had extremely well-worn patterns to manage concurrency, backlogs, and backpressure given that most interactions will involve one or more transactions with a remote service that takes several seconds to respond.
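
The well-worn version, for anyone curious; a minimal, self-contained sketch where callLLM is a made-up stand-in for whatever client you use:

    package main

    import (
        "context"
        "fmt"
        "sync"
        "time"
    )

    // callLLM stands in for a real client call; here it just sleeps.
    func callLLM(ctx context.Context, prompt string) (string, error) {
        select {
        case <-time.After(2 * time.Second):
            return "response to " + prompt, nil
        case <-ctx.Done():
            return "", ctx.Err()
        }
    }

    func main() {
        prompts := []string{"a", "b", "c", "d"}
        sem := make(chan struct{}, 2) // at most 2 calls in flight
        var wg sync.WaitGroup
        for _, p := range prompts {
            sem <- struct{}{} // blocks when full: backpressure for free
            wg.Add(1)
            go func(p string) {
                defer func() { <-sem; wg.Done() }()
                // each call gets its own deadline
                ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
                defer cancel()
                out, err := callLLM(ctx, p)
                fmt.Println(out, err)
            }(p)
        }
        wg.Wait()
    }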

(I think you can effectively write an agent in any language and I think JavaScript is probably the most popular choice. Now, generating code, regardless of whether it's an agent or a CLI tool or a server --- there, I think Go and LLMs have particularly nice chemistry.)

ramesh31 · 6 months ago
Agents are the orchestration layer, i.e. a perfect fit for Go (or Erlang, or Node). You don't need a "mountain of AI-related libraries" for them, particularly given the fact that what we call an agent now has only existed for less than 2 years. Anything doing serious IO should be abstracted behind a tool interface that can (and should) be implemented in whatever domain specific tooling is required.
serjester · 6 months ago
I wouldn’t underestimate the impact of having massive communities around a language. Basically any problem you have has likely already been solved by 10 other people. With AI being as frothy as it is, that’s incredibly valuable.

Take for example something like being able to easily swap models: in Python it's trivial with litellm. In niche languages you're lucky to even have an official, well maintained SDK.

philwelch · 6 months ago
Go still has a much better concurrency story. It's also much less of a headache to deploy, since all you need is a static binary, not a whole bespoke Python runtime with every pip dependency.
TypingOutBugs · 6 months ago
Go is definitely better, but with uv you can install all dependencies, including Python itself, with only curl
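
For anyone who hasn't tried it, the whole bootstrap is roughly this (assuming a project with a pyproject.toml; agent.py is just a placeholder):

    # install uv itself
    curl -LsSf https://astral.sh/uv/install.sh | sh
    # fetch the pinned Python version plus all dependencies, then run
    uv sync
    uv run agent.py
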
pantsforbirds · 6 months ago
I've been messing around with an Elixir + BEAM based agent framework. I think a mixture of BEAM + SQLite is about as good as you can get for agents right now.

You can safely swap out agents without redeploying the application, the concurrency is way below the scale BEAM was built for, and creating stateful or ephemeral agents is incredibly easy.

My plan is to set up a base agent in Python, TypeScript, and Rust using MCP servers, to allow users to write more complex agents in their preferred programming language too.

nilslice · 6 months ago
you should check out the Extism[0] project and the Elixir SDK[1]. This would allow you to write the core services, routing, message passing, etc. in Elixir, and leverage all that BEAM/OTP have to offer, and then embed "agents" written in other languages as small Wasm modules that act like in-process plugins.

[0]: https://github.com/extism/extism

[1]: https://github.com/extism/elixir-sdk

pantsforbirds · 6 months ago
That's a really interesting idea. My original thought was to use MCP as the way to define other agents, but I'll have to do some more research into Extism!
alberth · 6 months ago
Any reason for SQLite use, instead of BEAM's built-in Mnesia data store?

https://www.erlang.org/doc/apps/mnesia/mnesia.html

pantsforbirds · 6 months ago
I'm still in the exploration/experimentation stage of the project, but I'm currently using a mixture of SQLite, PostgreSQL, S3, and DuckDB.

My original thought was to spin up SQLite databases as needed because they are super lightweight, well-tested, and supported by almost every programming language. If you want to set up an agent in another programming language via MCP, but you still want to be able to access the agent memory directly, you can use the same schema in a SQLite database.

I may end up using Mnesia for more metadata or system-oriented data storage though. It's very well designed imo.

But one of the biggest reasons has just been the really nice integration with DuckDB. I can query all of the SQLite databases persisted in a directory and aggregate some metadata really easily.

lunarcave · 6 months ago
Agents easily spend >90% of their time waiting for LLMs to reply and optionally executing API calls in other services (HTTP APIs and DBs).

In my experience the performance of the language runtime rarely matters.

If there ever was a language feature that matters for agent performance and scale, it's actually the performance of JSON serialization and deserialization.

fixprix · 6 months ago
Yep exactly, might as well use a language that works with JSON natively, like TypeScript, which arguably has a far more powerful type system than Go.
zveyaeyv3sfye · 6 months ago
> like TypeScript, which arguably has a far more powerful type system than Go.

"arguably".

TypeScript is just a thin wrapper over JavaScript, which doesn't have these types at all.

fritzo · 6 months ago
In my experience, the 2nd most costly function in agents (after LLM calls) is diffing/patching/merging asynchronous edits to resolve conflicts. Those conflict resolution operations can call out to low-level libraries, but they are still quite expensive optimization problems, compared to serialization etc.
energy123 · 6 months ago
What diffing/patching/merging library are you working with? Or are you building your own?
autogn0me · 6 months ago
can you be more specific about this?
jeswin · 6 months ago
Go has few advantages for this kind of workload - most of the time it'll just be waiting on I/O. And you suffer from the language itself; many type system features that you get for free in modern languages require workarounds in Go.

I've found that TypeScript is an excellent glue language for all kinds of AI. Python, followed by TS, enjoys broad library support from vendors. I personally prefer it over Python because the type system is much more expressive and mature. Python is rapidly improving, though.

> It turns out, cancelling long-running work in Node.js and Python is incredibly difficult for multiple reasons:

Evidence is lacking for this claim. Almost all tools out there support cancellation, and they're mostly written in either Python or JS.

pjmlp · 6 months ago
Plus, if one really needs more performance than V8 can deliver, I'd rather write a native module in C++/Rust than reach for Go.
huqedato · 6 months ago
Following the article's logic, Elixir is a better fit for agents. Ideal, I would say.
skybrian · 6 months ago
For long-running, expensive processes that do a lot of waiting, a downside is that if you kill the process running the goroutine, you lose all your work. It might be better to serialize state to a database while waiting? But this adds a lot of complexity and I don’t know any languages that make it easy to write this sort of checkpoint-based state machine.
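
Hand-rolled, the checkpointing itself isn't much code; a sketch (step names invented, DB write stubbed out) that advances a step counter and persists state after each completed step, so a restart resumes at the first incomplete one. The hard part is everything around it:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // AgentState is everything needed to resume a run after a crash.
    type AgentState struct {
        RunID   string   `json:"run_id"`
        Step    int      `json:"step"`
        Results []string `json:"results"`
    }

    // checkpoint would upsert this row into a database keyed by RunID;
    // it's stubbed here to keep the sketch self-contained.
    func checkpoint(s AgentState) error {
        b, err := json.Marshal(s)
        fmt.Println("checkpoint:", string(b))
        return err
    }

    func run(s AgentState) error {
        steps := []func(*AgentState) error{plan, search, draft}
        for s.Step < len(steps) {
            if err := steps[s.Step](&s); err != nil {
                return err
            }
            s.Step++ // advance before persisting, so a resume skips done steps
            if err := checkpoint(s); err != nil {
                return err
            }
        }
        return nil
    }

    func plan(s *AgentState) error   { s.Results = append(s.Results, "plan"); return nil }
    func search(s *AgentState) error { s.Results = append(s.Results, "search"); return nil }
    func draft(s *AgentState) error  { s.Results = append(s.Results, "draft"); return nil }

    func main() {
        // on restart, load the saved AgentState instead of starting fresh
        _ = run(AgentState{RunID: "r1"})
    }
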
abelanger · 6 months ago
OP here - this type of "checkpoint-based state machine" is exactly what platforms which offer durable execution primitives like Hatchet (https://hatchet.run/) and Temporal (https://temporal.io/) are offering. Disclaimer: am a founder of Hatchet.

These platforms store an event history of the functions which have run as part of the same workflow, and automatically replay those when your function gets interrupted.

I imagine synchronizing memory contents at the language level would be much more overhead than synchronizing at the output level.
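
The core trick, stripped down to a toy: memoize each step's output in that event history, so a replay after an interruption skips completed side effects. A sketch, with an in-memory map standing in for durable storage:

    package main

    import "fmt"

    // step runs fn only when history holds no result for name; on replay,
    // completed steps return their recorded output without re-executing.
    func step(history map[string]string, name string, fn func() (string, error)) (string, error) {
        if out, ok := history[name]; ok {
            return out, nil // replayed from the event history
        }
        out, err := fn()
        if err != nil {
            return "", err
        }
        history[name] = out // a real platform persists this durably
        return out, nil
    }

    func main() {
        history := map[string]string{} // loaded from storage in practice
        plan, _ := step(history, "plan", func() (string, error) {
            return "the plan", nil // imagine an expensive LLM call here
        })
        fmt.Println(plan)
    }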

tptacek · 6 months ago
This is also how our orchestrator (written in Go) is structured. JP describes it pretty well here (it's a durable log implemented with BoltDB).

https://fly.io/blog/the-exit-interview-jp/

skybrian · 6 months ago
Yep, though I haven’t used them, I’m vaguely aware that such things exist. I think they have a long way to go to become mainstream, though? Typical Go code isn’t written to be replayable like that.
lifty · 6 months ago
What are the main differences between temporal and hatchet?
sorentwo · 6 months ago
That's the issue with goroutines, threads, or any long running chain of processes. The tasks must be broken up into atomic chunks, and the state has to be serialized in some way. That allows failures to be retried, errors to be examined, results to be referenced later, and the whole thing to be distributed between multiple nodes.

It must, in my view at least, as that's how Oban (https://github.com/oban-bg/oban) in Elixir models this kind of problem. Full disclosure, I'm an author and maintainer of the project.

It's Elixir specific, but this article emphasizes the importance of async task persistence: https://oban.pro/articles/oban-starts-where-tasks-end

carsoon · 6 months ago
I'm actually working on an agent library in Go, and this is exactly the thought process I've come up with. If we have comprehensive logging, we can actually reconstruct the agent's state at any position, allowing for replays etc. You just need the timestamp (endpoint) and the parent run, and you can build children/branched runs after that.

Through the use of both a map that holds a context tree and a database, we can purge old sessions and then reconstruct them from the database when needed (for instance, an async agent session with user input required).

We also don't have to hold individual objects for the agents/workflows/tools; we just make them stateless in a map and can reference the pointers through an ID as needed. Then we have a stateful object that holds the previous actions/steps/"context".

To make sure the agents/workflows are consistent, we can hash the output agent/workflow (as these are serializable in my system).

I have only implemented basic agents/tools though, and the logging/reconstruction/cancellation logic hasn't actually been done yet.
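
To picture it, a stripped-down sketch (types invented for illustration): definitions live in a map keyed by ID, per-run state references them by ID, and a hash of the serialized definition catches drift between recording and replay:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "encoding/json"
        "fmt"
    )

    // Agent is a stateless definition; all runs share one instance.
    type Agent struct {
        ID     string `json:"id"`
        Prompt string `json:"prompt"`
    }

    var registry = map[string]*Agent{} // id -> definition

    // Run holds the stateful side: the accumulated actions/steps/context.
    type Run struct {
        AgentID string   // reference by id, not by pointer
        DefHash string   // fingerprint of the definition at record time
        Steps   []string
    }

    // defHash fingerprints the serialized definition so a replay can
    // verify the agent hasn't changed since the run was recorded.
    func defHash(a *Agent) string {
        b, _ := json.Marshal(a)
        sum := sha256.Sum256(b)
        return hex.EncodeToString(sum[:])
    }

    func main() {
        a := &Agent{ID: "summarizer", Prompt: "Summarize: %s"}
        registry[a.ID] = a
        r := Run{AgentID: a.ID, DefHash: defHash(a)}
        fmt.Println(r.DefHash == defHash(registry[r.AgentID])) // true: safe to replay
    }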

jpk · 6 months ago
Just a drive-by thought, but: What you're describing sounds a lot like Temporal.io. I guess the difference is the "workflow" of an agent might take different paths depending on what it was asked to accomplish and the approach it ends up taking to get there, and that's what you're interested in persisting, replaying, etc. Whereas a Temporal workflow is typically a more rigid thing, akin to writing a state machine that models a business process -- but all the challenges around persistence, replay, etc, sound similar.

Edit: Heh, I noticed after writing this that some sibling comments also mention Temporal.

Karrot_Kream · 6 months ago
Temporal is pretty decent at checkpointing long-running processes and is language agnostic.
trevinhofmann · 6 months ago
I've been considering good ways to use a task queue for this, and might just settle for a rudimentary one in a Postgres table.

The upside is that agent subtasks can be load balanced among servers, tasks won't be dropped if the process is killed, and better observability comes along with it.

The downside is definitely complexity. I'm having a hard time planning out an architecture that doesn't significantly increase the complexity of my agent code.
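
FWIW the rudimentary Postgres version can stay pretty small. The usual trick is FOR UPDATE SKIP LOCKED, which lets multiple workers claim different rows without blocking each other. A sketch, with an invented agent_tasks table:

    package queue

    import (
        "context"
        "database/sql"
    )

    // claimTask atomically marks one pending task as running and returns
    // it; SKIP LOCKED means concurrent workers never fight over a row.
    func claimTask(ctx context.Context, db *sql.DB) (id int64, payload string, err error) {
        err = db.QueryRowContext(ctx, `
            UPDATE agent_tasks
            SET status = 'running', started_at = now()
            WHERE id = (
                SELECT id FROM agent_tasks
                WHERE status = 'pending'
                ORDER BY created_at
                FOR UPDATE SKIP LOCKED
                LIMIT 1
            )
            RETURNING id, payload`).Scan(&id, &payload)
        return id, payload, err
    }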

ashishb · 6 months ago
> For long-running, expensive processes that do a lot of waiting, a downside is that if you kill the process running the goroutine, you lose all your work.

This is true regardless of the language. I always do only a reasonable amount of work (milliseconds up to a few seconds) in a goroutine. Anything more and your web service is not as stateless as it should be.

odyssey7 · 6 months ago
AI engineers will literally invent a new universe before they touch JavaScript.

The death knell for variety in AI languages was when Google rug-pulled Swift for TensorFlow.

dpe82 · 6 months ago
Avoiding JavaScript like the plague that it is, is not unique to AI engineers.

-Someone who has written a ton of JS over the past... almost 30 years now.

dpkirchner · 6 months ago
Choosing Python over JavaScript is one of the more perplexing decisions I've seen.
rednafi · 6 months ago
This is the way.

JS is a terrible language to begin with, and bringing it to the backend was a mistake. TS doesn’t change the fact that the underlying language is still a pile of crap.

So, like many, I’ll write anything—Go, Rust, Python, Ruby, Elixir, F#—before touching JS or TS with a ten-foot pole.

trevinhofmann · 6 months ago
This doesn't contribute much to the discussion.

Use whatever language works well for you and the task at hand, but many enjoy fullstack JS/TS.

mkfs · 6 months ago
> Python, Ruby

It's 2025, Node.js has been around since 2009, yet these languages still use C-based interpreters by default, and their non-standard JIT alternatives are still much worse than V8.

koakuma-chan · 6 months ago
You think Python is a better language than TS?
kweingar · 6 months ago
Why is JS particularly good for agents?
tinrab · 6 months ago
I'd say TypeScript is currently the best choice for agents. For one, MCP tooling is really solid, the language itself is easy, fast to develop in, and not esoteric.
odyssey7 · 6 months ago
The same reason it's good for web servers. It excels at event-driven applications.
EGreg · 6 months ago
Because it integrates well with browsers, people already know the language from Node.js, and the packages on npm can work for both?
danenania · 6 months ago
I built Plandex[1] (open source CLI coding agent focused on large projects and tasks) in Go and I’ve been very happy with that decision.

Beneath all the jargon, it's good to remember that an "agent" is ultimately just a bunch of HTTP requests and streams that need to be coordinated—some serially and some concurrently. And while that sounds pretty simple at a high level, there are many subtle details to pay attention to if you want to make this kind of system robust and scalable: timeouts, retries, cancellation, error handling, thread pools, thread safety, and so on.

This stuff is Go’s bread and butter. It’s exactly what it was designed for. It’s not going to get you an MVP quite as fast as node or python, but as the codebase grows and edge cases accumulate, the advantages of Go become more and more noticeable.
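
To make that concrete, here's a rough sketch (not Plandex's actual code) of the shape this takes in Go: per-attempt timeouts, simple exponential backoff, and cancellation that propagates from a parent context:

    package main

    import (
        "context"
        "io"
        "net/http"
        "time"
    )

    // fetchWithRetry gives each attempt its own deadline while the parent
    // ctx lets a user cancel the whole task at any point.
    func fetchWithRetry(ctx context.Context, client *http.Client, url string) ([]byte, error) {
        var lastErr error
        for attempt := 0; attempt < 3; attempt++ {
            body, err := func() ([]byte, error) {
                reqCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
                defer cancel()
                req, err := http.NewRequestWithContext(reqCtx, http.MethodGet, url, nil)
                if err != nil {
                    return nil, err
                }
                resp, err := client.Do(req)
                if err != nil {
                    return nil, err
                }
                defer resp.Body.Close()
                return io.ReadAll(resp.Body)
            }()
            if err == nil {
                return body, nil
            }
            lastErr = err
            select {
            case <-time.After(time.Second << attempt): // 1s, 2s, 4s backoff
            case <-ctx.Done(): // user cancelled: stop retrying immediately
                return nil, ctx.Err()
            }
        }
        return nil, lastErr
    }

    func main() {
        _, _ = fetchWithRetry(context.Background(), http.DefaultClient, "https://example.com")
    }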

1 - https://github.com/plandex-ai/plandex