Tool Integration: Web search with Tavily, neural search with Jina, and Python REPL for coding on the fly.
Ease of Use: Docker-ready and a web UI for quick setup and control.
If you’re into AI automation, multi-agent systems, or just love contributing to open-source projects, swing by the GitHub repo: https://github.com/langmanus/langmanus. Dive into the code, play with the demo, or drop some feedback—we’d love to hear from you. Join the community and let’s build something awesome together!
Do you ever think about, or worry about, not being able to test these things? (Or is that just me :))
Details: I acknowledge/understand this comes from a dependency (ReAct agents), not directly from LangManus.
But I'm still curious what the community/HN thinks about testability, veracity, and potentially conflicting or overlapping instructions across agents, etc., with respect to "prompts" as sources of logic. I acknowledge it's a general practice with LLMs.
Ever since the mobile and cloud eras peaked around 2012–2014, we've had Crypto, AR, VR, and now AI.
I have some pocket-change Bitcoin and Ethereum, and I played around for two minutes on my dust-gathering Oculus and Vision Pro; but man, oh man, am I hooked on ChatGPT!
It’s truly remarkably useful!
You just couldn't get this kind of thing in one click before.
For example, here's my latest engineering-productivity-boosting query: "when using a cfg file on the cmd line, what does '@' as a prefix do?"
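(For anyone curious about the answer: many CLIs follow the "response file" convention, where `@file` means "read additional arguments from that file." As one concrete, hedged illustration — assuming a Python context, not whatever tool the query was actually about — Python's `argparse` supports this via its `fromfile_prefix_chars` parameter:)

```python
import argparse

# The "@" prefix tells the parser to expand "@file" into the
# arguments listed inside that file (one argument per line by default).
parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
parser.add_argument("--verbose", action="store_true")
parser.add_argument("--name")

# Simulate a cfg file containing one argument per line.
# "demo.cfg" is a hypothetical filename for this sketch.
with open("demo.cfg", "w") as f:
    f.write("--verbose\n--name\nexample\n")

# "@demo.cfg" expands to the arguments listed in the file.
args = parser.parse_args(["@demo.cfg"])
print(args.verbose, args.name)
```

Tools like gcc and MSVC use the same `@file` convention for passing long argument lists.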