Readit News
zbowling commented on Package managers need to cool down   nesbitt.io/2026/03/04/pac... · Posted by u/zdw
zbowling · 6 days ago
This is silly. Critical security and bug fixes come out and you are going to wait because you think older must be safer, just to avoid a supply chain attack? Just secure the supply chain. Be critical about your dependencies and update strategy before updating. If you have some 200 transitive dependencies and you don't know everything in your build, that is the real problem, and you should probably look into it, because waiting is not going to protect you from that much attack surface.

In the age of AI, I reduced my reliance on small utility libraries and just keep the bigger ones. For those I follow semver, update to major versions when it makes sense, and always take small patches, but I still read the release notes for what changed.

zbowling commented on Claude Advanced Tool Use   anthropic.com/engineering... · Posted by u/lebovic
rfw300 · 4 months ago
I am extremely excited to use programmatic tool use. This has, to date, been the most frustrating aspect of MCP-style tools for me: if some analysis requires the LLM to first fetch data and then write code to analyze it, the LLM is forced to manually copy a representation of the data into its interpreter.

Programmatic tool use feels like the way it always should have worked, and where agents seem to be going more broadly: acting within sandboxed VMs with a mix of custom code and programmatic interfaces to external services. This is a clear improvement over the LangChain-style Rube Goldberg machines that we dealt with last year.

zbowling · 4 months ago
I built an MCP server that solves this, actually. It works like a tool-calling proxy that calls child servers, but instead of serving them up as direct tool calls, it exposes them as TypeScript definitions, asks your LLM to write code to invoke them all together, and then executes that TypeScript in a restricted VM to do the tool calling indirectly. If you have tools that pass data between each other, or you need some parsing or manipulation of output (say a tool call returns JSON), it's trivial to transform it. https://github.com/zbowling/mcpcodeserver
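To sketch the idea (hypothetical names and shapes, not the actual mcpcodeserver code), the proxy collapses each child tool definition into a compact TypeScript declaration the LLM can write code against:

```typescript
// Hypothetical sketch: turn simplified MCP-style tool definitions into
// compact TypeScript declarations for the LLM prompt. The real server
// introspects child MCP servers; this just shows the shape of the output.
interface ToolDef {
  name: string;
  description: string;
  params: Record<string, string>; // param name -> TS type, simplified
  returns: string;
}

function toDeclarations(tools: ToolDef[]): string {
  return tools
    .map((t) => {
      const params = Object.entries(t.params)
        .map(([k, v]) => `${k}: ${v}`)
        .join(", ");
      return `/** ${t.description} */\ndeclare function ${t.name}(${params}): Promise<${t.returns}>;`;
    })
    .join("\n\n");
}

const decls = toDeclarations([
  {
    name: "searchIssues",
    description: "Search the issue tracker",
    params: { query: "string", limit: "number" },
    returns: "{ id: number; title: string }[]",
  },
]);
console.log(decls);
```

A declaration like that costs far fewer tokens than a full JSON schema per tool, which is most of the win.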
zbowling commented on Claude Advanced Tool Use   anthropic.com/engineering... · Posted by u/lebovic
cube2222 · 4 months ago
Nice! Feature #2 here is basically an implementation of the “write code to call tools instead of calling them directly” that was a big topic of conversation recently.

It uses their Python sandbox, is available via API, and exposes the tool calls themselves as normal tool calls to the API client - should be really simple to use!

Batch tool calling has been a game-changer for the AI assistant we've built into our product recently, and this sounds like a further evolution of that, really (primarily about speed: if you can accomplish 2x more tool calls in one turn, your agent is usually now 2x faster).

zbowling · 4 months ago
I wrote a better version of this idea: https://github.com/zbowling/mcpcodeserver

It works as an MCP proxy of sorts that converts all the child MCP tools into TypeScript declarations, asks your LLM to generate TypeScript, then executes that code in a restricted VM to make the tool calls that way. It allows parallel processing, passing data between tools without coming back to the LLM for a full loop, etc. The agents are pretty good at debugging issues they create, too, and trying again.

zbowling commented on Claude Advanced Tool Use   anthropic.com/engineering... · Posted by u/lebovic
jmward01 · 4 months ago
The Programmatic Tool Calling has been an obvious next step for a while. It is clear we are heading towards code as a language for LLMs, so defining that language is very important. But I'm not convinced of tool search. Good context engineering leaves only the tools you will need in context, so adding a search when you are going to use all of them is just more overhead. What is needed is a more compact tool definition language like, I don't know, every programming language ever in how they define functions. We also need objects (which hopefully Programmatic Tool Calling solves, or the next version will). In the end I want to drop objects into context with exposed methods, so the model knows the type and what is callable on the type.
zbowling · 4 months ago
I built this specifically as an MCP server. It proxies to other MCP servers, converts the tool definitions into TypeScript declarations, and asks your LLM to generate TypeScript that runs in a restricted VM to make the tool calls that way. It's based on the Apple white paper on this topic from last year. https://github.com/zbowling/mcpcodeserver
zbowling commented on Tell HN: Azure outage    · Posted by u/tartieret
zbowling · 4 months ago
Alaska Airlines is redirecting folks to their slimmed-down international site, and you can't check in on mobile.
zbowling commented on Improving MCP tool call performance through LLM code generation   github.com/zbowling/mcpco... · Posted by u/zbowling
zbowling · 5 months ago
I hacked together a new MCP server this weekend that can significantly cut down the overhead of direct tool calling with LLMs inside different agents, especially when making multiple tool calls in a more complex workflow. It's inspired by the recent Cloudflare blog post on their Code Mode MCP server and the original Apple white paper, and it improves on the Cloudflare server in several ways: it doesn't rely on their backends to isolate the execution of the tool calling, it has generally better support for all the features in MCP, and it does significantly better interface generation and LLM tool hinting to save context-window tokens. This implementation can also scale to a lot more child servers more cleanly.

Most LLMs are naturally better at code generation than at tool calling: code understanding is foundational to their knowledge, while tool calling is pounded into models later during fine tuning. Passing data between tools through the LLM in these agent orchestrators can also burn an excessive number of tokens. But if you move the tool calling into code generated by the LLM, rather than having the LLM call tools directly, you get significantly better results for complex cases and reduce the overhead of passing data between tool calls.
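To make that concrete with a made-up example (tool names and data are invented, not from any real server): the glue code the LLM generates can fetch, filter, and forward data entirely inside the VM, so the intermediate data never round-trips through the model:

```typescript
// Hypothetical glue code an LLM might generate. `tools` stands in for
// whatever bridge the sandbox injects; these mocks simulate two child
// MCP tools.
type Issue = { id: number; title: string; open: boolean };

const tools = {
  listIssues: async (): Promise<Issue[]> => [
    { id: 1, title: "crash on start", open: true },
    { id: 2, title: "typo in docs", open: false },
  ],
  summarize: async (titles: string[]): Promise<string> =>
    `open issues: ${titles.join("; ")}`,
};

async function main(): Promise<string> {
  const issues = await tools.listIssues();
  // The filtering happens in code, costing zero model tokens; only the
  // final summary would ever need to reach the LLM.
  const openTitles = issues.filter((i) => i.open).map((i) => i.title);
  return tools.summarize(openTitles);
}

main().then(console.log); // prints "open issues: crash on start"
```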

This implementation basically works as an MCP server proxy. As an MCP server, it is also an MCP client to your child servers. In the middle it hosts a Node VM to execute code generated by the LLM, which makes the tool calls indirectly. By introspecting the child MCP servers and converting their tool-call interfaces into small, condensed TypeScript API declarations, your LLM can generate code that invokes these tools inside the provided Node VM instead of invoking them directly, and handle the complex response processing and errors in code. This can be really powerful when doing multiple tool calls in parallel or with logic around processing. And since it's a Node VM, the code has access to standard Node modules and the built-in standard library.
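In spirit, the execution side looks something like this (greatly simplified, with made-up names; note that Node's `vm` module by itself is not a real security boundary, so an actual server needs to layer more restrictions on top):

```typescript
import * as vm from "node:vm";

// Minimal sketch (not the real mcpcodeserver internals): run
// model-generated code in a fresh VM context that only sees the
// tool-calling bridge we inject.
function runGenerated(code: string): unknown {
  const sandbox = {
    // Synchronous mock bridge; a real proxy would forward async MCP calls
    // to the child servers.
    callTool: (name: string, args: Record<string, unknown>): unknown => {
      if (name === "add") return (args.a as number) + (args.b as number);
      throw new Error(`unknown tool: ${name}`);
    },
    result: undefined as unknown,
  };
  vm.runInNewContext(code, sandbox, { timeout: 1000 });
  return sandbox.result;
}

// The "LLM-generated" program chains two tool calls in plain code,
// passing the first result straight into the second call.
const out = runGenerated(
  "result = callTool('add', { a: callTool('add', { a: 1, b: 2 }), b: 3 })"
);
console.log(out); // 6
```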

One caveat: if your tool calls are actually simple, like a basic web search or a single tool call, this adds a bit of unnecessary overhead. But the more complex the prompt, the more this approach can significantly improve the quality of the output and lower your inference costs.

zbowling commented on All-New Next Gen of UniFi Storage   blog.ui.com/article/all-n... · Posted by u/ycombinete
InTheArena · 5 months ago
Just a few words of caution - this doesn't directly compete with Synology. It's literally just a NAS box. That said, it's a NAS box at an awesome price / performance / capability point _if_ and only _if_ you are already in the UniFi ecosystem.

I would say you are almost always better off buying this + a mini-PC than a Synology at this point, or a Ugreen NAS + TrueNAS if you want to do almost everything a Synology can do.

zbowling · 5 months ago
Synology doesn't even compete with Synology anymore, because all the new hardware requires locked-in Synology drives now.

It's creating a void that is getting filled by Ugreen, Minisforum, Beelink, and AOOSTAR with innovative platforms from China, and by classic competitors like QNAP, Asustor, TerraMaster, etc. innovating for small to mid-tier needs. 45Drives plays in the larger space, for folks who want to manage things more on their own but have enterprise-scale needs. Dell and HP have always competed on the high-end enterprise side and are also becoming a better option, even though Synology is so easy as an appliance.

zbowling commented on How I solved PyTorch's cross-platform nightmare   svana.name/2025/09/how-i-... · Posted by u/msvana
zbowling · 6 months ago
Check out Pixi! Pixi is an alternative to the common conda and PyPI frontends, with a better system for hardware feature detection: it gets you the best version of Torch for your hardware that is compatible across your packages (except for AMD at the moment). It can pull in the conda-forge or PyPI builds of PyTorch and helps you manage things automagically across platforms. https://pixi.sh/latest/python/pytorch/

It doesn't solve how you package your wheels specifically - that problem is still pushed onto your downstream users because of boneheaded packaging decisions by PyTorch themselves - but as a consumer, Pixi softens the blow. The conda-forge builds of PyTorch are also a bit more sane.
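For a rough idea of what that looks like, a minimal `pixi.toml` along the lines of the linked docs might be something like this (the exact keys, channel, and package names are assumptions; check the Pixi PyTorch guide for your platform):

```toml
[project]
name = "torch-example"
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64", "win-64"]

# Tells Pixi which CUDA level the machine provides, so it can pick a
# matching GPU build of PyTorch where one exists.
[system-requirements]
cuda = "12"

[dependencies]
python = "3.12.*"
pytorch = "*"
```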

u/zbowling

Karma: 2353
Cake day: September 22, 2010
About
Engineer at Meta. Formerly Modular AI (modular.com), Google (Fuchsia), and Apportable (YC W2011, acquired by Google).