Readit News
zekejohn commented on Ask HN: Is it worth learning Vim in 2026?    · Posted by u/zekejohn
colesantiago · 22 days ago
What happens if:

1. The LLMs are down, and you're on call and you need to fix a bug immediately (no mistakes)

2. You're working over serial (The LLMs aren't there to help you and only vi and emacs are available)

3. You're working on an old computer for some esoteric reason.

4. You're going into an interview and they (temporarily) forbid you from using an LLM to check your knowledge of these tools (as well as in programming tests)

If you cannot use these editors without an LLM (Vim's navigation keys 'hjkl', G/g and so forth have been adopted by many such tools), then it isn't a good look.

You don't have to 100% master them, but knowledge of them will help when the LLMs have an outage, and there WILL be outages.

Also be careful not to rely on these LLMs too much, otherwise your programming skills will atrophy. [1]

So the answer is YES, learn Vim, not to boost your ego, but to make it muscle memory so your skills won't atrophy.

[1] https://www.infoworld.com/article/4125231/ai-use-may-speed-c...

zekejohn · 22 days ago
ya i do definitely agree that learning Vim is gonna help my overall understanding of how things work at a deeper level, and also fight back a lot of the “learned helplessness” that i developed when coding w/ AI. to your point, another thing i was thinking is that yes, short term (maybe the first few months?) i wouldn’t see any benefit… but it would definitely help in the long term, and my coding ability wouldn't be directly tied to whatever the latest model is capable of
zekejohn commented on Show HN: An Open-source React UI library for ASCII animations   github.com/zeke-john/rune... · Posted by u/zekejohn
lewisnewman · 25 days ago
This is awesome! I remember seeing this on the AI Tinkerers' discord server. Planning to use it in my next project so everything is text based
zekejohn · 25 days ago
thank you! lmk if you need any help getting it set up :)
zekejohn commented on Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete   huggingface.co/sweepai/sw... · Posted by u/williamzeng0
zekejohn · a month ago
Nice, could this be used to auto complete terminal/cli commands?
zekejohn commented on ChatGPT Containers can now run bash, pip/npm install packages and download files   simonwillison.net/2026/Ja... · Posted by u/simonw
tgq2915 · 2 months ago
[flagged]
zekejohn · a month ago
if it's in a secure and completely isolated sandbox that gets destroyed at the end of the request, then how could it be “insecure”
zekejohn commented on Show HN: An LLM response cache that's aware of dynamic data   blog.butter.dev/on-automa... · Posted by u/raymondtana
zekejohn · 2 months ago
Interesting! Definitely gonna give it a shot
zekejohn commented on Making Tool Calling 75% More Efficient via Code   github.com/zeke-john/code... · Posted by u/zekejohn
l1am0 · 2 months ago
This is basically what you learn in the huggingface smolagents course (months ago)...

They call it CodeAct

https://huggingface.co/learn/agents-course/en/unit2/smolagen...

zekejohn · 2 months ago
Interesting! First time im seeing this course, thanks for the link. From a high level it’s definitely in the same code-first agents family then. After reading about smolagents for a bit, i think the main things Codecall adds are TypeScript + generated SDKs, progressive tool discovery (readFile + executeCode instead of exposing every tool directly), and the single-script, sandboxed-execution-first flow w/ learned constraints, rather than the "multi‑step ReAct loop" that smolagents prioritizes (like in the link below), which is a bit more like traditional tool calling w/ code ->

https://huggingface.co/blog/smolagents

zekejohn commented on Making Tool Calling 75% More Efficient via Code   github.com/zeke-john/code... · Posted by u/zekejohn
zekejohn · 2 months ago
Traditional AI agents have EVERY tool loaded into context from the start and call tools one at a time, each requiring a full inference round trip. For example, "delete all completed tasks" means: call findTasks, wait, call deleteTask for task 1, wait, call for task 2... each call resends the entire conversation history, so tokens compound fast and a lot of tokens and inference are wasted.

Codecall is an open source approach that lets agents write and execute TypeScript code in a secure Deno sandbox to orchestrate multiple tools programmatically, like calling an API (which is really all a tool is!)

So instead of 20+ inference passes and 90k+ tokens, the agent can just write and execute:

    const tasks = await tools.todoist.findTasks({ completed: true });
    for (const task of tasks) {
      await tools.todoist.deleteTask({ id: task.id });
    }

2 inference passes. The code runs in a Deno sandbox, executes all operations programmatically, and returns a result. In our demo, for one example, this reduced tokens by 74.7% and tool calls by 92.3% while being much faster as well.
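
To make the snippet above concrete, here's a runnable sketch with a hypothetical mock `tools.todoist` object standing in for the generated SDK (the names mirror the example; they're illustrative, not Codecall's actual runtime, which would call the real API/MCP server):

```typescript
// Mock tasks "database" the fake SDK operates on.
type Task = { id: number; completed: boolean };

const db: Task[] = [
  { id: 1, completed: true },
  { id: 2, completed: false },
  { id: 3, completed: true },
];

// Hypothetical stand-in for the generated Todoist SDK.
const tools = {
  todoist: {
    // one call returns every task matching the filter
    async findTasks(q: { completed: boolean }): Promise<Task[]> {
      return db.filter((t) => t.completed === q.completed);
    },
    // delete a single task by id
    async deleteTask(q: { id: number }): Promise<void> {
      const i = db.findIndex((t) => t.id === q.id);
      if (i !== -1) db.splice(i, 1);
    },
  },
};

// The agent-written script: many tool calls, one sandboxed execution.
async function main() {
  const tasks = await tools.todoist.findTasks({ completed: true });
  for (const task of tasks) {
    await tools.todoist.deleteTask({ id: task.id });
  }
  console.log(db.length); // prints 1 (only the incomplete task survives)
}
main();
```

The point is that the loop lives inside one execution, so the model never sees the intermediate results; only the final outcome goes back through inference.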

How it works (high level) ->

1. There are only 2 tools (readFile, executeCode) + a file tree. The agent reads SDK files on demand, so a 30 tool setup is effectively the same as a 5 tool setup (only the file tree gets bigger)

2. Multiple tool calls happen in one execution, not N inference calls for N operations... because the agent can write code to execute and orchestrate multiple tools (like APIs), this significantly reduces the number of passes + tokens per request

3. Models have a 10-50% failure rate searching through large datasets in context. Code like users.filter(u => u.role === "admin") is deterministic and avoids those failures, so not only is it more efficient & cheaper, it's also often much more accurate when doing operations on lots of data!
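
The file-tree idea in point 1 can be sketched like this (the file names and the in-memory map are made up for illustration; this is not Codecall's actual API):

```typescript
// Hypothetical in-memory stand-in for the generated SDK directory.
const sdkFiles: Record<string, string> = {
  "sdk/todoist.ts": "export async function findTasks(/* … */) {}",
  "sdk/github.ts": "export async function createIssue(/* … */) {}",
};

// The only thing that grows with the number of tools is this listing;
// tool schemas stay out of context until the agent asks for them.
const fileTree = Object.keys(sdkFiles);

// Tool 1 of 2: read one SDK file on demand.
function readFile(path: string): string {
  const src = sdkFiles[path];
  if (src === undefined) throw new Error(`no such file: ${path}`);
  return src;
}

console.log(fileTree); // the agent sees only this list up front
console.log(readFile("sdk/todoist.ts")); // ...then pulls in types as needed
```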

We also generate TypeScript SDK files from MCP tool definitions, so the agent sees clean types and function signatures. It also learns from errors: when a tool call fails, it updates the SDK file with learned constraints so future agents avoid the same mistake.
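
A rough sketch of what that generation step could look like, using the standard MCP tool-definition fields (name, description, inputSchema); the generator itself is invented for illustration and is not Codecall's actual implementation:

```typescript
// Minimal shape of an MCP tool definition (real definitions carry a
// full JSON Schema; only the pieces used here are modeled).
type McpTool = {
  name: string;
  description: string;
  inputSchema: { properties: Record<string, { type: string }> };
};

// Map JSON Schema primitive types to TypeScript types.
function jsonTypeToTs(t: string): string {
  const map: Record<string, string> = {
    string: "string",
    number: "number",
    boolean: "boolean",
  };
  return map[t] ?? "unknown";
}

// Hypothetical generator: emit a typed stub the agent can read as source.
function generateSdkStub(tool: McpTool): string {
  const params = Object.entries(tool.inputSchema.properties)
    .map(([key, prop]) => `${key}: ${jsonTypeToTs(prop.type)}`)
    .join("; ");
  return [
    `/** ${tool.description} */`,
    `export async function ${tool.name}(args: { ${params} }): Promise<unknown> { /* calls the MCP server */ }`,
  ].join("\n");
}

console.log(
  generateSdkStub({
    name: "deleteTask",
    description: "Delete a Todoist task by id",
    inputSchema: { properties: { id: { type: "number" } } },
  }),
);
```

Because the output is ordinary TypeScript source, the agent can read it with readFile like any other file, which is what makes the progressive discovery in point 1 work.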

Codecall works with any MCP server (stdio/http). Would love feedback from anyone interested in or building more complex agents :)

u/zekejohn

Karma: 9 · Cake day: December 31, 2025
About
@zekejawn on twitter :)