Readit News
acro-v commented on Ask HN: What Are You Working On? (December 2025)    · Posted by u/david927
acro-v · a day ago
I’ve been working on a terminal-native AI coding tool called Aye Chat.

The idea is to remove the copy/paste/review loop entirely. Instead of asking an AI for code and then manually approving and applying it, the tool writes directly to files in your folder and automatically snapshots everything so you can diff or instantly undo if it gets something wrong.
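To make that concrete, here's a rough sketch of the snapshot-before-write idea, not our actual implementation; the snapshot location and naming are illustrative:

```python
import shutil
import time
from pathlib import Path

SNAPSHOT_ROOT = Path(".ayechat_snapshots")  # hypothetical snapshot location

def write_with_snapshot(path: str, new_content: str) -> None:
    """Copy the current file into a timestamped snapshot dir, then overwrite it."""
    src = Path(path)
    snap_dir = SNAPSHOT_ROOT / time.strftime("%Y%m%d-%H%M%S")
    snap_dir.mkdir(parents=True, exist_ok=True)
    if src.exists():
        shutil.copy2(src, snap_dir / src.name)  # preserve the pre-edit version
    src.write_text(new_content)  # apply the AI's edit directly to disk
```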

It lives entirely in the terminal, so you can prompt the AI, run tests, open vim, refactor, restore changes, all in one flow. The bet is that with current models, the main bottleneck is the human, not the LLM.

It's open source and still very early, but we already have a steady cohort of users, as the flow is sticky after the "aha" moment. The repo is here if anyone's curious; give it a star if you like the idea: https://github.com/acrotron/aye-chat

Happy to answer questions or hear skepticism :)

acro-v commented on Ask HN: How Do you undo or checkout changes from Codex CLI and others?    · Posted by u/elpakal
acro-v · 22 days ago
I actually had the same pain point, so to address it I built a code generator specifically around the "undo" functionality: it takes snapshots before making any updates, and then you can restore right from the tool with a single command, no manual git invocation needed. Literally just a single "restore" command:

https://github.com/acrotron/aye-chat
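For flavor, under such a snapshot scheme the restore path is conceptually just this (a rough sketch, not the tool's actual code):

```python
import shutil
from pathlib import Path

SNAPSHOT_ROOT = Path(".ayechat_snapshots")  # hypothetical snapshot location

def restore(filename: str) -> None:
    """Copy the most recent snapshot of `filename` back over the working copy."""
    snapshots = sorted(SNAPSHOT_ROOT.glob(f"*/{filename}"))
    if not snapshots:
        raise FileNotFoundError(f"no snapshot found for {filename}")
    shutil.copy2(snapshots[-1], filename)  # timestamped dirs sort oldest-to-newest
```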

acro-v commented on Can application layer improve local model output quality?    · Posted by u/acro-v
acro-v · 23 days ago
Someone pointed me to this post from a Cline engineer; below is my response to it.

Post: https://cline.bot/blog/why-cline-doesnt-index-your-codebase-...

That post, however, does not apply to the offline-processing use case. Here are the three main problems they're trying to solve:

1. Code Doesn't Think in Chunks

But he then describes following semantic links through imports, etc. That technique is still hierarchical chunking, and I am planning to implement it as well: it's straightforward.
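For example, the import-following step is a small AST walk. This is purely illustrative, assuming Python sources:

```python
import ast
from pathlib import Path

def imported_modules(path: str) -> set[str]:
    """Collect the top-level module names a Python file imports."""
    tree = ast.parse(Path(path).read_text())
    modules: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules
```

Chunks for a file can then be linked to chunks for the modules it imports, which is exactly the hierarchical part.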

2. Indexes Decay While Code Evolves

This is just not true; there are multiple ways to solve it. One, for example, is continuous indexing at low priority in the background. Another is monitoring for file changes and reindexing only the differences. I have already implemented a first iteration of this: the index remains current.
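The mtime-based version of that change detection is only a few lines; a sketch, and not necessarily how Aye Chat does it:

```python
import os

def changed_files(paths: list[str], last_indexed: dict[str, float]) -> list[str]:
    """Return files modified since we last indexed them, updating the bookkeeping."""
    stale = []
    for path in paths:
        mtime = os.path.getmtime(path)
        if mtime > last_indexed.get(path, 0.0):
            stale.append(path)
            last_indexed[path] = mtime  # so the next pass reindexes only new edits
    return stale
```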

3. Security Becomes a Liability (he then goes into embeddings having to be stored somewhere)

We are talking about an offline mode of operation, so this is not a liability for Aye Chat: it implements the embedding store locally, with ChromaDB and the ONNXMiniLM_L6_V2 model.
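For reference, a fully local setup along those lines looks roughly like this; ChromaDB's default embedding function runs the ONNXMiniLM_L6_V2 model on-device, and the path and collection name here are illustrative:

```python
import chromadb

# Persistent, fully local vector store; with no embedding function specified,
# ChromaDB's default (ONNXMiniLM_L6_V2) embeds documents on-device.
client = chromadb.PersistentClient(path=".ayechat_index")
collection = client.get_or_create_collection("code_chunks")

collection.add(ids=["utils.py:0"], documents=["def slugify(text): ..."])
hits = collection.query(query_texts=["function that normalizes strings"], n_results=3)
```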

So, as you can see, none of his premises apply here.

And then, as part of the solution, he claims that "context window does not matter because Claude and ChatGPT models are now into 1M context windows", but once again that does not apply to locally hosted models: I am getting a 32K context with Qwen 2.5 Coder 7B on my non-optimized setup with 8 GB of VRAM.

The main reason I think it may work is the following: answering a question involves "planning what to do" and then "doing it". Models are good at "doing it" if they are given all the necessary info, so if we offload that "planning" into the application itself, I think it may work.
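As a sketch of that split, with `llm` and `index` as stand-ins for a local model client and the local embedding store:

```python
def answer(question: str, llm, index) -> str:
    """The application handles the 'planning' (context selection); the model 'does'."""
    # Planning, done by the app: pick the relevant chunks deterministically.
    chunks = index.query(question, n_results=8)
    context = "\n\n".join(chunks)
    # Doing, done by the model: it gets everything it needs up front.
    return llm(f"Context:\n{context}\n\nTask: {question}")
```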

acro-v commented on Optimistic UI for AI coding: writing to disk with snapshot undo   blog.ayechat.ai/blog/2025... · Posted by u/acro-v
acro-v · a month ago
(OP here) I decided to let the AI coder write directly to the file system without human approval, to accelerate the process, but built a Python-based snapshot mechanism to make it safe: changes can always be reverted. Happy to answer any questions about the diff/restore implementation.
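On the diff side, a unified diff between a snapshot and the working copy falls out of the standard library; a sketch, not the actual implementation:

```python
import difflib
from pathlib import Path

def diff_against_snapshot(snapshot_path: str, current_path: str) -> str:
    """Unified diff of the pre-edit snapshot vs. what the AI wrote to disk."""
    old = Path(snapshot_path).read_text().splitlines(keepends=True)
    new = Path(current_path).read_text().splitlines(keepends=True)
    return "".join(
        difflib.unified_diff(old, new, fromfile=snapshot_path, tofile=current_path)
    )
```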
acro-v commented on Prompt generation vs. Context generation [video]   youtube.com/watch?v=IS_y4... · Posted by u/acro-v
acro-v · a month ago
Just the other day I was asking how long it will be until we stop looking at AI-generated code, and most replies put it 5+ years out. I think most of us, me included, apparently still think in terms of prompt generation: getting that single prompt to the LLM perfect enough to do the job.

In the meantime, with the rise of agents, that future is apparently already here, and some have already harnessed that power. See the video: the author explains his process and claims that he has not looked at the code for 6+ weeks and now works only with the MD specs.

When preparing recipes for agents, they build the entire workflow around managing context and make it three-fold: "research, plan, implement". He then goes into detail on how they avoid slop and keep developers mentally aligned so they can keep up with the changes.
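Reduced to pseudocode, that three-fold recipe amounts to roughly the following, with `agent` as a stand-in for whatever agent runner is used (my sketch of the idea from the video, not his code):

```python
def run_task(task: str, agent) -> str:
    """Three-phase, context-managed workflow: research, plan, implement."""
    research = agent(f"Research the codebase for: {task}. Write up your findings.")
    plan = agent(f"Given these findings:\n{research}\n\nWrite a step-by-step plan.")
    return agent(f"Implement this plan exactly:\n{plan}")
```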

Compared to that, I now realize that what I originally built (an AI helper for the terminal: https://github.com/acrotron/aye-chat) is on the "naive" side: prompt until you either get it right from the AI or give up and start from the beginning. With this context-generation approach explained, I think I will start moving to an agent-based implementation while keeping the control plane in the terminal: the current implementation works on smaller code bases, but with this approach it should be able to cover larger ones as well.

With this tech developing so fast, I think it's just a matter of keeping up with the news and being aware of what's being done successfully, and unfortunately that's not always easy to do. This one specifically, letting go of code reviews and learning to work with spec files only, will require a mentality change, and it will be a psychological barrier to overcome.

acro-v commented on Show HN: Aye Chat: AI-First development in your terminal   github.com/acrotron/aye-c... · Posted by u/acro-v
acro-v · 2 months ago
(Aye Chat developer here). We’re looking for feedback on usability, feature set, and, of course, bugs. Join the conversation on *Hacker News*, on our GitHub issues page, and on our Discord server. All links are in the README. Thank you!
