sirwhinesalot · 3 months ago
Before I read the article I thought this meant programming with "async".

Just call it Agent-based programming or somesuch, otherwise it's really confusing!

ankrgyl · 3 months ago
(Author here) Haha that is a great point. I was trying to come up with a term that described my personal workflow and specifically felt different than vibe coding (because it's geared towards how professional programmers can use agents). Very open to alternative terms!
brothrock · 3 months ago
This type of coding has been extremely helpful to me in the past few weeks. I’m on parental leave, but also a co-owner of a small company and can’t completely log off.

I can spec out changes one-handed, the AI does its thing, and then I review and refine whenever my kid is asleep for 20 minutes. Or if I'm super tired, I can explain changes in horrible English and still get results. At the same time, I am following the source control and code review process that I've used on large teams. I've even been leaving comments on PRs where the AI contributes, even though I'm the only dev in the codebase.

I wouldn't call this vibe coding; however, vibe coding could be a subset of this type of work. I think "async coding" is a good description, but a bad name because of what it already means as a software concept (which is mentioned). Maybe AI-delegation?

theknarf · 3 months ago
There is already a term for it! It's called "Ralph coding": https://ghuntley.com/ralph/
didibus · 3 months ago
I want to understand the distinction you're making against vibe coding.

In vibe coding, the developer specifies only functional requirements (what the software must do) and non-functional requirements (the qualities it must have, like performance, scalability, or security). The AI delivers a complete implementation, and the developer reviews it solely against those behaviors and qualities. Any corrections are given again only in terms of requirements, never code, and the cycle repeats until the software aligns.

But you're trying to coin a term for the following?

In ??? coding, the developer specifies code changes that must be made, such as adding a feature, modifying an existing function, or removing unused logic. The AI delivers the complete set of changes to the codebase, and the developer reviews it at the code level. Any corrections are given again as updates to the code, and the cycle repeats until the code aligns.

Did I understand it right?

If so, I've most often seen the latter called AI pair-programming or AI-assisted coding. And I'd agree with the other commenters: please DO NOT call it async programming (even if you qualify it as "async AI", it's too confusing).

SCUSKU · 3 months ago
Same! I was hoping this would have some insights into pitfalls or the like with javascript promises or python async, but alas no such luck.
drob518 · 3 months ago
Exactly. I think the traditional meaning of “asynchronous programming” was coined first. So, let’s stick with that.
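(To illustrate the term clash these commenters are pointing at: the traditional "asynchronous programming" is the language-level concurrency feature, as in Python's asyncio. A minimal sketch, purely for reference; the delays are stand-ins for real I/O.)

```python
# Traditional "asynchronous programming": cooperative concurrency,
# where awaiting an I/O-bound task lets other tasks run meanwhile.
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for a network call
    return f"{name} done"

async def main() -> list[str]:
    # Both "requests" run concurrently: total wall time is ~0.2s, not 0.3s.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.2))

print(asyncio.run(main()))  # → ['a done', 'b done']
```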
grandiego · 3 months ago
Same here. I read the author's braintrust.dev as "brain - Rust - dev", so I was expecting a discussion of Rust async development.
datadrivenangel · 3 months ago
I did this early in my career as a product owner with an offshore team in India... Write feedback/specs, send them over at end of day US time. Have a fresh build ready for review by start of business.

Worked amazingly when it worked. Really stretched things out when the devs misunderstood us or got confused by our lack of clarity and we had to find time for a call... Also eventually there got to be some gnarly technical debt and things really slowed down.

mcny · 3 months ago
I think it can only work if the product owner literally owns the product, as in has FULL decision-making power over what goes in or doesn't. It doesn't work when the product manager is a glorified in-between guy, relaying the wishes of the CEO through a game of telephone from management.
jt2190 · 3 months ago
You’ll have to be more specific about what you mean by “product owner”, because that’s a very nebulous job title. For example, how technical is this product owner? Are they assumed to “just know” that they’re asking for an overly complex, expensive technical solution?
ch4s3 · 3 months ago
> it can only work if the product owner literally owns the product as in has FULL decision making power

This seems like a fairly rare situation in my experience.

datadrivenangel · 3 months ago
Agreed. A glorified go between person is rarely going to succeed at delivering something good.
jmull · 3 months ago
This vision of AI programming is DOA.

The first step is "define the problem clearly".

This would be incredibly useful for software development, period. A 10x factor, all by itself. Yet it happens infrequently, or, at best, in significantly limited ways.

The main problem, I think, is that it assumes you already know what you want at the start, and, implicitly, that what you want actually makes some real sense.

I guess maybe the context is cranking out REST endpoints or some other constrained detail of a larger thing. Then, sure.

thefourthchime · 3 months ago
I disagree with being detailed; many times I want the AI to think of things itself, and half the time it comes up with something I wouldn't have thought of that I like.

The thing I would add: retry the prompt instead of telling it to fix a mistake. Rewind and change the prompt to tell it not to do what it did.

athrowaway3z · 3 months ago
I agree there is a lot of value in having it do what it considers the obvious thing.

It is almost by definition what the average programmer would expect to find, so it's valuable as such.

But the moment you want to do something original, you need to keep high-level high-quality documentation somewhere.

dec0dedab0de · 3 months ago
Figuring out what you want is the hard part about programming. I think that's where AI augmentation will really shine, because it lowers the time between iterations and experiments.

That said, this article is basically describing being a product owner.

ankrgyl · 3 months ago
(Author here) I can certainly appreciate having an alternate perspective, but I think it's unfair to say it's DOA. I've personally used this workflow for the last 6 months and shipped a lot of features into our product, including the lowest levels of infra all the way to UI code. I definitely think there is a lot to improve. But it works, at least for me :)
Graphon1 · 3 months ago
> is that it assumes you already know what you want at the start, and, implicitly, that what you want actually makes some real sense.

My experience is different. I find that AI-powered coding agents drop the barriers to experimentation drastically, so that... yes, if I don't know what I want, I can go try things very easily and learn. Exploration just got soooo much cheaper. Now, that may be a different interaction than what is described in this blog post; the exploration may be a precursor to what is happening in it. But once I'm done exploring, I can define the problem and ask for solutions.

If it's DOA, you'd better tell everyone who is currently doing this that they're not really doing it.

lelanthran · 3 months ago
This works until you get to the point that your actual programming skills atrophy due to lack of use.

Face it, the only reason you can do a decent review is because of years of hard won lessons, not because you have years of reading code without writing any.

sevensor · 3 months ago
What the article describes is:

1. Learn how to describe what you want in an unambiguous dialect of natural language.

2. Submit it to a program that takes a long time to transform that input into a computer language.

3. Review the output for errors.

Sounds like we’ve reinvented compilers. Except they’re really bad and they take forever. Most people don’t have to review the assembly language / bytecode output of their compilers, because we expect them to actually work.

ako · 3 months ago
No, it sounds like the work of a product manager, you’re just working with agents rather than with developers.
MisterTea · 3 months ago
Coding interview of the future: "Show us how you would prompt this binary sort."
brothrock · 3 months ago
I think this is better than many current coding interview methods, assuming you have an agent set up not to give the interviewee the answer directly.

Of course there are times when you need someone extremely skilled in a particular language. But from my experience, I would MUCH prefer to see how someone builds out a problem in natural language and have some guarantee of its success. I've been in too many interviews where candidates trip over syntax, pick the wrong language, or are just not good at memorization and don't want to look dumb looking things up. I usually prefer pair-programming interviews where I tailor my assistance to the expectations of the position. AI can essentially do that for us.

joenot443 · 3 months ago
My understanding is it's already here [1]

[1] https://news.ycombinator.com/item?id=44723289

Graphon1 · 3 months ago
not a joke.

Also, the future you are referring to is... like... 6 weeks from now.

lenerdenator · 3 months ago
Agreed.

> Hand it off. Delegate the implementation to an AI agent, a teammate, or even your future self with comprehensive notes.

The AI agent just feels like a way to create tech debt on a massive scale while not being able to identify it as tech debt.

CuriouslyC · 3 months ago
I have a static analysis and refactoring tool that does wonders to identify duplication and poor architecture patterns and provide a roadmap for agents to fix the issues. It's like magic, just point it at your codebase then tell the agent to grind away at the output (making sure to come up for air and rerun tests regularly) and it'll go for hours.
segfaultex · 3 months ago
This is what a lot of business leaders miss.

The benefit you might gain from LLMs hinges on being able to discern good output from bad.

Once that's lost, the output of these tools becomes a complete gamble.

CuriouslyC · 3 months ago
You're right, reviews aren't the way forward. We don't do code reviews on compiler output (unless you're writing a compiler). The way forward is strong static and analytic guardrails plus stochastic error correction: multiple solutions proposed with an LLM as judge before implementation, and multiple code-review agents with different personas prompted to be strict/adversarial without nit-picking, backed by robust test suites that have themselves been through multiple passes of audits and red-teaming by agents. You should rarely have to look at the code; doing so should be a significant escalation event, like needing to coordinate with Apple over an Xcode bug.
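The "multiple solutions with an LLM as judge" step can be sketched generically. This is a hypothetical illustration of the control flow only: `generate()` and `judge()` are stand-ins for real model calls (stubbed here with placeholder logic), not any particular agent API.

```python
import random

def generate(task: str, n: int) -> list[str]:
    # Stand-in for n independent completions from a coding model.
    return [f"candidate {i} for: {task}" for i in range(n)]

def judge(task: str, candidates: list[str]) -> str:
    # Stand-in for a separate LLM call that scores each candidate
    # against the spec (plus static-analysis and test-suite signals).
    scores = {c: random.random() for c in candidates}
    return max(scores, key=scores.get)

def best_of_n(task: str, n: int = 3) -> str:
    # Propose several solutions, then let the judge pick one
    # before anything lands in the codebase.
    return judge(task, generate(task, n))

print(best_of_n("add pagination to /users"))
```

In a real setup, the judge would be a different model or persona than the generator, per the adversarial-review point above.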
JackSlateur · 3 months ago
Static and analytic guardrails ??

Unless you are writing some shitty code for a random product that will be used for one demo and then trashed, code comes down to a simple thing:

  Code is a way to move ideas into the real world through a keyboard
So, reading that the future is using a random machine with averaged output (by design), and that this output of average quality will be good enough because the same random machine will generate tests of the same quality: this is ridiculous.

Tests are probably the one thing you should never build randomly; you should put a lot of thought into them: do they make sense? Does your code make sense? With tests, you are forced to use your own code, sometimes as your users will.

Writing tests is a good way to force yourself to be empathic with your users

People who are coding through AI are the equivalent of the pre-2015-era system administrators who renewed TLS certificates manually. They are people who can be (and are being) replaced by bash scripts. I don't miss them and I won't miss this new kind.

lelanthran · 3 months ago
> You should rarely have to look at the code, it should be a significant escalation event

This is the bit I am having problems with: if you are rarely looking at the code, you will never have the skills to actually debug that significant escalation event.

dingnuts · 3 months ago
good fucking luck writing adequate test suites for qualitative business logic

if it's even possible it will be more work than writing the code manually

gobdovan · 3 months ago
For generative skills I agree, but for me the real change is in how I read and debug code. After reading so much AI-generated code with subtle mistakes, I can spot errors much quicker even in human-written code. And when I can't, that usually means the code needs a refactor.

I'd compare it to gym work: some exercises work best until they don't, and then you switch to a less effective exercise to get you out of your plateau. Same with code and AI. If you're already good (because of years of hard won lessons), it can push you that extra bit.

But yeah, default to the better exercise and just code yourself, at least on the project's core.

suddenlybananas · 3 months ago
What do you mean you can spot errors much quicker?
ankrgyl · 3 months ago
(Author here) Personally, I try to combat this by synchronously working on 1 task and asynchronously working on others. I am not sure it's perfect, but it definitely helps me avoid atrophy.
iman453 · 3 months ago
By synchronously working on 1 do you mean coding it with minimal AI?

Nice article by the way. I've found my workflow to be pretty much exactly the same using Claude code.

NooneAtAll3 · 3 months ago
so... normal team lead -> manager pipeline?
ge96 · 3 months ago
Idk if I'm a luddite or what

I actually like writing code. It does get tedious, I get that, when you're making yet another component. I don't feel joy when you just will a bunch of code into existence with words; typing it out is like actively participating in development. Which, yeah, people use libraries/frameworks/boilerplate anyway.

My dream is to not be employed in software and do it for fun (or work on something I actually care about)

Even if I wrote some piece of crap, it is my piece of crap

krapp · 3 months ago
Some people will probably call you a luddite, but don't listen to them. There's nothing wrong with taking joy in the craft, with learning and exploring and creating. That's what hacker culture used to be about.

Unfortunately, you won't be able to get a job in software with anything but AI skills, since humans no longer write software in the industry. People will look at you the way they used to look at anyone who wrote their own HTML or Javascript without frameworks and Typescript, like you must drive your car to work with your feet.

ge96 · 3 months ago
It was funny: I was handed this project to work on and I skimmed the README. There were a lot of readmes in the code covering basic stuff like how to use pipenv... at first I was like "nice job with the docs", but then I realized it was a vibe-coded project and I felt like I'd been owned. It's funny.

Also funny how much time was wasted since it had random code in it that was never removed (non-working old code alongside the current working code). That's not the AI's fault, but yeah.

I have a job now in the industry it's funny I work with AI eg. AWS Bedrock/Knowledgebases/Agents... RAG/LLM AI.

The AI I want to work with is vision/ML (robotics) but don't have the background for that (I do it as a hobby instead).

I'm feeling the effect of vibe coding now: the 2nd leader on our team was only recently a developer but uses ChatGPT/Windsurf to code for him, which enables him to work on random topics like OpenSearch one day, Airflow the next... idk, I get that I'm the one being left behind by not doing it too, but I also want to really learn/understand something. You can do that with an AI-assisted thing, but yeah... idk, I don't want to, that's what I'm saying. I will get out eventually once I've saved enough money.

My learning process for a while has been watching YT crash courses/reading the docs/finding articles...

The project I mentioned above there was literally a prompt in the repo "Write me an event-driven app with this architecture..."

The 2nd leader I mentioned above is a code at work/not at home type of person which is fine but yeah. I'm not that person, I like to actually code/make stuff outside of work. It's not just about getting a task done/shipping some code for me. But I guess that's what a business is, churn out something.

Idk, there's some validity there, isn't there... "I've been a developer for 10 years, then a guy with 2 years comes in vibe coding stuff" and becomes the leader. But I'm past it; I don't do office politics anymore. I've got a six-fig job, no need to climb, I'm coasting. Debt is really the only problem I have.

UncleOxidant · 3 months ago
That's not the kind of async programming I was expecting.
stahorn · 3 months ago
I was ready for a deep-dive into things like asyncio in python; where it came from and what problems it promised to solve!
dboreham · 3 months ago
Those of us who worked in hardware, or are old programmers, will find this familiar. Chip/board routing jobs that took days to complete. Product build/test jobs that took hours to run.

See also that movie with Johnny Depp where AI takes over the world.

ankrgyl · 3 months ago
(Author here)

Hi everyone, thanks for the spirited debate! I think there are some great points in the discussion so far. Some thoughts:

* "This didn't work for offshoring, why will it work all of a sudden?" I think there are good lessons to draw from offshoring around problem definition and what-not but the key difference is the iteration speed. Agents allow you to review stuff much faster, and you can look at smaller pieces of incremental work.

* "I thought this would be about async primitives in python, etc" Whoops sorry, I can understand how the name is confusing/ambiguous! The use of "async" here refers to the fact that I'm not synchronously looking at an IDE while writing code all the time.

* "You can only do this because you used to handwrite code". I don't think this workflow is a replacement for handwriting code. I still love doing that. This workflow just helps me do more.

lelanthran · 3 months ago
I think this does not bode well for you; be honest with yourself - if you're making simple mistakes like using the term "Async Programming" to refer to something new, your prompting and/or code reviewing is probably not going all that well.

Sure, it can look good now, when there's no legacy, but if you ever move into having to maintain that code you're going to be in a tough spot.

datadrivenangel · 3 months ago
I do think that AI will work well compared to the low end of offshoring, where to get good results you need people who could do the work themselves tightly involved. AI will give you slop code faster and cheaper, and that is sometimes enough.

The question is how it compares to the medium level of offshoring. Near term I think that at comparable cost ($100s of dollars per week), it'll give faster results at an acceptable tradeoff in quality for most uses. I don't think most companies want to spend thousands of dollars a month on developer tools per developer though... even though they often do.

ankrgyl · 3 months ago
It's just a different workflow IMO. AI is effectively real-time, whereas offshoring, no matter the quality, is something you have to do in batches.