Readit News
hetspookjee · 3 days ago
Over the last 2 weeks (evenings only) I've spent a lot of time crafting the "perfect prompt" for Claude Code to one-shot the project. I ended up with a rather small CLAUDE.md file that references 8 other MD files, including project_architecture, models_spec, build_sequence, test_hierarchy, test_scenarios, and some others.
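For readers who haven't seen this pattern: a CLAUDE.md that mostly points at other planning docs can stay very small. A minimal sketch (the file names are the ones mentioned above; the `@path` import syntax is Claude Code's, but the exact layout and wording here are illustrative):

```markdown
# CLAUDE.md
Before writing any code, read the planning docs:
- @docs/project_architecture.md
- @docs/models_spec.md
- @docs/build_sequence.md
- @docs/test_hierarchy.md
- @docs/test_scenarios.md

Follow build_sequence.md step by step; never skip ahead of failing tests.
```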

It is a project for model based governance of Databricks Unity Catalog, with which I do have quite a bit of experience, but none of the tooling feels flexible enough.

Eventually I ended up with 3 different subagents that supported in the development of the actual planning files; a Databricks expert, a Pydantic expert, and a prompt expert.

With their aid, the improvement to the markdown files was rather significant: they caught everything from outdated Pydantic versions and inconsistencies to some misconceptions I had about Unity Catalog.

Yesterday eve I gave it a run and it ran for about 2 hours with me only approving some tool usage, and after that most of the tools + tests were done.

This approach is so different from how I used to do it, but I really do see a future in detailed technical writing and ensuring we're all on the same page. In a way I found it more productive than going into the code itself. A downside I found is that when reading and working on code I really zone in; with a bunch of markdown docs I find it harder to stay focused.

Curious times!

a_bonobo · 3 days ago
I feel we're developing something like what made Test-Driven Development so strong: TDD forced you to sit down and design your system first, rather than making it all up on the fly. In the past we mapped the system while we were building the code for it.

This kind of AI-driven development feels very similar. By forcing you to sit down and map the territory you're planning to build in, the coding itself becomes secondary, just boilerplate to implement the design decisions you've made. And AI is great at boilerplate!

mattmanser · 3 days ago
I feel TDD ended up fizzling out quite a bit in the industry, with some evangelists later admitting they'd often taken to writing the code first, then the tests.

To me it's always felt like waterfall in disguise and just didn't fit how I make programs. I feel it's just not a good way to build a complex system with unknown unknowns.

That the AI design process seems to rely on this same pattern feels off to me, and shows a weakness of developing this way.

It might not matter, admittedly. It could be that the flexibility of having the AI rearchitect a significant chunk of code on the fly works as a replacement to the flexibility of designing as you go.

hetspookjee · 3 days ago
That is exactly what this felt like indeed! I found a lot of interest in both refining the test strategy and test decisions, but when it started implementing, some core functions were in fact lost in the process. This rather leaky memory still surprises me every now and then. Especially 'undoing' things is a big challenge, as the 'do not' kind of instruction seems so much more confusing for the LLM than the 'do' kind.
danmaz74 · 3 days ago
> "TDD forced you to sit down and design your system first, rather than making it all up on the fly"

It's interesting because I remember having discussions with a colleague who was a fervent proponent of TDD where he said that with that approach you "just let the tests drive you" and "don't need to sit down and design your system first" (which I found a terrible idea).

jmull · 3 days ago
Test-driven and prompt-driven development aside, I never understood why people (and groups) spend many hours (or 1000s, or 10000s of hours) building things when they don't really know what they're building.

(I've certainly seen it done though, with predictable results.)

samrus · 3 days ago
thats a great way to put it. the LLMs can't design things, thats way above their capabilities. they can pretend to design things and even fool people, but they're just regurgitating other designs from their training data (and for a todo app, thats enough). but if we do the design for them, they're really really good at putting meat on that skeleton

m_fayer · 3 days ago
Long after we are all gone and the scrum masters are a barely remembered historical curiosity, there shall remain, humble and eternal, the waterfall model.
actionfromafar · 3 days ago
A waterfall, frozen, in time?
razemio · 3 days ago
That is exactly my issue. I am more distracted while being more productive. It feels just wrong, but works for now. In the long run, I need to find a solution for this. What works best for now is to let multiple agents run on multiple repos of the same project, solving different tasks. This way, I stay somewhat focused, since I constantly need to approve things. Just like a project manager with a big team... Indeed curious times.
ionwake · 3 days ago
I agree I think this is the way
mprivat · 3 days ago
That's pretty novel. What framework is actually running the agents in your experiment?
hetspookjee · 3 days ago
It's just the auto-generated sub-agents from Claude Code: https://docs.anthropic.com/en/docs/claude-code/sub-agents
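For anyone curious what those sub-agent files look like: Claude Code reads them from `.claude/agents/`, each a markdown file with YAML frontmatter. A sketch of what a "Pydantic expert" like the one described above might look like (the body text and tool list here are illustrative, not from the parent comment):

```markdown
---
name: pydantic-expert
description: Reviews planning docs and code for current Pydantic v2 idioms.
tools: Read, Grep, Glob
---
You are a Pydantic v2 expert. When reviewing specs or code, flag
deprecated v1 patterns (e.g. `@validator`, class-based `Config`)
and inconsistencies between the models spec and the implementation.
```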

I plan to do a more detailed write-up sometime next week or the week after, when I've "finished" my 100% vibe-coded website.

brainless · 3 days ago
These days, I record product details, user journeys, etc. with voice, and kick off the product technical details documentation process. Minimal CLAUDE.md. GitHub-based workflow for the software development process. I am still struggling with generating good CI workflows; working on it.

Here is my playbook: https://nocodo.com/playbook/

zuInnp · 3 days ago
What I don't get about all the "if you plan it out first, it gets better" approach is, how did they work before?!

For anything bigger than small size features, I always think about what I do and why I do things. Sometimes in my head, sometimes on paper, a Confluence page or a white board.

I don't really get it. 80% of software engineering is figuring out what you need and how to achieve it. You check with the stakeholders, write down the idea of what you want to do and WHY you want to do it. You do some research.

The last 20% of the process is coding.

This was always the process. You don't need AI for proper planning and defining your goals.

divan · 3 days ago
That might be true for large dev teams with an established culture. But a lot of development happens in different settings: solo projects, small teams, weekend side projects, crafting personal tools, quick POC coding, etc. Not all software is a complex product that needs to be sold and maintained. One thing I always loved about being a developer is that you can create any custom piece of software you need for yourself – even if it's for a one-time task – and not care about releasing it, supporting corner cases, or other users.

In almost all these cases, the development process is a mix of coding and discovering, updating the mental model of the code on the go. It almost never starts with docs, specs or tests. Some projects are good for TDD, but some don't even need it.

And even for these use cases, AI coding agents change the game. Now it really does matter to first describe the idea, put it into a spec, and verbalize everything in your head that you think will matter for the project.

Nowadays, the hottest programming language is English, indeed.

Scarblac · 3 days ago
I think I usually mix the coding and the designing more. Start coding something, then keep shaping and improving it for a while until it's good.

And of course for most things, there's a pretty obvious way it's probably going to work, no need to spend much time on that.

ticoombs · 3 days ago
I used to joke about prompt engineering. But by jiminy, it is a thing now. I swear sometimes I spend a good 10-20 minutes writing up a good prompt and initial plan just so that Claude Code can systematically implement something.

My usage is nearly the same as OP's: plan, plan, plan, save it as a file, then new context and let it rip.

That's the one thing I'd love: a good CLI (currently using Charm and CC) which allows me to have an implementation model, a plan model and (possibly) a model per sub-agent. Mainly so I can save money by using local models for implementation and online models for plans or generation, or even swapping back. Charm has been the closest I've used so far, allowing me to swap back and forth and not lose context. But the parallel sub-agent feature is probably one of the best things Claude Code has.

(Yes I'm aware of CCR, but could never get it to use more than the default model so :shrug:)

NitpickLawyer · 3 days ago
> I used to joke about prompt engineering. But by jiminy it is a thing now.

This is the downside of living in a world of tweets, hot takes and content generation for the sake of views. Prompt engineering was always important, because GIGO has always been a ground truth in any ML project.

This is also why I encourage all my colleagues and friends to try these tools out from time to time. New capabilities become apparent only when you try them out. What didn't work 6 months ago has a very good chance of working today. But you need a "feel" for what works and what doesn't.

I also place much more value on examples, blogs, and gists that show a positive instead of a negative. Yes, they can't count the r's in strawberry, but I don't need that! I don't need the models to do simple arithmetic wrong. I need them to follow tasks, improve workflows and help me.

Prompt engineering was always about getting the "google-fu" of 10-15 years ago rolling, and then keeping up with what's changed, what works and what doesn't.

BiteCode_dev · 3 days ago
Projects using AI are the best documented and tested projects I've worked on.

They are well documented because you need context for the LLM to perform well. And they are well tested because the cost of producing tests got lower, since they can be half generated, while the benefit of having tests got higher, since they are guardrails for the machine.

People constantly say code quality is going to plummet because of those tools, but I think the exact opposite is going to happen.

oblio · 3 days ago
I find it funny that we had to invent tools that will replace, say, 20%+ of developers out there to finally get developers to write docs :-))
scastiel · 3 days ago
I agree, prompt engineering really is the foundation of working with AI (whether it’s for coding or anything else).
samrus · 3 days ago
honestly "prompt engineering" is just the vessel for architecting the solution. its like saying "diagram construction" really took off as a skill. its architecting with a new medium
Crowberry · 3 days ago
I’ve recently tried out Claude Code for a bit, I’ll make sure to give the suggested approach a go! It sounds like a nice workflow.

But I’m negatively surprised by how much money CC costs. Just a simple refactoring cost me about 5 min + 15 min of review and $4; had I done it myself it might have taken 15-20 min as well.

How much money do you typically spend on features using CC? Nobody seems to mention this

naiv · 3 days ago
You can sign up for a subscription and pay a flat $20-$200 with some daily/weekly restrictions on token usage.

https://support.anthropic.com/en/articles/11145838-using-cla...

scastiel · 3 days ago
Indeed, switching partially from Cursor to Claude Code increased the bill by a lot! Fortunately I use Claude Code mostly at work and I had no trouble convincing my boss to pay for it. But I’m still not sure how I’ll continue building side projects with Claude Code. Not sure I want to spend $20 each time I want to bootstrap an app in an evening just for fun…
k9294 · 3 days ago
Why not subscribe to Pro or Max? I calculated my CC usage this month (I'm on the $200 Max plan): it’s close to $2.5k... It's just crazy to use the API at its current price.
dustingetz · 3 days ago
the investor bull case in AI is to cannibalize the labor markets at 15% margin, so a 1:1 labor:AI budget is where we are headed next - e.g. $100k/$100k for a senior dev. The AI share will come out of dev budgets, so expect senior salaries to fall and team sizes to shrink by a lot if this stuff works. Remember we’re in the land-grab phase, all subsidized by VCs, but we’re speed-running through the stages, and this phase appears to be ending based on Twitter VC sentiment. There’s only so many times you can raise another $500M for 9 months of operating cost at -100% gross margin.
fuckaj · 3 days ago
What if, once the 100k dev jobs are gone, the equivalent value in terms of AI is nowhere near that? Say it is 5k instead?

Due to oversupply. First you needed humans who can code. But now you need scalable compute.

The equivalent would be hiring those people to wave a flag in front of a car. They are replaced by modern cars, but you don't get to capture the flag waver's wage as value for long, if at all.

edg5000 · 3 days ago
You either get Sonnet for 20 EUR/month or Opus for 100. I used Sonnet and switched to Opus eventually, but Sonnet was also good. For my purposes I don't run into the token limits, although I can't speak for the future.
stpedgwdgfhgdd · 2 days ago
When you start to refactor a 100k+ line code base it adds up even more… Some Docker builds, running the linters and the tests, and you quickly exceed $40.
theshrike79 · 2 days ago
This is why you make the linters and tests as quiet as possible. The LLM doesn't need to know about every successful test; it just needs to know whether the run passed or not.
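One way to do that (a sketch of the idea, not a Claude Code feature): wrap each check in a small shell function that emits a single pass/fail line on success and only the tail of the output on failure.

```shell
# quiet-check: hypothetical wrapper so an agent's context gets one line
# per successful check instead of pages of passing-test output.
run_quiet() {
  out=$("$@" 2>&1)                      # capture everything the command prints
  if [ $? -eq 0 ]; then
    echo "PASS: $*"                     # success: one line is enough
  else
    echo "FAIL: $*"
    printf '%s\n' "$out" | tail -n 40   # failure: show only the summary tail
  fi
}

run_quiet true    # → PASS: true
run_quiet false   # → FAIL: false (plus the captured output tail)
```

Swap `true`/`false` for the real linter and test commands, e.g. `run_quiet pytest -q`.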
viraptor · 3 days ago
> had I done it myself it might have taken 15-20min as well.

Could you spend that 15-20min on some other task while this one works in the background?

conradfr · 3 days ago
Well the 15 minutes code review are still there.
Crowberry · 3 days ago
Not really, or I guess I could if I let it run wild. But I wanted to check what it was doing and had to steer it in other directions sometimes.

I treat it as pair programming with a junior programmer on speed!

mattjenner · 3 days ago
This has been my experience with Replit as well. It needs to use design docs as the source of tasks and truth, as it starts to crumble as the app size increases.

With OpenAI I find ChatGPT just slows to a crawl and the chat becomes unresponsive. Asking it to write a document to import into a new chat helps with that.

On a human level, it makes me think that we should do the same ourselves. Reflect, document and dump our ‘memory’ into a working design doc. To free up ourselves, as well as our LLMs.

zemvpferreira · 3 days ago
It’s interesting to me that trying to optimise AI tools is leading many engineers to discover the value in good communication and expectation setting. The diva/autist stereotype of 10x programmers is due for a review.
afro88 · 3 days ago
This is the key to getting decent feature work out of Claude Code. I've had good success recently using GPT-5 High (in Cursor) to write the plan, then take that to Claude Code to implement.

You can get an extra 15-20% out of it if you also document the parts of the codebase you expect to change first. Let the planning model document how the code works, its architecture, and its patterns. Then plan your feature with this in the context. You'll get better code out of it.

Also, make sure you review, revise and/or hand edit the docs and plans too. That pays significant dividends down the line.

garciasn · 3 days ago
We have Google Workspace at work and I find Gemini is awesome at “academic style” writeups but less good at writing code compared to CC.

So; I have Gemini write up plans for something, having it go deep and be as explicit as possible in its explanations.

I feed this into CC and have it implement the change in my code base. This has been really strong for me in making new features or expanding upon others where I feel something should be considerably improved.

The product I’ve built from the ground up over the last 8 weeks is now in production and being actively demoed to clients. I am beyond thrilled with my experience and its output. As I’ve mentioned before on HN, we could have done much of this work ourselves with our existing staff, but we could not have done the front-end work. What I feel might have taken well over a year and far more engineering and data science effort was mostly done in 2 months. Features are added in seconds rather than hours.

I’m amazed by CC and I love reading these articles which help me to realize my own journey is being mirrored by others.

anemic · 3 days ago
I too recently discovered this workflow and I'm blown away by it. The key, IMHO, is first to give Claude as few requirements as possible and let its plan mode roam freely. Writing reporting for sales metrics? "Ultrathink relevant sales metrics" and it will give you a lot to start with; rank which you want, maybe add some that are missing. Then create a new directory for this feature and ask it to write the plan to a file. Then proceed to create an implementation plan, and ask it to find all the relevant data in the database and write down how to query it. Then finally let it implement it, write tests and end-user documentation, and send it to QA.

Need sales forecasting? This used to be an enterprise feature that 10 years ago would have needed a large team to implement correctly. Claude implements it in a Docker container in one afternoon.

It really changes how I see software now. Before, there were NDAs and intellectual property, and companies took great care not to leak their source code.

Now things have changed. Have a complex ERP system that took 20 years to develop? Well, Claude can re-implement it in a flash. And write documentation and tests for it. Maybe it doesn't work quite that well yet, but things are moving fast.