initially, being in the loop is necessary; once you find yourself "just approving" you can relax and step back. or, more likely: initially you need fine-grained tasks, and as reliability grows, the tasks can become more complex
"parallelizing" allows single (sub)agents with ad-hoc responsibilities to rely on separate "institutionalized" context/rules, .ie: architecture-agent and coder-agent can talk to each others and solve a decision-conflict based on wether one is making the decision based on concrete rules you have added, or hallucinating decisions
i have seen a friend build a rule-based system and have been impressed at how well LLMs work within that context
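to give a rough idea of what "concrete rules" can look like (file name and contents below are purely illustrative, not my friend's actual setup):

    ARCHITECTURE_RULES.md
    - all db access goes through the repository layer, never directly from request handlers
    - every new service exposes a /health endpoint
    - if a task needs to break one of these rules, the agent must say so explicitly instead of silently deciding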
You can also ask the LLM at the end of a feature chat to prepare a prompt for starting the next feature chat, so it can determine which knowledge is important to carry over.
Summarizing a chat also helps get rid of wrong info, as you'll often trial-and-error your way towards the right solution. You don't want these incorrect approaches to leak into the context of the next feature chat; maybe just add the "don't dos" to a guidelines and rules document so they are avoided in the future.
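The hand-off request itself can be very plain; something along these lines (wording is just an example):

    "we're wrapping up this feature. write a prompt i can paste into a fresh chat for the next feature:
    include the decisions we settled on and the constraints to respect, and leave out the approaches we tried and abandoned."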
in a similar vein, i match github project issues to md files committed to the repo
essentially, the github issue content is just a link to the md file in the repo. epics are folders of links (+ a readme that gets updated after each task)
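roughly, the layout ends up like this (names invented for the example):

    tasks/
      epic-billing/
        README.md              <- updated after each task
        001-invoice-model.md
        002-payment-webhook.md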
i am very happy about it too
it's also very fast and handy to reference either one from claude using @, e.g.: "did you consider what has been done @"
other major improvements that worked for me were:
- DOC_INDEX.md, built around the concept of "read this if you are working on X (infra, db, frontend, domain, ...)"
- COMMON_TASKS.md ("if you need to do X, read Y", e.g. if you need to add a new frontend component, read HOW_TO_ADD_A_COMPONENT.md)
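as a sketch (paths made up), both files stay short:

    DOC_INDEX.md
    - working on infra?        read docs/INFRA.md
    - working on the db?       read docs/DB_SCHEMA.md
    - working on the frontend? read docs/FRONTEND.md

    COMMON_TASKS.md
    - need to add a new frontend component? read HOW_TO_ADD_A_COMPONENT.md
    - need to add a db migration?           read docs/HOW_TO_ADD_A_MIGRATION.md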
common tasks tend to come out with higher quality when they are expressed in a checklist format
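e.g. something like (contents invented):

    HOW_TO_ADD_A_COMPONENT.md
    [ ] create the component under src/components/<name>/
    [ ] add a usage example / story
    [ ] export it from the components index
    [ ] update DOC_INDEX.md if it introduces a new pattern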