Readit News
ukuina commented on NanoClaw now supports Claude's Agent Swarms in containers   twitter.com/Gavriel_Cohen... · Posted by u/spendy_clao
ukuina · 2 days ago
Possible sponsored content being injected by Claude into the NanoClaw repo?

https://github.com/gavrielc/nanoclaw/commit/22eb5258057b49a0...

ukuina commented on Software factories and the agentic moment   factory.strongdm.ai/... · Posted by u/mellosouls
jmalicki · 3 days ago
I would love to see setups where $1000/day is productive right now.

I am one of the most pro vibe-coding^H^H^H^H engineering people I know, and I'm like "one Claude Code Max $200/mo and one Codex $200/mo will keep you super stressed out trying to keep them busy" (at least before the new generation of models, I would hit limits on one but never both - my human inefficiency in tech-leading these AIs was the limit)

ukuina · 3 days ago
Tokens evaporate when you have Agent Swarms. Have you tried Claude Code Teammates, for example?

https://code.claude.com/docs/en/agent-teams

ukuina commented on Software factories and the agentic moment   factory.strongdm.ai/... · Posted by u/mellosouls
jaytaylor · 3 days ago
(DTU creator here)

I did have an initial key insight which led to a repeatable strategy to ensure a high level of fidelity between DTU vs. the official canonical SaaS services:

Use the top popular publicly available reference SDK client libraries as compatibility targets, with the goal always being 100% compatibility.

You've also zeroed in on how challenging this was: I started this back in August 2025 (as one of many projects; at any time we're each juggling 3-8 projects) with only Sonnet 3.5. Much of the work was still very unglamorous, but feasible. Especially Slack: in some ways, Slack was more challenging to get right than all of G-Suite (!).
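
To make that concrete, here is a minimal sketch of the compatibility-target idea, assuming a hypothetical local DTU Slack emulator listening on localhost:8080 (the endpoint and token are placeholders I invented; the slack_sdk calls themselves are the real surface being targeted):

```python
from slack_sdk import WebClient

# Point the official Slack SDK at the emulator instead of slack.com.
# base_url and token are placeholders for wherever a DTU-style emulator
# actually listens; 100% compatibility means this client can't tell.
client = WebClient(
    token="xoxb-dtu-test-token",
    base_url="http://localhost:8080/api/",
)

resp = client.chat_postMessage(channel="#general", text="compat check")
assert resp["ok"] is True  # same response envelope the real API returns
```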

Now I'm partway through reimplementing the entire DTU in Rust (v1 was in Go), and with gpt-5.2 for planning and gpt-5.3-codex for execution, it takes significantly less human effort.

IMO the most novel part of this story is Navan's Attractor and the corresponding NLSpec. Feed in a good Definition-of-Done and it'll bounce around between nodes until it gets it right (see the sketch after the footnote). Several working implementations have already appeared within 24 hours of its release, one of which is even open source [0].

[0] https://github.com/danshapiro/kilroy
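
To picture the loop shape described above, here is a hedged sketch of the pattern, not the actual Attractor or kilroy code; generate and check_dod are toy stand-ins for an LLM node and a Definition-of-Done check.

```python
def generate(spec: str, feedback: str) -> str:
    # Toy stand-in for an LLM planning/execution node.
    return f"artifact for: {spec}" + (f" (fixing: {feedback})" if feedback else "")

def check_dod(artifact: str, spec: str) -> tuple[bool, str]:
    # Toy stand-in for the Definition-of-Done check (tests, evals, linters).
    return (spec in artifact, "artifact does not reflect the spec yet")

def attractor_loop(spec: str, max_rounds: int = 10) -> str:
    feedback = ""
    for _ in range(max_rounds):
        artifact = generate(spec, feedback)
        ok, feedback = check_dod(artifact, spec)
        if ok:
            return artifact  # Definition-of-Done satisfied: stop bouncing
    raise RuntimeError("did not converge within the round budget")

print(attractor_loop("emit a greeting"))
```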

ukuina · 3 days ago
Been toying around with DTs myself for a few months. Until December, LLMs couldn't correctly hold large amounts of modeled behavior internally.

Why the switch from Go to Rust?

ukuina commented on Speed up responses with fast mode   code.claude.com/docs/en/f... · Posted by u/surprisetalk
fragmede · 3 days ago
Two? I'd estimate twelve (three projects x four tasks) going at peak.
ukuina · 3 days ago
3-4 parallel projects is the norm for me now, though I find that parallelizing tasks within a project still makes avoiding overlap a chore, even with worktrees. How did you work around that?
ukuina commented on I've used AI to write 100% of my code for a year as an engineer   old.reddit.com/r/ClaudeCo... · Posted by u/ukuina
ukuina · 4 days ago
From the link:

1- The first few thousand lines determine everything

When I start a new project, I obsess over getting the process, guidelines, and guardrails right from the start. Whenever something is being done for the first time, I make sure it's done clean. Those early patterns are what the agent replicates across the next 100,000+ lines. Get it wrong early and the whole project turns to garbage.

2- Parallel agents, zero chaos

I set up the process and guardrails so well that I unlock a superpower: running multiple agents in parallel while everything stays on track. This is only possible because I nail point 1.

3- AI is a force multiplier in whatever direction you're already going

If your codebase is clean, AI makes it cleaner and faster. If it's a mess, AI makes it messier faster. The temporary dopamine hit from shipping with AI agents makes you blind: you think you're going fast, but zoom out and you're actually going slower because of constant refactors driven by technical debt ignored early.

4- The 1-shot prompt test

One of my signals for project health: when I want to do something, I should be able to do it in 1 shot. If I can't, either the code is becoming a mess, I don't understand some part of the system well enough to craft a good prompt, or the problem is too big to tackle all at once and needs breaking down.

5- Technical vs non-technical AI coding

There's a big difference between technical and non-technical people using AI to build production apps. Engineers who built projects before AI know what to watch out for and can detect when things go sideways. Non-technical people can't. Architecture, system design, security, and infra decisions will bite them later.

6- AI didn't speed up all steps equally

Most people think AI accelerated every part of programming the same way. It didn't. For example, choosing the right framework, dependencies, or database schema (the foundation everything else is built on) can't be done by giving your agent a one-liner prompt. These decisions deserve more time than adding a feature.

7- Complex agent setups suck

Fancy agents with multiple roles and a ton of .md files? They don't work well in practice. Simplicity always wins.

8- Agent experience is a priority

Treat the agent workflow itself as something worth investing in. Monitor how the agent is using your codebase. Optimize the process iteratively over time.

9- Own your prompts, own your workflow

I don't like to copy-paste some skill/command or install a plugin and use it as a black box. I always modify it based on my workflow and things I notice while building.

10- Process alignment becomes critical in teams

Doing this as part of a team is harder than doing it yourself. It becomes critical that all members follow the same process and share updates to the process together.

11- AI code is not optimized by default

AI-generated code is not optimized for security, performance, or scalability by default. You have to explicitly ask for it and verify it yourself.
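
A toy illustration of what "not optimized by default" tends to mean in practice (example mine, not from the post): both versions below are correct, but the quadratic one is the shape agents often produce unless performance is explicitly in the prompt or the review.

```python
def dedupe_naive(items: list[str]) -> list[str]:
    # The "obviously correct" version agents tend to write unprompted:
    # a linear scan per item makes this O(n^2) overall.
    out: list[str] = []
    for item in items:
        if item not in out:
            out.append(item)
    return out

def dedupe_fast(items: list[str]) -> list[str]:
    # The O(n) version you typically only get by asking for performance.
    seen: set[str] = set()
    out: list[str] = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```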

12- Check git diff for critical logic

When you can't afford a mistake, or you have hard-to-test apps with long test cycles, review the git diff. For example, the agent might use created_at as a fallback for birth_date. You won't catch that just by testing whether it works or not.
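
Rendered as code, the created_at fallback from this point looks like the sketch below (the field names come from the comment; the surrounding function is invented):

```python
from datetime import date

def effective_birth_date(birth_date: date | None, created_at: date) -> date:
    # The subtle bug a diff review should catch: an account-creation
    # date silently standing in for a birth date when data is missing.
    return birth_date or created_at

# Passes every happy-path test...
assert effective_birth_date(date(1990, 5, 1), date(2024, 1, 1)) == date(1990, 5, 1)
# ...but quietly records a signup date as a birthday when birth_date is None.
assert effective_birth_date(None, date(2024, 1, 1)) == date(2024, 1, 1)
```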

13- You don't need an LLM call to calculate 1+1

It amazes me how people default to LLM calls when you could use a simple, free, deterministic function. But then we're not "AI-driven," right?
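
The deterministic alternative for point 13 is, of course, a one-liner (example mine):

```python
# What the unnecessary LLM call is replacing: deterministic, free,
# instant, and trivially testable.
def add(a: int, b: int) -> int:
    return a + b

assert add(1, 1) == 2
```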

ukuina commented on OpenCiv3: Open-source, cross-platform reimagining of Civilization III   openciv3.org/... · Posted by u/klaussilveira
WildWeazel · 4 days ago
Hi all, OpenCiv3 founder here. Thanks for the support! Check us out on Civfanatics or Discord to keep up with the project.
ukuina · 4 days ago
Would it be feasible to add an API to OpenCiv3 (or run it as an SDK) so we can script up actions?
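
For what it's worth, the kind of scripting surface the question imagines might look something like the sketch below; every name in it is hypothetical, since no such OpenCiv3 API exists today.

```python
# Hypothetical sketch only: none of these names exist in OpenCiv3.
from openciv3 import Game  # hypothetical package

game = Game.load("autosave.sav")            # load a saved game
settler = game.units.first(kind="settler")  # query the unit list
settler.move_to(x=12, y=34)                 # queue a movement action
game.end_turn()                             # advance the simulation
```
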
ukuina commented on Microsoft open-sources LiteBox, a security-focused library OS   github.com/microsoft/lite... · Posted by u/aktau
ukuina · 4 days ago
No deployment instructions?
ukuina commented on Xcode 26.3 – Developers can leverage coding agents directly in Xcode   apple.com/newsroom/2026/0... · Posted by u/davidbarker
mohsen1 · 7 days ago
I built an entire iOS app without opening the Xcode UI even once. Why do so many iOS engineers prefer Xcode?
ukuina · 7 days ago
What do you use instead? I thought signing in through Xcode was necessary for code-signing apps?

u/ukuina

Karma: 1578 · Cake day: January 19, 2023