das-bikash-dev commented on Show HN: Emdash – Open-source agentic development environment   github.com/generalaction/... · Posted by u/onecommit
das-bikash-dev · 14 days ago
The worktree isolation per agent is clever. I've been running Claude Code as my sole dev partner on a production platform (10 repos), and the biggest unlock was treating context as infrastructure: curated reference docs the agent reads on demand rather than dumping everything into context. Re: the longevity question in the thread, I think the orchestration layer stays relevant as long as you're coordinating across repos/services, not just within a single codebase.
das-bikash-dev commented on Show HN: Sgai – Goal-driven multi-agent software dev (GOAL.md → working code)   github.com/sandgardenhq/s... · Posted by u/sandgardenhq
das-bikash-dev · 14 days ago
The DAG decomposition approach is interesting; curious how it handles goals that span multiple services/repos. I build a multi-service platform solo with Claude Code, and the hardest part isn't the coding, it's knowing which files across which repos need to change for a given goal. Do you see sgai supporting multi-repo goals, or is it scoped to single-repo for now?
das-bikash-dev commented on Show HN: OpenSwarm – Multi‑Agent Claude CLI Orchestrator for Linear/GitHub   github.com/Intrect-io/Ope... · Posted by u/unohee
das-bikash-dev · 14 days ago
The context isolation approach is smart; cascading drift between agents is a real problem. I run 10 microservices with Claude Code and solved a similar issue by maintaining curated reference docs that agents read on demand per task area instead of loading everything. The model escalation on failure (Haiku → Sonnet) is a nice touch too. Do you find the LanceDB memory layer actually helps with repeated similar tasks, or is it more useful for the code knowledge graph side?
das-bikash-dev commented on Show HN: AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3 (2026)   llm-timeline.com/... · Posted by u/ai_bot
das-bikash-dev · 16 days ago
Interesting to see the evolution mapped out like this. For those building on top of these models (RAG systems, agent frameworks), the real inflection point wasn't just model count but the shift from completion-only to reasoning and structured output capabilities. Are you planning to add annotations for capability changes alongside release dates?
das-bikash-dev commented on Show HN: Emdash – Open-source agentic development environment   github.com/generalaction/... · Posted by u/onecommit
das-bikash-dev · 16 days ago
How does Emdash handle state management when running multiple agents on the same codebase? Particularly interested in how you prevent conflicts when agents are making concurrent modifications to dependencies or config files. Also, does it support custom agent wrappers, or do you require the native CLI?
das-bikash-dev commented on Show HN: enveil – hide your .env secrets from prAIng eyes   github.com/GreatScott/env... · Posted by u/parkaboy
enjoykaz · 16 days ago
The JSONL logs are the part this doesn't address. Even if the agent never reads .env directly, once it uses a secret in a tool call — a curl, a git push, whatever — that ends up in Claude Code's conversation history at `~/.claude/projects/*/`. Different file, same problem.
das-bikash-dev · 16 days ago
This matches my experience. I work across a multi-repo microservice setup with Claude Code and the .env file is honestly the least of it.

The cases that bite me:

1. Docker build args — tokens passed to Dockerfiles for private package installs live in docker-compose.yml, not .env. No .env-focused tool catches them.

2. YAML config files with connection strings and API keys — again, not .env format, invisible to .env tooling.

3. Shell history — even if you never cat the .env, you've probably exported a var or run a curl with a key at some point in the session.

The proxy/surrogate approach discussed upthread seems like the only thing that actually closes the loop, since it works regardless of which file or log the secret would have ended up in.
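For concreteness, the surrogate pattern is roughly this (the registry, token names, and values below are invented for illustration, not any particular tool's API): the agent's context and transcripts only ever contain placeholder tokens, and a proxy layer swaps in the real values at the last moment before a request leaves.

```python
import os
import re

# Hypothetical surrogate registry: the agent only ever sees the
# placeholder strings on the left; real values live in the proxy's env.
SURROGATES = {
    "SECRET_STRIPE_KEY": os.environ.get("STRIPE_KEY", "sk_live_real_value"),
    "SECRET_DB_PASSWORD": os.environ.get("DB_PASSWORD", "real_db_password"),
}

_PATTERN = re.compile("|".join(re.escape(k) for k in SURROGATES))

def substitute(outbound_text: str) -> str:
    """Replace surrogate tokens with real secrets just before the
    request leaves the proxy, so shell history, JSONL transcripts, and
    tool-call logs only ever record the placeholders."""
    return _PATTERN.sub(lambda m: SURROGATES[m.group(0)], outbound_text)
```

The nice property is that it's file-format agnostic: it doesn't matter whether the secret would have leaked via .env, docker-compose.yml, or a curl in shell history, because the real value never enters the agent's environment at all.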

das-bikash-dev commented on Show HN: AgentBudget – Real-time dollar budgets for AI agents   github.com/sahiljagtap08/... · Posted by u/sahiljagtapyc
das-bikash-dev · 16 days ago
The multi-agent budget problem you're describing gets even harder when the services are heterogeneous. In a RAG pipeline, a single user query might hit: query analysis (LLM call), embedding generation (different model/pricing), reranking (yet another model), and response generation (LLM call) — each potentially in a different process.

Per-call monkey-patching sees each call in isolation. What I ended up doing was a trace-based approach: every request gets a trace ID, each service appends cost spans asynchronously, and a separate enrichment step aggregates the total. The hard part was deduplication — when service A reports an aggregate cost and service B reports the individual calls that compose it, you need to reconcile or you double-count.

Your atomic disk writes for halt state is a nice pattern. I went with fire-and-forget (never block the request path, accept eventual consistency on cost data) but that means you can't do hard enforcement mid-request like AgentBudget does.
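For reference, the usual way to get that atomicity is write-to-temp then rename, since `os.replace` is atomic on POSIX within one filesystem (a sketch of the general pattern, not AgentBudget's actual code):

```python
import json
import os
import tempfile

def write_halt_state(path: str, state: dict) -> None:
    """Atomically persist halt state: a concurrent reader sees either
    the old file or the new one, never a partially written file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic within one filesystem
    except BaseException:
        os.unlink(tmp_path)
        raise
```

The temp file has to live in the same directory as the target, because a rename across filesystems isn't atomic.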

das-bikash-dev commented on Show HN: Falcon – Chat-first communities built on Bluesky AT Protocol    · Posted by u/JohannaWeb
das-bikash-dev · 16 days ago
Re: when to add a WebSocket gateway vs. keeping it in the monolith:

I've built multi-channel chat infrastructure and the honest answer is: keep the monolith until you have a specific scaling bottleneck, not a theoretical one.

One pattern that helped was normalizing all channel-specific message formats into a single internal message type early. Each channel adapter handles its own quirks (some platforms give you 3 seconds to respond, others 20, some need deferred responses) but they all produce the same normalized message that the core processing pipeline consumes. This decoupling is what made it possible to split later without rewriting business logic.
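As a sketch, the normalized type plus one adapter might look like this (the types and field names are illustrative, not Falcon's):

```python
from dataclasses import dataclass

@dataclass
class NormalizedMessage:
    channel: str             # "telegram", "slack", "bluesky", ...
    sender_id: str
    text: str
    reply_deadline_s: float  # per-platform response window
    needs_deferred_ack: bool = False

def from_telegram(update: dict) -> NormalizedMessage:
    """Example adapter: each channel owns its own quirks (payload shape,
    deadlines, ack semantics), but the core pipeline only ever sees
    NormalizedMessage."""
    return NormalizedMessage(
        channel="telegram",
        sender_id=str(update["message"]["from"]["id"]),
        text=update["message"].get("text", ""),
        reply_deadline_s=20.0,
    )
```

Adding a new channel then means writing one adapter, with zero changes to the processing pipeline, and later splitting the gateway out is just moving the adapters behind a network boundary.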

On Redis pub/sub specifically: for a solo dev, skip it until you actually have multiple server instances that need to share state. A single process with WebSocket sessions in memory is fine for early users. The complexity cost of pub/sub isn't worth it until you need horizontal scaling or have a separate worker process pushing messages.
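The single-process version really is just a dict of live sockets; a minimal sketch (the `Sink` interface is assumed, loosely shaped like a Starlette/FastAPI WebSocket):

```python
from typing import Protocol

class Sink(Protocol):
    async def send_text(self, data: str) -> None: ...

# One process, sessions in memory: no Redis, no pub/sub.
sessions: dict = {}  # user_id -> live WebSocket-like sink

def connect(user_id: str, ws) -> None:
    sessions[user_id] = ws

def disconnect(user_id: str) -> None:
    sessions.pop(user_id, None)

async def broadcast(text: str) -> None:
    # Copy values so a disconnect during iteration doesn't break the loop.
    for ws in list(sessions.values()):
        await ws.send_text(text)
```

The day you add a second server instance, this dict is exactly the piece that becomes the pub/sub subscriber; nothing upstream of it has to change.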

What's your current message volume like? That usually determines timing better than architecture diagrams.

das-bikash-dev commented on Show HN: L88 – A Local RAG System on 8GB VRAM (Need Architecture Feedback)    · Posted by u/adithyadrdo
das-bikash-dev · 16 days ago
Nice project, especially given the VRAM constraints. A few things I've learned building production RAG that might help:

1. Separate your query analysis from retrieval. A single LLM call can classify the query type, decide whether to use hybrid search, and pick search parameters all at once. This saves a round-trip vs doing them sequentially.

2. If you add BM25 alongside vector search, the blend ratio matters a lot by query type. Exact-match queries need heavy keyword weighting, while conceptual questions need more embedding weight. A static 50/50 split leaves performance on the table.

3. For your evaluator/generator being the same model — one practical workaround is to skip LLM-as-judge evaluation entirely and use a small cross-encoder reranker between retrieval and generation instead. It catches the cases where vector similarity returns semantically related but not actually useful chunks, and it gives you a relevance score you can threshold on without needing a separate evaluation model.

4. Consider a two-level cache: exact match (hash the query, short TTL) plus a semantic cache (cosine similarity threshold on the query embedding, longer TTL). The semantic layer catches "how do I X" vs "what's the way to X" without hitting the retriever again.
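To make point 4 concrete, here's a toy two-level cache (the threshold, TTLs, and the `embed` function are all assumptions; a real one would also evict expired entries):

```python
import hashlib
import math
import time

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class TwoLevelCache:
    """Exact-match layer (hashed query, short TTL) in front of a
    semantic layer (cosine similarity on query embeddings, longer TTL)."""

    def __init__(self, embed, exact_ttl=300, semantic_ttl=3600, threshold=0.92):
        self.embed = embed     # query -> list[float], assumed provided by caller
        self.exact = {}        # sha256(query) -> (expires_at, answer)
        self.semantic = []     # (expires_at, embedding, answer)
        self.exact_ttl = exact_ttl
        self.semantic_ttl = semantic_ttl
        self.threshold = threshold

    def get(self, query: str):
        now = time.time()
        key = hashlib.sha256(query.encode()).hexdigest()
        hit = self.exact.get(key)
        if hit and hit[0] > now:
            return hit[1]
        # Fall through to the semantic layer only on an exact miss.
        q = self.embed(query)
        for expires, emb, answer in self.semantic:
            if expires > now and _cosine(q, emb) >= self.threshold:
                return answer
        return None

    def put(self, query: str, answer: str):
        now = time.time()
        key = hashlib.sha256(query.encode()).hexdigest()
        self.exact[key] = (now + self.exact_ttl, answer)
        self.semantic.append((now + self.semantic_ttl, self.embed(query), answer))
```

The exact layer costs one hash; the semantic layer costs one embedding call, which is still much cheaper than a full retrieve-plus-generate pass.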

What model are you using for generation on the 8GB? That constraint probably shapes a lot of the architecture choices downstream.

u/das-bikash-dev

Karma: 3 · Cake day: February 21, 2026
About
Building cuneiform.chat — AI agent platform with RAG. Python/FastAPI, microservices.