Readit News


ajstars commented on Two Years of Emacs Solo   rahuljuliato.com/posts/em... · Posted by u/celadevra_
ajstars · 5 days ago
The solo Emacs path is underrated for building deep understanding. Most people reach for a config framework immediately and end up with a system they can't debug. Starting from scratch forces you to actually understand what each piece does, even if it takes longer upfront.
ajstars commented on Optimizing Top K in Postgres   paradedb.com/blog/optimiz... · Posted by u/philippemnoel
ajstars · 5 days ago
Curious whether you benchmarked against a partial index on the sort column. For fixed-category top-K queries the planner sometimes picks it over a full index scan, though I've seen it regress on high-write tables due to index bloat. Did write volume factor into your test setup?
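To make the partial-index idea concrete, here is a minimal runnable sketch. It uses SQLite (via Python's stdlib `sqlite3`) rather than Postgres purely so it runs standalone; the `CREATE INDEX ... WHERE` syntax is the same in both. The `events` table, category names, and scores are all hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, category TEXT, score REAL)")
conn.executemany(
    "INSERT INTO events (category, score) VALUES (?, ?)",
    [("news", i * 1.5) for i in range(100)] + [("spam", i * 0.5) for i in range(100)],
)

# Partial index: only rows matching the fixed category are indexed, so a
# top-K scan over that category touches a much smaller structure than a
# full index on (category, score) would.
conn.execute(
    "CREATE INDEX idx_news_score ON events (score DESC) WHERE category = 'news'"
)

top_k = conn.execute(
    "SELECT score FROM events WHERE category = 'news' ORDER BY score DESC LIMIT 5"
).fetchall()
```

The trade-off mentioned above still applies: on a high-write table every insert into the indexed category pays the index-maintenance cost, which is where the regression can come from.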
ajstars commented on Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy   gitlab.redox-os.org/redox... · Posted by u/pjmlp
ajstars · 5 days ago
The interesting tension here is that "no LLM-generated code" is easy to state but hard to enforce: a developer who uses an LLM to understand a concept and then writes the code themselves is indistinguishable from one who didn't. The policy probably works as a cultural signal more than a technical guarantee, which might be exactly what they want.
ajstars commented on I put my whole life into a single database   howisfelix.today/... · Posted by u/lukakopajtic
ajstars · 5 days ago
The hardest part of this kind of personal data system is retrieval, not storage. At some point you have more data than fits in a prompt, so you need to decide what's relevant per query. Did you build any ranking or filtering logic, or do you query specific tables directly?
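A per-query relevance filter can start very simple. Below is a hedged sketch of the idea: score each stored row by keyword overlap with the query and keep the top-k. The `rank_rows` function and the sample rows are hypothetical; a real system would likely use full-text search (e.g. Postgres `tsvector`) or embeddings instead of word overlap.

```python
def rank_rows(query, rows, k=3):
    """Score each row by keyword overlap with the query; return the top-k.

    A deliberately crude relevance heuristic, just to show the shape of
    per-query filtering before data is handed to a prompt.
    """
    query_terms = set(query.lower().split())
    scored = []
    for row in rows:
        overlap = len(query_terms & set(row.lower().split()))
        if overlap:
            scored.append((overlap, row))
    scored.sort(key=lambda pair: -pair[0])  # stable: ties keep original order
    return [row for _, row in scored[:k]]

rows = [
    "2024-01-05 ran 5km in the rain",
    "2024-01-06 read paper on sleep tracking",
    "2024-01-07 ran 10km, new personal best",
]
top = rank_rows("days i ran", rows, k=2)
```

Even this level of filtering means the prompt only ever sees rows that plausibly matter to the question, instead of whole tables.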
ajstars commented on No, it doesn't cost Anthropic $5k per Claude Code user   martinalderson.com/posts/... · Posted by u/jnord
ajstars · 5 days ago
The compute cost debate misses a subtler point: the real cost multiplier isn't inference, it's context length. Most agent frameworks naively stuff 6-8k tokens into every prompt turn. If you route intelligently and compress memory hierarchically, you can bring that down to 200-400 tokens per turn with no quality loss. The model cost then becomes almost irrelevant.
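The tiered-compression claim can be sketched in a few lines. This is a toy illustration, not the AgentOS implementation: recent memories go into the prompt verbatim, older ones are compressed to their first sentence, and everything is packed under a token budget. Token counting by whitespace words is an assumption; a real system would use the model's tokenizer.

```python
def build_context(memories, budget_tokens, recent_n=2):
    """Pack memory snippets into a token budget: the newest `recent_n`
    verbatim, older ones compressed to a one-line summary (here, just
    the first sentence)."""
    def tokens(text):
        return len(text.split())  # crude stand-in for a real tokenizer

    recent = memories[-recent_n:]
    older = [m.split(".")[0] + "." for m in memories[:-recent_n]]
    context, used = [], 0
    # Compressed older tiers first, newest verbatim last.
    for snippet in older + recent:
        cost = tokens(snippet)
        if used + cost > budget_tokens:
            continue  # skip anything that would blow the budget
        context.append(snippet)
        used += cost
    return "\n".join(context), used

memories = [
    "User prefers dark mode. Mentioned it twice during onboarding.",
    "User's project is a CLI tool written in Rust. It parses log files.",
    "User asked how to parse timestamps yesterday.",
]
ctx, used = build_context(memories, budget_tokens=25)
```

The point of the sketch is the invariant: per-turn context cost is bounded by the budget, not by how much memory has accumulated.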

u/ajstars

Karma: 3
Cake day: January 5, 2024
About
AgentOS - AI agent with tiered memory that uses 82% fewer tokens

https://github.com/ajstars1/agent-os
