Architecture: 4 agents on OpenClaw (open source), running on WSL2 at home with 25 systemd timers.
What they do every day:
- Generate 8 social posts across platforms (quality-gated: generate → self-review → rewrite if score < 7/10)
- Engage with community posts and auto-reply to comments (context-aware, max 2 rounds)
- Research via RSS + HN API + Jina Reader → feed intelligence back into content
- Run UltraProbe (AI security scanner) for lead generation
- Monitor 7 endpoints, flag stale leads, sync customer data
- Auto-post blog articles to Discord when I git push (0 LLM tokens — uses commit message directly)
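For illustration, the quality gate in the first bullet can be sketched as a small loop. `generate`, `score`, and `rewrite` here are hypothetical wrappers around single LLM calls, not the actual scripts:

```python
def quality_gated_post(generate, score, rewrite, threshold=7, max_rewrites=1):
    """Quality gate: generate a draft, self-review it, rewrite if it scores low."""
    draft = generate()                 # one LLM call: produce the post
    for _ in range(max_rewrites):
        if score(draft) >= threshold:  # one LLM call: self-review score, 1-10
            break
        draft = rewrite(draft)         # one LLM call: targeted rewrite
    return draft
```

Capping `max_rewrites` keeps the worst case at three LLM calls per post instead of an open-ended revision loop.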
The token optimization trick: agents never have long conversations. Every request is (1) read pre-computed intelligence files (local markdown, 0 tokens), (2) one focused prompt with all context injected, (3) one response → parse → act → done. The research pipeline (RSS, HN, web scraping) costs 0 LLM tokens — it's pure HTTP + Jina Reader. The LLM only touches creative/analytical work.
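The three-step pattern above can be sketched as one stateless function. `build_prompt`, `call_llm`, `parse`, and `act` are hypothetical stand-ins for the per-task pieces; the only assumption from the post is that context lives in local markdown files:

```python
from pathlib import Path

def run_task(intel_dir, build_prompt, call_llm, parse, act):
    """Single-shot agent step: local context in, one LLM call, act, done."""
    # (1) Read pre-computed intelligence files (local markdown, 0 LLM tokens).
    context = "\n\n".join(p.read_text() for p in sorted(Path(intel_dir).glob("*.md")))
    # (2) One focused prompt with all context injected up front.
    prompt = build_prompt(context)
    # (3) One response -> parse -> act. No conversation state is kept.
    return act(parse(call_llm(prompt)))
```

Because nothing persists between runs, each systemd timer invocation pays for exactly one prompt and one response.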
Real numbers:
- 27 automated Threads accounts, 12K+ followers, 3.3M+ views
- 25 systemd timers, 62 scripts, 19 intelligence files
- RPD utilization: 7% (105/1,500) — 93% headroom left
- Monthly cost: $0 LLM + ~$5 infra (Vercel hobby + Firebase free)
What went wrong:
- $127 Gemini bill in 7 days. Created an API key from a billing-enabled GCP project instead of AI Studio: thinking tokens ($3.50/1M) with no rate cap. Lesson: always create keys from AI Studio directly.
- Engagement loop bug: iterated over ALL posts instead of the top N. Burned 800 RPD in one day and starved everything else.
- Telegram health check called getUpdates, conflicting with the gateway's long-polling. 18 duplicate messages in 3 minutes.
The site (https://ultralab.tw) is fully bilingual (zh-TW/en) with 21 blog posts, and yes — the i18n, blog publishing, and Discord notifications are all part of the automated pipeline.
Live agent dashboard: https://ultralab.tw/agent
Stack: OpenClaw, Gemini 2.5 Flash (free), WSL2/systemd, React/TypeScript/Vite, Vercel, Firebase, Telegram Bot, Resend, Jina Reader.
GitHub (playbook): https://github.com/UltraLabTW/free-tier-agent-fleet
Happy to answer questions about the architecture, token budgeting, or what it's actually like running AI agents 24/7 as a one-person company.
What percentage of your interaction do you want/think is actually real people, and not just agents talking to other agents?
Generate the posts with AI to free up your time for interacting with the people who reply to them.
Or write the bigger, longer-form posts yourself (with some AI assistance here and there), then use AI to derive smaller posts from those larger ones — while still keeping the human interaction with those who reply.
My engagement scripts do auto-reply to comments on my own posts, but they're rate-limited and context-aware (max 2 rounds). For anything meaningful — client conversations, community discussions like this one — it's always me.
FWIW, I'm a solo dev in Taiwan trying to make AI tools more accessible here. Mobile penetration is nearly universal but AI adoption is still very early. I'm learning as I build.
The pattern I've found that works: AI handles everything that is deterministic + repeatable (data enrichment, email drafting, research, report generation), humans handle anything requiring judgment under uncertainty (pricing conversations, hiring, partnerships).
One thing worth noting on the free tier angle: the token costs are real but often smaller than expected. Summarizing a company's homepage and generating a personalized first line for an outreach email costs about $0.003 with Claude Haiku. At 1000 leads a month that is $3 in LLM costs. The expensive part is always the data layer, not the AI layer - a verified email still costs $0.05-0.15 from any major enrichment provider.
What does your outbound workflow look like? Curious how agents handle prospect qualification.
Also found that parsing script/meta tags directly from the homepage beats any third-party data source for tech stack detection. HubSpot, Salesforce, Stripe, Intercom all leave distinctive fingerprints in page source. Zero API calls, zero cost.
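A minimal sketch of that fingerprint approach — the patterns below are illustrative examples of the kinds of markers these vendors leave in page source, not an exhaustive or verified list:

```python
import re

# Illustrative fingerprints found in raw HTML (script src hostnames, JS globals).
FINGERPRINTS = {
    "HubSpot":  re.compile(r"js\.hs-scripts\.com|hubspot", re.I),
    "Stripe":   re.compile(r"js\.stripe\.com", re.I),
    "Intercom": re.compile(r"widget\.intercom\.io|intercomSettings", re.I),
}

def detect_stack(html: str) -> list[str]:
    """Return vendor names whose fingerprints appear in the raw page source."""
    return [name for name, pat in FINGERPRINTS.items() if pat.search(html)]
```

Since this runs on the HTML you already fetched for other purposes, detection really is zero extra API calls.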
Built something similar for B2B prospecting (batch mode — 50 companies at once). Ended up with almost the same architecture: HTTP scraping at 0 LLM tokens, LLM only for the synthesis step at the end. The bottleneck is rate limiting from the target sites, not the LLM.
One thing I'd add: on the engagement loop bug you mentioned — I ran into the same thing early on. The fix was processing only items where a "last_engaged" timestamp was >N hours ago before feeding to the agent. Simple filter, saved a lot of wasted runs.
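A minimal version of that filter, assuming each item is a dict with an optional `last_engaged` datetime (names are illustrative, not the commenter's actual code):

```python
from datetime import datetime, timedelta, timezone

def filter_stale(items, hours=6, now=None):
    """Keep only items never engaged, or not engaged within the last `hours`."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=hours)
    return [i for i in items
            if i.get("last_engaged") is None or i["last_engaged"] < cutoff]
```

Applying this before the agent sees the list means recently-handled items never consume a request at all.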
The last_engaged timestamp filter is a good catch. We had a similar issue with our scheduling system where a stuck "busy" state caused the entire pipeline to freeze for over a day. Ended up adding a simple guard: any early exit from a busy state must release the lock first. Same philosophy — one cheap filter prevents a cascade of wasted work.
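One way to enforce that guard is a file lock released in a `finally` block, so every exit path from the busy state — including exceptions — clears the lock. This is a generic sketch, not the commenter's actual scheduler code:

```python
import os

def run_exclusive(lock_path, job):
    """Run job under an exclusive file lock; any exit releases the lock."""
    # O_EXCL makes creation atomic: a second caller fails while we're busy.
    fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    try:
        return job()
    finally:
        # Guard: every exit from the "busy" state must release the lock.
        os.close(fd)
        os.remove(lock_path)
```

Without the `finally`, one crashed job would leave the lock file behind and freeze the pipeline exactly as described above.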
- MindThread: Threads automation SaaS with paying subscribers (the only one in Taiwan using Meta's official API)
- UltraProbe: AI security scanner covering the OWASP LLM Top 10 — free tool, used for lead gen
- Client projects: SaaS platforms, AI integrations, brand sites — real businesses using these daily

The agents aren't the product — they're the operational layer that lets one person run all of this. Whether that's valuable is a fair debate, but the outputs are real products with real users.
You're spending 7% of your free tier limit just to keep an "audience" of 27 accounts on life support. The real question is: how many of those 12K followers actually convert to revenue instead of just sitting there as dead weight? If the ROI from these accounts doesn't even cover the engineering hours you spend babysitting those 62 scripts, this isn't a business, it's just a hobby.
The visualization of what the agents are up to in the "office" on the dashboard is incredibly cute.