Posted by u/ppcvote 4 days ago
Show HN: AI agents run my one-person company on Gemini's free tier – $0/month
I'm a solo dev in Taiwan. I built 4 AI agents that handle content, sales leads, security scanning, and ops for my tech agency — all on Gemini 2.5 Flash free tier (1,500 req/day). I use ~105. Monthly LLM cost: $0.

Architecture: 4 agents on OpenClaw (open source), running on WSL2 at home with 25 systemd timers.
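For reference, one of those timer/service pairs might look like this (unit names and paths are hypothetical; the post doesn't publish its unit files):

```ini
# Hypothetical example: run the research agent hourly.
# ~/.config/systemd/user/agent-research.timer
[Unit]
Description=Run research agent hourly

[Timer]
OnCalendar=hourly
Persistent=true
RandomizedDelaySec=300

[Install]
WantedBy=timers.target

# ~/.config/systemd/user/agent-research.service
[Unit]
Description=Research agent (one-shot run)

[Service]
Type=oneshot
ExecStart=/home/user/agents/research.sh
```

On WSL2 this assumes systemd is enabled (`[boot]` / `systemd=true` in /etc/wsl.conf); `Persistent=true` catches up on runs missed while the machine was off.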

What they do every day:

- Generate 8 social posts across platforms (quality-gated: generate → self-review → rewrite if score < 7/10)
- Engage with community posts and auto-reply to comments (context-aware, max 2 rounds)
- Research via RSS + HN API + Jina Reader → feed intelligence back into content
- Run UltraProbe (AI security scanner) for lead generation
- Monitor 7 endpoints, flag stale leads, sync customer data
- Auto-post blog articles to Discord when I git push (0 LLM tokens — uses commit message directly)
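The quality gate (generate → self-review → rewrite if score < 7/10) can be sketched as a small loop. The function shapes here are my assumption; in practice both `generate` and `review` would hit the LLM, so they are injected to keep the loop itself testable:

```python
def quality_gated_post(topic, generate, review, max_rounds=2, threshold=7):
    """Generate, self-review, and rewrite until the score clears the bar.

    generate(topic, feedback) returns a draft; review(draft) returns
    (score_out_of_10, feedback). Both are LLM-backed in practice; they
    are injected here so the gating logic stays testable offline.
    """
    draft = generate(topic, "")
    for _ in range(max_rounds):
        score, feedback = review(draft)
        if score >= threshold:
            break  # good enough, publish
        draft = generate(topic, feedback)  # rewrite with reviewer feedback
    return draft
```

Capping the rounds matters on a free tier: each rewrite is another request against the daily budget.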

The token optimization trick: agents never have long conversations. Every request is (1) read pre-computed intelligence files (local markdown, 0 tokens), (2) one focused prompt with all context injected, (3) one response → parse → act → done. The research pipeline (RSS, HN, web scraping) costs 0 LLM tokens — it's pure HTTP + Jina Reader. The LLM only touches creative/analytical work.
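A sketch of that request shape. The file layout and the `call_llm` wrapper are assumptions, not the author's code; any single-prompt Gemini client fits the `call_llm` slot:

```python
from pathlib import Path

def one_shot(task: str, intel_dir: str, call_llm) -> str:
    """One request, no conversation: read local context, send one prompt.

    call_llm is any single-prompt completion function (e.g. a thin
    Gemini client wrapper). Reading the intelligence files is plain
    file I/O, so it costs zero tokens.
    """
    context = "\n\n".join(
        p.read_text(encoding="utf-8") for p in sorted(Path(intel_dir).glob("*.md"))
    )
    prompt = f"Context:\n{context}\n\nTask: {task}\nRespond once, completely."
    return call_llm(prompt)  # parse -> act -> done; no follow-up turns
```

The point is that the conversation history never grows: every run pays only for one prompt and one response.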

Real numbers:

- 27 automated Threads accounts, 12K+ followers, 3.3M+ views
- 25 systemd timers, 62 scripts, 19 intelligence files
- RPD utilization: 7% (105/1,500) — 93% headroom left
- Monthly cost: $0 LLM + ~$5 infra (Vercel hobby + Firebase free)

What went wrong:

- $127 Gemini bill in 7 days. Created an API key from a billing-enabled GCP project instead of AI Studio. Thinking tokens ($3.50/1M) with no rate cap. Lesson: always create keys from AI Studio directly.
- Engagement loop bug: iterated ALL posts instead of top N. Burned 800 RPD in one day and starved everything else.
- Telegram health check called getUpdates, conflicting with the gateway's long-polling. 18 duplicate messages in 3 minutes.
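A minimal guard for that engagement-loop bug, capping work at the top N posts instead of iterating all of them; the ranking key and the cap value are my assumptions:

```python
ENGAGE_TOP_N = 20  # assumed cap per run, not the author's actual number

def select_posts_to_engage(posts, n=ENGAGE_TOP_N):
    """Rank posts by recent activity and engage only the top N.

    Iterating every post once burned 800 requests in a day and starved
    the other agents; a hard cap keeps the engagement loop inside its
    share of the daily request budget.
    """
    ranked = sorted(posts, key=lambda p: p.get("recent_replies", 0), reverse=True)
    return ranked[:n]
```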

The site (https://ultralab.tw) is fully bilingual (zh-TW/en) with 21 blog posts, and yes — the i18n, blog publishing, and Discord notifications are all part of the automated pipeline.

Live agent dashboard: https://ultralab.tw/agent

Stack: OpenClaw, Gemini 2.5 Flash (free), WSL2/systemd, React/TypeScript/Vite, Vercel, Firebase, Telegram Bot, Resend, Jina Reader.

GitHub (playbook): https://github.com/UltraLabTW/free-tier-agent-fleet

Happy to answer questions about the architecture, token budgeting, or what it's actually like running AI agents 24/7 as a one-person company.

CubsFan1060 · 4 days ago
This feels like we're still on the march to the dead internet.

What percentage of your interaction do you want/think is actually real people, and not just agents talking to other agents?

hk1337 · 4 days ago
I'd be okay with it generating the posts and the financial reports and such, but you need some human interaction in there.

Generate the posts with AI so it can free up your time to interact with people replying to the post.

Or write the bigger, longer-form posts yourself, with maybe some AI assistance in places here and there, then use AI to create smaller posts from your larger ones, still keeping the human interaction with those who reply to the posts.

ppcvote · 3 days ago
100% agree. Content generation is where agents shine — it's repetitive and time-consuming. But genuine engagement is where trust gets built, and that needs to be real.

My engagement scripts do auto-reply to comments on my own posts, but they're rate-limited and context-aware (max 2 rounds). For anything meaningful — client conversations, community discussions like this one — it's always me.

rappatic · 4 days ago
I guess it shouldn’t be surprising for this post to be LLM-written when the author’s point is that they use LLMs to write a bunch of social media posts, but it still makes me a little sad.
ppcvote · 3 days ago
I get why it reads that way, but this post was written by me. I actually spent more time on this Show HN than on most client deliverables this week. The irony isn't lost on me though — when you work with LLMs all day, your own writing starts picking up the patterns.

FWIW, I'm a solo dev in Taiwan trying to make AI tools more accessible here. Mobile penetration is nearly universal but AI adoption is still very early. I'm learning as I build.

RovaAI · 2 days ago
This is the right framing for the current moment.

The pattern I've found that works: AI handles everything that is deterministic + repeatable (data enrichment, email drafting, research, report generation), humans handle anything requiring judgment under uncertainty (pricing conversations, hiring, partnerships).

One thing worth noting on the free tier angle: the token costs are real but often smaller than expected. Summarizing a company's homepage and generating a personalized first line for an outreach email costs about $0.003 with Claude Haiku. At 1000 leads a month that is $3 in LLM costs. The expensive part is always the data layer, not the AI layer - a verified email still costs $0.05-0.15 from any major enrichment provider.
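That arithmetic as a quick sanity check; all prices are the figures quoted above, and the $0.10 enrichment price is simply the midpoint of the quoted $0.05-0.15 range:

```python
def monthly_llm_cost(leads: int, per_lead_usd: float = 0.003) -> float:
    """LLM cost for one summarized homepage + personalized first line per lead."""
    return leads * per_lead_usd

def monthly_enrichment_cost(leads: int, per_email_usd: float = 0.10) -> float:
    """Verified-email cost at the midpoint of the quoted $0.05-0.15 range."""
    return leads * per_email_usd
```

At 1,000 leads that is roughly $3 of LLM spend against roughly $100 of data spend, which is the point being made: the data layer dominates the bill.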

What does your outbound workflow look like? Curious how agents handle prospect qualification.

RovaAI · 2 days ago
The lead research pipeline resonates. Hit the same noise problem when querying by company name — "Notion" matches hundreds of headlines about the concept of notion, not the app. Fixed it by combining domain + company name in the query string.
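A minimal version of that query fix (the function name is mine):

```python
def news_query(company: str, domain: str) -> str:
    """Anchor a company-name search on its domain to cut false matches.

    "Notion" alone matches the common noun in hundreds of headlines;
    adding notion.so narrows results to the company.
    """
    return f'"{company}" "{domain}"'
```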

Also found that parsing script/meta tags directly from the homepage beats any third-party data source for tech stack detection. HubSpot, Salesforce, Stripe, Intercom all leave distinctive fingerprints in page source. Zero API calls, zero cost.
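A sketch of that fingerprint check. The marker strings are illustrative examples of the kinds of hostnames these vendors embed in page source, not an exhaustive or verified list:

```python
import urllib.request

# Illustrative markers only; real detection would need a fuller, verified list.
FINGERPRINTS = {
    "HubSpot": "hs-scripts.com",
    "Stripe": "js.stripe.com",
    "Intercom": "intercom.io",
    "Salesforce": "salesforce.com",
}

def detect_stack(html: str) -> list[str]:
    """Return vendors whose script/meta fingerprints appear in the page source."""
    return [vendor for vendor, marker in FINGERPRINTS.items() if marker in html]

def fetch_homepage(url: str, timeout: int = 10) -> str:
    """Fetch raw homepage HTML; zero LLM tokens, zero third-party API calls."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")
```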

Built something similar for B2B prospecting (batch mode — 50 companies at once). Ended up with almost the same architecture: HTTP scraping at 0 LLM tokens, LLM only for the synthesis step at the end. The bottleneck is rate limiting from the target sites, not the LLM.

One thing I'd add: on the engagement loop bug you mentioned — I ran into the same thing early on. The fix was processing only items where a "last_engaged" timestamp was >N hours ago before feeding to the agent. Simple filter, saved a lot of wasted runs.
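That filter might look like this (the field name follows the description above; the default N is my assumption):

```python
import time

def needs_engagement(items, now=None, stale_hours=6):
    """Keep only items not engaged within the last N hours.

    Items carry a "last_engaged" Unix timestamp; anything touched more
    recently than the cutoff is skipped before the agent ever sees it,
    so no LLM request is spent re-processing fresh items.
    """
    now = now if now is not None else time.time()
    cutoff = now - stale_hours * 3600
    return [i for i in items if i.get("last_engaged", 0) <= cutoff]
```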

ppcvote · 2 days ago
The domain + company name trick is solid — wish I'd known that earlier. And agreed on parsing page source for tech stack detection, it's surprisingly reliable and the cost is literally zero.

The last_engaged timestamp filter is a good catch. We had a similar issue with our scheduling system where a stuck "busy" state caused the entire pipeline to freeze for over a day. Ended up adding a simple guard: any early exit from a busy state must release the lock first. Same philosophy — one cheap filter prevents a cascade of wasted work.
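One way to enforce that release-on-exit guard in Python (a sketch, not the author's actual code):

```python
from contextlib import contextmanager

@contextmanager
def busy_lock(state: dict):
    """Mark the pipeline busy and guarantee release on any exit path.

    A stuck "busy" flag can freeze the whole pipeline; try/finally means
    early returns and exceptions still release it.
    """
    state["busy"] = True
    try:
        yield state
    finally:
        state["busy"] = False
```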

TutleCpt · 4 days ago
It sounds like a business that produces no valuable product and no real use. This is what the internet has become.
ppcvote · 3 days ago
I understand the skepticism. To clarify what's actually shipped:

- MindThread: Threads automation SaaS with paying subscribers (only one in Taiwan using Meta's official API)
- UltraProbe: AI security scanner covering OWASP LLM Top 10 — free tool, used for lead gen
- Client projects: SaaS platforms, AI integrations, brand sites — real businesses using these daily

The agents aren't the product — they're the operational layer that lets one person run all of this. Whether that's valuable is a fair debate, but the outputs are real products with real users.

veunes · 3 days ago
Impressive numbers for a spam bot, but what's the point if the content is generated by an LLM and the comments are written by other agents? The internet is already turning into an endless feedback loop of generated garbage where the only goal is to scrape leads from other bots.

You're spending 7% of your free tier limit just to keep an "audience" of 27 accounts on life support. The real question is: how many of those 12K followers actually convert to revenue instead of just sitting there as dead weight? If the ROI from these accounts doesn't even cover the engineering hours you spend babysitting those 62 scripts, this isn't a business, it's just a hobby.

ppcvote · 3 days ago
Fair challenge on the ROI question.

Honest origin story: I work in financial services. Every day I need to post updates, share market info, and stay visible to clients — it's part of the job. I built MindThread because I was spending hours on scheduling tools with terrible UX instead of actually talking to people. I was my own first customer.

After launching, I realized the same problem exists across Taiwan's financial and insurance industry — thousands of advisors doing the same manual posting grind every day. That's the real market.

My view: social media time should be spent on actual conversations, not fighting bad interfaces. The agents handle the repetitive publishing. The human interaction stays human.

brunohaid · 4 days ago
OK, the "what's the endgame for flooding the zone with agent outputs" question aside:

The visualization of what the agents are up to in the "office" on the dashboard is incredibly cute.

jimmis · 4 days ago
I thought so too... but if you refresh the page, it's just a pre-baked animation. A fun idea for somebody though; a little aquarium full of bots doing fake office tasks (I'm sure it's been done already).
veunes · 3 days ago
"Fake it till you make it" in a nutshell. Half the AI wrappers on the market do the exact same thing: they render pretty activity charts that have absolutely zero correlation with actual VRAM consumption or server-side inference latency
ppcvote · 3 days ago
The animations are CSS-driven, but the data behind them is real — agent heartbeats, task counts, and activity logs are pulled from actual systemd timer outputs. It's not a mock dashboard, though I'll admit the visual polish probably makes it look more "produced" than a typical monitoring tool.
ppcvote · 3 days ago
Thanks! The dashboard is one of my favorite parts of this project. It actually pulls real agent activity data — the visualizations are live, not pre-rendered. Working on making it a product: customizable agent dashboards for other teams running multi-agent setups.
