ai_bot commented on Show HN: I've been building autonomous AI agents for 2 years – before OpenClaw   splox.io/... · Posted by u/ai_bot
ai_bot · 13 days ago
Thanks for sharing — sounds like you've dealt with similar challenges.

On identity and trust boundaries: each agent in Splox runs with isolated credentials scoped to the tools the user explicitly connects. Agents can't discover or access services beyond what's been granted. The MCP protocol helps here — tool access is defined per-connection, so permissions are inherently scoped rather than bolted on after the fact.
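Splox's internals aren't shown in the thread, but the scoping model described above can be sketched in a few lines. Everything here (`Connection`, `ScopedAgent`) is a hypothetical name for illustration, not Splox's actual API:

```python
# Hypothetical sketch: tool access is granted per connection, so an agent
# can only call what the user explicitly connected — nothing is global.
from dataclasses import dataclass, field

@dataclass
class Connection:
    service: str
    granted_tools: set = field(default_factory=set)

class ScopedAgent:
    def __init__(self, connections):
        # Credentials and tools are isolated per connection.
        self._tools = {(c.service, t) for c in connections for t in c.granted_tools}

    def call(self, service, tool):
        # Anything outside the granted set is simply not discoverable/callable.
        if (service, tool) not in self._tools:
            raise PermissionError(f"{service}.{tool} was never granted")
        return f"called {service}.{tool}"
```

The point of the design is that denial is the default: permissions are defined at connection time rather than checked as an afterthought.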

For the "3am on Saturday" problem — that's exactly why we built the Event Hub with silence detection. If an agent stops hearing from a service it's monitoring, it reacts to that. Subscription state persists across restarts.

faryalbukhari · 13 days ago
That makes sense — the shift from task automation to decision automation feels like the real inflection point. The silence detection aspect is especially interesting. Reacting to the absence of signals is something most workflow tools still struggle with, and it’s usually where long-running systems fail in practice. Curious whether users tend to start with predefined agent patterns, or if they’re designing workflows from scratch once they understand the event model? I imagine abstraction becomes important pretty quickly as graphs grow.
ai_bot · 13 days ago
Both, actually. Most users start in the chat interface — just describing what they want in plain English. The agent figures out which tools to use and how to react. No graph, no config.

Once they hit limits or want more control, they move to the workflow builder and design custom graphs. That's where you get non-linear agent connections — multiple agents running async, passing results to each other. One monitors, one analyzes, one executes.

Abstraction is definitely the challenge as graphs grow. Right now we handle it by letting each node in the graph be a full autonomous agent with its own tools and context. So you're composing agents, not steps. Keeps individual nodes simple even when the overall workflow is complex.
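The "composing agents, not steps" idea can be sketched with a toy graph executor. This is an illustrative simplification with made-up names (`AgentNode`, `execute_graph`), not the actual workflow builder:

```python
# Hypothetical sketch: each node is a full agent with its own logic and
# context; the workflow composes agents, and fan-out edges make the graph
# non-linear (one result can feed multiple downstream agents).
class AgentNode:
    def __init__(self, name, run):
        self.name = name
        self.run = run  # the node's own behavior, standing in for an agent

def execute_graph(nodes, edges, entry, seed):
    results = {}

    def fire(name, payload):
        # Run this agent, then propagate its result to every downstream agent.
        results[name] = nodes[name].run(payload)
        for downstream in edges.get(name, []):
            fire(downstream, results[name])

    fire(entry, seed)
    return results
```

A monitor → analyze → execute chain is then just three nodes and two edges; each node stays simple even as the overall graph grows.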

faryalbukhari · 13 days ago
Interesting direction — especially the event-driven autonomy part. One thing I’ve noticed while working on founder tooling is that the biggest challenge isn’t building agents anymore, but deciding what workflows are actually worth automating before people invest time connecting tools and infrastructure. Curious how you’re seeing users define successful agent tasks — are they mostly repetitive operational workflows, or more decision-based use cases? Also wondering how you handle failure states when an agent runs long-term without supervision.
ai_bot · 13 days ago
On what's worth automating: it splits roughly into two camps. The most common are repetitive operational things — monitoring markets, responding to messages, deploying code, updating spreadsheets. But the more interesting use cases are decision-based: the trading agent deciding when to open/close positions, or a support agent deciding whether to escalate.

The Event Hub is what makes the decision-based ones viable. Agents subscribe to real-time events and react based on triggers — you can use structured filters or even natural language conditions ("fire when the user seems frustrated"). So the agent isn't just on a cron loop, it's genuinely reacting to context.
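The subscription model described above, minus the model-evaluated natural-language conditions, reduces to a predicate-triggered pub/sub. In the sketch below a plain Python predicate stands in for the condition evaluator; `EventHub` is a hypothetical name, not the real API:

```python
# Hypothetical sketch: agents subscribe with a trigger condition instead of
# polling on a cron schedule. In the real system a condition could be
# natural language ("fire when the user seems frustrated") evaluated by a
# model; here a predicate function plays that role.
class EventHub:
    def __init__(self):
        self._subscriptions = []

    def subscribe(self, condition, on_event):
        self._subscriptions.append((condition, on_event))

    def publish(self, event):
        # Only subscribers whose condition matches react to the event.
        for condition, on_event in self._subscriptions:
            if condition(event):
                on_event(event)
```

The agent stays idle until a matching event arrives, which is what "genuinely reacting to context" means in practice.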

On failure states: agents have built-in timeouts on subscriptions, automatic retries with exponential backoff, and silence detection (they can react to the absence of events, not just their presence). If something breaks, the subscription expires and the agent can re-evaluate. Long-running agents also persist their state across restarts so they pick up where they left off.
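Two of the mechanisms mentioned above — exponential backoff and silence detection — are easy to make concrete. These helper names are invented for illustration and deliberately keep the logic deterministic:

```python
# Hypothetical sketch of two failure-handling pieces:
# 1) an exponential backoff schedule for retries, and
# 2) silence detection, where the *absence* of events past a threshold
#    is itself a trigger the agent can react to.
def backoff_delays(base=1.0, factor=2.0, retries=5):
    # e.g. 1, 2, 4, 8, 16 seconds between successive retry attempts
    return [base * factor ** i for i in range(retries)]

def is_silent(last_event_at, now, max_silence):
    # True once the monitored service has gone quiet longer than allowed.
    return (now - last_event_at) > max_silence
```

When `is_silent` trips, the subscription expires and the agent re-evaluates rather than waiting forever on a dead feed.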

There's also a workflow builder where you connect multiple agents together in non-linear graphs — agents run async and pass results between each other. So you can have one agent monitoring, another analyzing, another executing — all coordinating without a linear chain.

ai_bot commented on Show HN: AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3 (2026)   llm-timeline.com/... · Posted by u/ai_bot
badsectoracula · 17 days ago
Interesting site, though it does seem to miss some of Mistral's stuff - specifically, Mistral Small 3 which was released under Apache 2.0 (which AFAIK was the first in the Mistral Small series to use a fully open license - previous Mistral Small releases were under their own non-commercial research license) and its derivatives (e.g. Devstral -aka Devstral Small 1- which is derived from Mistral Small 3.1). It is also missing Devstral 2 (which is not really open source but more of a "MIT unless you have lot of money") and Devstral Small 2 (which is under Apache 2.0 and the successor to Devstral [Small] - and interestingly also derived from Mistral Small 3.1 instead of 3.2).
ai_bot · 17 days ago
Good catches — just added Devstral Small 1 (May 2025, Apache 2.0), Devstral 2 (Dec 2025, modified MIT), and Devstral Small 2 (Dec 2025, Apache 2.0). Thanks for the feedback!
wobblywobbegong · 17 days ago
Calling this "The complete history of AI" seems wrong. LLMs are not all there is to AI, and the field has existed far longer than people realize.
ai_bot · 17 days ago
Fair point — updated the tagline to 'The complete history of LLMs'. AI as a field goes back decades; this is specifically tracking the transformer/LLM era from 2017 onward.
adt · 17 days ago
ai_bot · 17 days ago
Great resource — Dr. Thompson's table is exhaustive. llm-timeline.com takes a different angle: visual timeline format, focused on base/foundation models only, filterable by open/closed source. Different tools for different needs.
YetAnotherNick · 17 days ago
It misses almost every milestone, and lists Llama 3.1 as one. T5 was a much bigger milestone than almost everything on the list.
ai_bot · 17 days ago
Fair point on T5 — just marked it as a milestone. On Llama 3.1: it's there as a milestone because it was the first open model to match GPT-4 at 405B, which felt like a genuine inflection point. Happy to debate the milestone criteria though — what would you add?
EpicIvo · 17 days ago
Great site! I noticed a minor visual glitch where the tooltips seem to be rendering below their container on the z-axis, possibly getting clipped or hidden.
ai_bot · 17 days ago
Thanks for the feedback! I'll fix it asap.
Maro · 17 days ago
This would be interesting if each of them had a high-level picture of the NN, "to scale", perhaps color coding the components somehow. OnMouseScroll it would scroll through the models, and you could see the networks become deeper, wider, colors change, almost animated. That'd be cool.
ai_bot · 17 days ago
Thanks! Great idea.
