Readit News
Posted by u/calcsam 22 days ago
Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs (github.com/mastra-ai/mast...)
Hi HN, we're Sam, Shane, and Abhi.

Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It’s kind of fun looking back since we were only a few months into building at the time. The HN community gave a lot of enthusiasm and some helpful feedback.

Today we released Mastra 1.0 stable, so we wanted to come back and talk about what's changed.

If you're new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect agents in a local studio, and emit observability data.

Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.

Agent development is changing quickly, so we’ve added a lot since February:

- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.

- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.

- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and save results in Mastra storage.

- Plus a few other features like AI tracing (per-call cost tracking, exportable to Langfuse, Braintrust, etc.), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server.

(That last one took a bit of time, we went down the ESM/CJS bundling rabbithole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)
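The model-string routing described above ("provider/model" strings with fallbacks) can be sketched in plain TypeScript. This is an illustrative stand-in, not Mastra's actual implementation; `parseModelString` and `pickModel` are hypothetical names:

```typescript
// Illustrative sketch of "provider/model" string routing with fallbacks.
// NOT Mastra's implementation; the names here are hypothetical.

type ModelSpec = { provider: string; model: string };

function parseModelString(spec: string): ModelSpec {
  const slash = spec.indexOf("/");
  if (slash <= 0) throw new Error(`Expected "provider/model", got "${spec}"`);
  return { provider: spec.slice(0, slash), model: spec.slice(slash + 1) };
}

// Walk a preference list and return the first model whose provider is
// configured (e.g. has an API key); later entries act as fallbacks.
function pickModel(prefs: string[], configured: Set<string>): ModelSpec {
  for (const pref of prefs) {
    const spec = parseModelString(pref);
    if (configured.has(spec.provider)) return spec;
  }
  throw new Error("No configured provider for any requested model");
}

// With only an Anthropic key configured, the OpenAI entry is skipped
// and the fallback model is chosen instead.
const chosen = pickModel(
  ["openai/gpt-5.2-codex", "anthropic/claude-sonnet"],
  new Set(["anthropic"])
);
```

The typed `ModelSpec` return is what would make editor autocomplete over the provider/model pair possible, which is presumably the appeal of the string form plus TS types that the post mentions.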

Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.

We'll be around and happy to answer any questions!

swyx · 21 days ago
> That last one took a bit of time, we went down the ESM/CJS bundling rabbithole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.

shudders in vietnam war flashbacks congrats on launch guys!!!

for those who want an independent third party endorsement, here's Brex CTO talking about Mastra in their AI engineering stack http://latent.space/p/brex

calcsam · 21 days ago
LOL thanks swyx. Yeah we realized although we _could_ fight that war again...it would be better for everyone if we didn't...
calcsam · 21 days ago
And I actually hadn't seen that Brex piece so thanks for sharing!!
swyx · 17 days ago
use mastra, get acquired!
dataviz1000 · 21 days ago
I worked with Mastra for three months and it is awesome. Thank you for making a great product.

One thing to consider is that it felt clunky working with workflows and branching logic with non-LLM agents. I have a strong preference for using rules-based logic and heuristics first. That way, if I do need to bring in the big-gun LLM models, I already have the context engineering solved. To me, an agent means anything with agency. After a couple of weeks of frustration, I started using my own custom branching workflows.

One reason to use rules: they're free and 10,000x faster, with an LLM agent as a fallback when validation rules don't pass. Instead of running an LLM agent to solve a problem every single time, I can have the LLM write the rules once. The whole thing got messy.
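A rough sketch of that rules-first pattern in plain TypeScript (the `Rule` type and `routeByRules` helper are illustrative names, not Mastra APIs):

```typescript
// Rules-first routing: try cheap deterministic rules, and only call an
// LLM when no rule matches. Illustrative sketch, not a Mastra API.

type Rule<T> = { name: string; matches: (input: T) => boolean; route: string };

function routeByRules<T>(
  input: T,
  rules: Rule<T>[],
  llmFallback: (input: T) => string
): string {
  for (const rule of rules) {
    // Deterministic path: free, and orders of magnitude faster than an LLM call
    if (rule.matches(input)) return rule.route;
  }
  // Only pay for the model when the rules don't cover the input
  return llmFallback(input);
}

// Example: routing support tickets with regex rules
const ticketRules: Rule<string>[] = [
  { name: "billing", matches: (t) => /refund|chargeback/i.test(t), route: "billing" },
  { name: "auth", matches: (t) => /password|2fa|login/i.test(t), route: "auth" },
];

const route = routeByRules("I forgot my password", ticketRules, () => "general");
```

The "have the LLM write the rules once" idea would slot in here as a one-time step that generates the `matches` predicates, after which every subsequent request takes the free path.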

Otherwise, Mastra is best in class for working with TypeScript.

brap · 21 days ago
I learned that every step that can be solved reasonably without an LLM, should be solved without an LLM. Reliability, cost, performance, etc.

I try to transfer as much work as I can out of LLMs and into deterministic steps. This includes most of the “orchestration” layer which is usually deterministic by nature.

Sprinkle a little bit of AI in the right places and you’ll get something that appears genuinely intelligent. Rely too much on AI and it’s dumb as fuck.

Make their tasks very small and simple (ideally, one step), give them only the context and tools that they need and nothing else, and provide them with feedback when they inevitably mess up (ideally, deterministically), and hope for the best.
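That last point, a small LLM step wrapped in deterministic validation with concrete failure feedback, can be sketched like this (the `callLLM` signature is a stand-in for any model call, not a real SDK):

```typescript
// Small LLM step + deterministic validation + feedback loop.
// `callLLM` is a placeholder for a real model call; this sketch just
// shows the retry-with-feedback shape the parent comment describes.

type LLMStep = (prompt: string) => string;

function runWithValidation(
  callLLM: LLMStep,
  prompt: string,
  validate: (out: string) => string | null, // null = ok, string = error feedback
  maxAttempts = 3
): string {
  let currentPrompt = prompt;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const out = callLLM(currentPrompt);
    const error = validate(out); // deterministic check, no LLM involved
    if (error === null) return out;
    // Feed the concrete failure back so the next attempt can correct it
    currentPrompt = `${prompt}\nYour last answer was rejected: ${error}`;
  }
  throw new Error(`Step failed after ${maxAttempts} attempts`);
}
```

Because the validator is deterministic, the feedback is cheap and precise; the only nondeterminism left is inside the one small step.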

calcsam · 21 days ago
Thank you for using us, and for the feedback!

Do you have code snippets you can share about how you wanted to write the rules? I want to understand the desired grammar/syntax better.

cesther · 21 days ago
Within that repo is a concise, high-signal book by Sam Bhagwat, "Principles of Building AI Agents":

https://github.com/mastra-ai/mastra/blob/main/book/principle...

dcreater · 20 days ago
Yes a book is exactly what makes sense in this very immature and fast changing space.

It's especially effective when you litter it around ai company offices in SF

calcsam · 21 days ago
You found the Easter egg!
mrcwinn · 21 days ago
You’re not locked into a model, but you likely are locked in to a platform. This DX and convenience just shifts within the stack where the lock in occurs. Not criticizing - just a choice people should be conscious of.

Another useful question to ask: since you’re likely using 1 of 3 frontier models anyway, do you believe Claude Agent SDK will increasingly become the workflow and runtime of agentic work? Or if not Claude itself, will that set the pattern for how the work is executed? If you do, why use a wrapper?

calcsam · 21 days ago
Re: lessons from coding agents, we're building some of the key abstractions like sandboxes, filesystem, and skills/knowledge as Mastra primitives over the next month.

For any agent you've shipped to production, though, you probably want a harness that's open source so you can more fully control and customize the experience.

mrcwinn · 21 days ago
I think that’s fair, totally, but I also think a Skill would be considered a primitive in and of itself by Anthropic. So to me it’s still wrapping an open primitive. Anyway, trade offs.
nsonha · 21 days ago
Is "from the Gatsby devs" somehow supposed to help the credibility? Looks like a cool framework regardless of that.
avaer · 21 days ago
If I had some heartfelt advice for the Mastra devrel team, it would be to shut up about Gatsby.

I'm a happy Mastra user and I'm biased to their success. But I think linking it to an unrelated project is only going to matter to non-technical CXOs who choose technology based on names not merits. And that's not the audience Mastra needs to appeal to to be successful. Good dev tools and techs trickle from the bottom up in engineering organizations.

calcsam · 21 days ago
Thanks for the feedback. We hear from a lot of devs with fond memories of Gatsby but if it cuts the opposite way for you that's also fair!

Most of us spent a lot of the last decade building Gatsby so it's sort of a personal identity/pride thing for us more than a marketing thing. But maybe we need to keep our identity small! Either way, thanks for saying something, worth thinking about.

esperent · 21 days ago
I've been building with Mastra for a couple of weeks now and loving it, so congratulations on reaching 1.0!

It's built on top of Vercel AI elements/SDK and it seems to me that was a good decision.

My mental heuristic is:

Vercel AI SDK = library, low level

Mastra = framework

Then Vercel AI Elements gives you an optional pre built UI.

However, I read the blog post for the upcoming AI SDK 6.0 release last week, and it seems like it's shifting more towards being a framework as well. What are your thoughts on this? Are these two tools going to align further in the future?

https://vercel.com/blog/ai-sdk-6

deepdarkforest · 21 days ago
Never ask a woman her age, a man his salary, and an agent framework developer his long term plans
calcsam · 21 days ago
Have a ton of respect for the AI SDK team. Initially we only used AI SDK model routing, but now we have our own built-in model routing as well.

I see each of us having different architectures. AI SDK is more low-level, and Mastra is more integrated with storage powering our studio, evals, memory, workflow suspend/resume etc.

esperent · 21 days ago
What a corporate and wishy washy response that just basically repeated what I said back at me.

I was hoping to actually engage with you but I guess you just came here to do marketing.

> AI SDK is more low-level

AI SDK was more low level. My question was, since the latest V6 release is moving towards higher level components, what do you think about that? How will you continue to differentiate your product if Vercel makes moves to eat your lunch?

That's almost certainly their intention here, following their highly successful Next.js playbook: start by creating low level dev tools, gradually expand the scope, make sure that all the docs and setup guides you towards deploying on their infrastructure.

holoduke · 21 days ago
Off-topic, but how much is AI used for generating code at your place these days? Curious because we've seen a major shift in recent months where almost everything is generated, still human-checked with human quality gates. A big difference compared to last year.
calcsam · 21 days ago
There's the normal stuff you'd expect -- we're all Opus-pilled, use Claude Code, a PR review bot, etc. But it's been especially helpful with highly templatized code like our storage adapters: we already have 10-15 working examples, which makes the n+1st adapter almost trivial to write.
scirob · 21 days ago
Happy you guys are moving forward. Just to make the comment section a bit more balanced, I wanted to share my experience. Mastra is by design batteries-included and opinionated; when you don't fit into an existing pattern, it's difficult to stay productive. In our system we've had to go out and create our own Hono routes several times. We currently have someone rebuilding everything with langgraph/langchain and will then write a detailed comparison.

For sure there are lots of good ideas in Mastra, and I'm happy that it's JS/TS first. I think you'll continue to grow because of the TS-first approach.

asaiyer · 20 days ago
Thanks for the feedback! Our mission with Mastra is to make sure users can get further faster with the standard primitives of AI engineering. We learned over the last year that while we want to be opinionated, users need to be able to eject and adapt Mastra however they want.

We added server adapters so users didn't have to deal with our opinions on esbuild, we made storage more configurable and extendable, and we invested more into our workflows engine, which will be the foundation for distributed execution in the future.

We are always willing to hear feedback and welcome changes! Would love to know what you learn from the langchain comparison, and happy to improve Mastra from it. Thanks!