I think filesystems are the right abstraction for agent memory. Not vector databases, not key-value stores, not custom memory APIs.
Why? Because agents already know how to use files. Claude Code writes notes to ~/.claude/. Cursor stores context in project files. Every coding agent that works well has converged on the same pattern: just use files. The model doesn't need to learn a new API, and you don't need a retrieval pipeline.
The problem is that real filesystems don't work in production.
agent-vfs gives each user a virtual filesystem backed by a single database table. 11 operations (read, write, edit, ls, grep, glob, etc.), SQLite for local dev, Postgres for production. Works with the Vercel AI SDK, OpenAI SDK, and Anthropic SDK out of the box.
npm install agent-vfs
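For anyone wondering what "a virtual filesystem backed by a single database table" looks like in practice, here's a minimal sketch of the idea. This is not agent-vfs's actual code or API; a Map stands in for what would be a SQLite/Postgres table keyed on (user_id, path), so the example is self-contained:

```typescript
// Sketch of a per-user virtual filesystem. In production the store would
// be one table with a (user_id, path) primary key; a Map stands in here.
type Row = { content: string; updatedAt: number };

class AgentVFS {
  private rows = new Map<string, Row>(); // key = `${userId}:${path}`

  constructor(private userId: string) {}

  private key(path: string): string {
    return `${this.userId}:${path}`;
  }

  write(path: string, content: string): void {
    this.rows.set(this.key(path), { content, updatedAt: Date.now() });
  }

  read(path: string): string {
    const row = this.rows.get(this.key(path));
    if (!row) throw new Error(`ENOENT: ${path}`);
    return row.content;
  }

  // "ls" is just a prefix scan over the path column.
  ls(dir: string): string[] {
    const prefix = `${this.userId}:${dir.endsWith("/") ? dir : dir + "/"}`;
    return [...this.rows.keys()]
      .filter((k) => k.startsWith(prefix))
      .map((k) => k.slice(this.userId.length + 1));
  }

  // "grep" scans file contents; in SQL this would be LIKE or full-text search.
  grep(pattern: RegExp): string[] {
    return [...this.rows.entries()]
      .filter(([k, row]) => k.startsWith(`${this.userId}:`) && pattern.test(row.content))
      .map(([k]) => k.slice(this.userId.length + 1));
  }
}

const vfs = new AgentVFS("user-123");
vfs.write("/notes/todo.md", "- ship the demo");
vfs.write("/notes/ideas.md", "memory as files");
console.log(vfs.ls("/notes"));  // ["/notes/todo.md", "/notes/ideas.md"]
console.log(vfs.grep(/demo/));  // ["/notes/todo.md"]
```

The nice property of this shape is that multi-tenancy, backups, and transactions fall out of the database for free, while the model still sees the file operations it already knows.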
I've been working on a slightly different angle with MemoryLane (https://github.com/deusXmachina-dev/memorylane) - instead of giving agents a place to write their own memories, it captures the user's screen activity and makes it queryable. So the agent gets context about what the human was doing, not just what the agent itself did. It plugs in via MCP so Claude Code / Cursor can just ask it stuff.
I think there's something interesting in combining both - agent-vfs for the agent's own state, and something like MemoryLane for the human side. How do you think about that boundary between what the agent remembers vs what it knows about the user?
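To make the "queryable screen activity" half concrete, here's a rough sketch of what such a query tool reduces to: a time-plus-text filter over a log of captured events. The event shape and function name are my own invention for illustration, not MemoryLane's actual schema or MCP interface:

```typescript
// Hypothetical event shape; MemoryLane's real capture schema may differ.
type ActivityEvent = {
  ts: number;    // epoch ms
  app: string;   // focused application
  title: string; // window title or captured text snippet
};

// An MCP tool like "what was the user doing around time T?" would wrap
// a query of roughly this shape: filter by time window and search text.
function queryActivity(
  log: ActivityEvent[],
  opts: { since?: number; until?: number; text?: string }
): ActivityEvent[] {
  return log.filter((e) =>
    (opts.since === undefined || e.ts >= opts.since) &&
    (opts.until === undefined || e.ts <= opts.until) &&
    (opts.text === undefined ||
      `${e.app} ${e.title}`.toLowerCase().includes(opts.text.toLowerCase()))
  );
}

const log: ActivityEvent[] = [
  { ts: 1000, app: "Chrome", title: "Postgres JSONB docs" },
  { ts: 2000, app: "VS Code", title: "vfs.ts" },
];
console.log(queryActivity(log, { text: "postgres" }).length); // 1
```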
We need vector DBs just because of the volume of data. But on a different layer we want to help create file-based instructions/skills for patterns we detect and think can be automated.