Readit News
Posted by u/staranjeet a year ago
Show HN: Mem0 – open-source Memory Layer for AI apps (github.com/mem0ai/mem0)
Hey HN! We're Taranjeet and Deshraj, the founders of Mem0 (https://mem0.ai). Mem0 adds a stateful memory layer to AI applications, allowing them to remember user interactions, preferences, and context over time. This enables AI apps to deliver increasingly personalized and intelligent experiences that evolve with every interaction. There’s a demo video at https://youtu.be/VtRuBCTZL1o and a playground to try out at https://app.mem0.ai/playground. You'll need to sign up to use the playground – this helps ensure responses are more tailored to you by associating interactions with an individual profile.

Current LLMs are stateless—they forget everything between sessions. This limitation leads to repetitive interactions, a lack of personalization, and increased computational costs because developers must repeatedly include extensive context in every prompt.

When we were building Embedchain (an open-source RAG framework with over 2M downloads), users constantly shared their frustration with LLMs’ inability to remember anything between sessions. They had to repeatedly input the same context, which was costly and inefficient. We realized that for AI to deliver more useful and intelligent responses, it needed memory. That’s when we started building Mem0.

Mem0 employs a hybrid datastore architecture that combines graph, vector, and key-value stores to store and manage memories effectively. Here is how it works:

Adding memories: When you use mem0 with your AI app, it takes in any messages or interactions and automatically detects the important parts to remember.

Organizing information: Mem0 sorts this information into different categories:

- Facts and structured data go into a key-value store for quick access.
- Connections between things (like people, places, or objects) are saved in a graph store that understands relationships between different entities.
- The overall meaning and context of conversations are stored in a vector store that allows similar memories to be found later.
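A toy sketch of that routing step might look like the following. The class, field names, and heuristics here are illustrative stand-ins, not Mem0's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class HybridStore:
    """Toy hybrid datastore: key-value facts, graph edges, and text passages."""
    facts: dict = field(default_factory=dict)    # key-value store: fast fact lookup
    edges: list = field(default_factory=list)    # graph store: (subject, relation, object)
    passages: list = field(default_factory=list) # stand-in for a vector store

    def add_fact(self, key, value):
        self.facts[key] = value

    def add_relation(self, subject, relation, obj):
        self.edges.append((subject, relation, obj))

    def add_passage(self, text):
        # A real system would embed `text` and upsert the vector here.
        self.passages.append(text)

store = HybridStore()
store.add_fact("favorite_food", "sushi")
store.add_relation("alice", "lives_in", "New Orleans")
store.add_passage("Alice mentioned she collects records from New Orleans artists.")
```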

Retrieving memories: When given an input query, Mem0 searches for and retrieves related stored information by leveraging a combination of graph traversal techniques, vector similarity, and key-value lookups. It prioritizes the most important, relevant, and recent information, making sure the AI always has the right context, no matter how much memory is stored.
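As an illustration of that prioritization, a ranking function could blend those signals like this. The weights and half-life are arbitrary choices for the sketch, not Mem0's actual scoring:

```python
import math
import time

def score(similarity, importance, last_access_ts, now=None, half_life_s=86_400.0):
    """Combine vector similarity, stored importance, and recency into one rank.

    `recency` decays exponentially from 1.0 toward 0 as the memory ages;
    the 0.5/0.3/0.2 weights are illustrative.
    """
    now = time.time() if now is None else now
    recency = math.exp(-(now - last_access_ts) / half_life_s)
    return 0.5 * similarity + 0.3 * importance + 0.2 * recency

now = 1_000_000.0
fresh = score(0.8, 0.5, last_access_ts=now, now=now)            # just accessed
stale = score(0.8, 0.5, last_access_ts=now - 7 * 86_400, now=now)  # a week old
```

With identical similarity and importance, the week-old memory ranks strictly below the fresh one, which is the "most recent wins the tiebreak" behavior described above.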

Unlike traditional AI applications that operate without memory, Mem0 introduces a continuously learning memory layer. This reduces the need to repeatedly include long blocks of context in every prompt, which lowers computational costs and speeds up response times. As Mem0 learns and retains information over time, AI applications become more adaptive and provide more relevant responses without relying on large context windows in each interaction.

We’ve open-sourced the core technology that powers Mem0—specifically the memory management functionality in the vector and graph databases, as well as the stateful memory layer—under the Apache 2.0 license. This includes the ability to add, organize, and retrieve memories within your AI applications.

However, certain features optimized for production use, such as low-latency inference and the scalable graph and vector datastores for real-time memory updates, are part of our paid platform. These advanced capabilities are not part of the open-source package but are available for those who need to scale memory management in production environments.

We’ve made both our open-source version and platform available for HN users. You can check out our GitHub repo (https://github.com/mem0ai/mem0) or explore the platform directly at https://app.mem0.ai/playground.

We’d love to hear what you think! Please feel free to dive into the playground, check out the code, and share any thoughts or suggestions with us. Your feedback will help shape where we take Mem0 from here!

jedwhite · a year ago
Congrats on the launch. Adding a memory layer to LLMs is a real pain point. I've been experimenting with mem0 and it solves a real problem that I failed to solve myself, and we're going to use it in production.

One question that I've heard a few times now: will you support the open source version as a first class citizen for the long term? A lot of open source projects with a paid version follow a similar strategy. They use the open source repo to get traction, but then the open source version gets neglected and users are eventually pushed to the paid version. How committed are you to supporting the open source version long term?

AngelaHoover · a year ago
Over time, I can imagine there's going to be a lot of sensitive information being stored. How are you handling privacy?
deshraj · a year ago
We already support inclusion and exclusion of memories, so the developer can control what their AI app/agent should and shouldn't remember. For example, you can specify something like this:

- Inclusion prompt: User's travel preferences and food choices
- Exclusion prompt: Credit card details, passport number, SSN, etc.
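Since a prompt-based exclusion ultimately relies on the model honoring the instruction, a deterministic regex pass over candidate memories is one hypothetical backstop; this pattern list is illustrative and not part of Mem0:

```python
import re

# Hypothetical deterministic backstop for an exclusion prompt: redact
# obvious sensitive patterns before a candidate memory is ever persisted.
EXCLUDE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN shape
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # credit-card-like digit runs
]

def redact(text: str) -> str:
    """Replace any matching sensitive span with a placeholder."""
    for pattern in EXCLUDE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

clean = redact("My SSN is 123-45-6789 and I love sushi.")
```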

That said, we definitely think there is scope to make this better, and we are actively working on it. Please let us know if you have feedback or suggestions. Thanks!

lionkor · a year ago
An exclusion... prompt? Do you just rely on the LLM to follow instructions perfectly?
weisser · a year ago
Congrats on the launch!

I messed around with the playground onboarding...here's the output:

With Memory Mem0.ai I know that you like to collect records from New Orleans artists, and you enjoy running.

Relevancy: 9/10

Without Memory I don’t have any personal information about you. I don’t have the ability to know or remember individual users. My main function is to provide information and answer questions to the best of my knowledge and training. How can I assist you today?

Relevancy: 4/10

--

It's interesting that "With Memory" is 9/10 Relevancy even though it is 100% duplication of what I had said. It feels like that would be 10/10.

It's also interesting that "Without Memory" is 4/10 — it seems to be closer to 0/10?

Curious how you're thinking about calculating relevancy.

soulofmischief · a year ago
This is why in my system I have more specific, falsifiable metrics (freshness, confidence, etc.) which come together to create a fitness score at the surface level, while still exposing the individual metrics in the API.
gkorland · a year ago
It looks great! Utilizing a knowledge graph to store long-term memory is probably the most accurate solution compared to using only a vector store (same as with GraphRAG vs. vector RAG). One important thing to point out here: unlike RAG, long-term memory doesn't represent organizational knowledge but the chat history, which should be private to the end user and kept in a graph completely isolated from the rest of the users.
kaybi · a year ago
How does Mem0 handle the potential for outdated or irrelevant memories over time? Is there a mechanism for "forgetting" or deprioritizing older information that may no longer be applicable?
staranjeet · a year ago
Mem0 currently handles outdated or irrelevant memories by:

1. Automatically deprioritizing older memories when new, contradictory information is added.
2. Adjusting memory relevance based on changing contexts.

We're working on improving this system to give developers more control. Future plans include:

1. Time-based decay of unused memories
2. Customizable relevance scoring
3. Manual removal options for obsolete information

These improvements aim to create a more flexible "forgetting" mechanism, allowing AI applications to maintain up-to-date and relevant knowledge bases over time.
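The time-based decay idea could be sketched roughly like this; the half-life and cutoff are made-up values for illustration, not Mem0's actual mechanism:

```python
def decayed_weight(age_days: float, half_life_days: float = 30.0) -> float:
    """Weight of an unused memory, halving every `half_life_days`."""
    return 0.5 ** (age_days / half_life_days)

def prune(memories, cutoff=0.1):
    """Drop memories whose decayed weight has fallen below the cutoff."""
    return [m for m in memories if decayed_weight(m["age_days"]) >= cutoff]

memories = [
    {"text": "prefers window seats", "age_days": 5},
    {"text": "old shipping address", "age_days": 200},
]
kept = prune(memories)  # the 200-day-old memory falls below the cutoff
```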

We're open to user feedback on how to best implement these features in practical applications.

asaddhamani · a year ago
I believe AI memory is a very important problem to solve. Our AI tools should get better and more personalised over time.

(I hope it's ok to share something I've built along a similar vein here.)

I wanted to get long-term memory with Claude, and as different tools excel at different use cases, I wanted to share this memory across the different tools.

So I created MemoryPlugin (https://www.memoryplugin.com). It's a very simple tool that provides your AI tools with a list of memories, and instructs them on how to add new memories. It's available as a Chrome extension that works with ChatGPT, Claude, Gemini, and LibreChat, a Custom GPT for ChatGPT on mobile, and a plugin for TypingMind. Think of it as the ChatGPT memory feature, but for all your AI tools, and your memories aren't locked into any one tool but shared across all of them.

This is meant for end-users instead of developers looking to add long-term memory to their own apps.

yding · a year ago
Congrats Taranjeet and Deshraj!

So after using Mem0 a bit for a hackathon project, I have sort of two thoughts:

1. Memory is extremely useful and almost a requirement when it comes to building next-level agents, and Mem0 is probably the best-designed/easiest way to get there.
2. I think the interface between structured and unstructured memory still needs some thinking.

What I mean by that is when I look at the memory feature of OpenAI it's obviously completely unstructured, free form text, and that makes sense when it's a general use product.

At the same time, for the more vertical-specific use cases I've seen up until now, there are generally very specific things we want to remember about our customers (for example, for advertising: age range, location, etc.). However, as the use of LLMs in chatbots increases, we may also want to remember less structured details.

So the killer app here would be something that can remember and synthesize both structured and unstructured information about the user in a way that's natural for a developer.

I think the graph integration is a step in this direction but still more on the unstructured side for now. Look forward to seeing how it develops.
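One rough sketch of such a structured-plus-unstructured interface; the field names and routing rule here are hypothetical, not anything Mem0 ships:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UserMemory:
    # Structured slots a vertical might care about (e.g. for advertising).
    age_range: Optional[str] = None
    location: Optional[str] = None
    # Unstructured, free-form details extracted from chat.
    notes: list = field(default_factory=list)

    def remember(self, key: str, value: str):
        """Route a detail into a structured slot if one exists, else into notes."""
        if key != "notes" and hasattr(self, key):
            setattr(self, key, value)
        else:
            self.notes.append(f"{key}: {value}")

mem = UserMemory()
mem.remember("location", "New Orleans")          # fills the structured slot
mem.remember("hobby", "collects vinyl records")  # no slot, falls through to notes
```

The point of the sketch is the single `remember` entry point: the developer works with one natural API while structured and unstructured details land in the right place.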

deshraj · a year ago
Thanks yding! Definitely agree with the feedback here. We have seen similar things when talking to developers where they want:

- Control over what to remember/forget
- Ability to set how detailed memories should be (some want more detailed vs. less detailed)
- Different structure of the memories based on the use case

hammer_ai · a year ago
Looks nice. Would the open-source version work from within an Electron app used for local LLM chats? I.e. can I run the memory management, retrieval, and database locally in javascript?

I believe the answer is "no, you can only run the memory management code in Python, the javascript code is only a client SDK for interacting with the managed solution". In which case, no worries, still looks awesome!

hchua · a year ago
Check out https://github.com/Airstrip-AI/mem0-rest.

Disclaimer: I built it.

Context: We are using mem0 in another open-source project of ours (Typescript) and had the same questions. So we went ahead and built a small api server for ourselves.