Results summary: the baseline heuristic policy achieves a 42% success rate on FetchPush-v4. With memory augmentation (recalling past experiences before each episode), it reaches 67%, a +25pp improvement. Cross-environment transfer from FetchPush to FetchSlide adds +8pp over baseline.
The API has 7 endpoints; the core loop uses these:
- learn(insight, context) — store what worked (or what failed)
- recall(query) — retrieve relevant past experiences, ranked by text + vector + spatial similarity
- save_perception(data) — store raw trajectories/forces
- start_session / end_session — episode lifecycle with auto-consolidation
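The core loop above can be sketched in a few lines. Only the endpoint names (learn, recall, save_perception, start_session, end_session) come from the list; the signatures, the ToyMemory class, and the substring-based recall below are illustrative assumptions standing in for robotmem's actual text + vector + spatial ranking, not the library's real implementation:

```python
# Hypothetical sketch of the core loop; real robotmem signatures may differ.
# SQLite-backed, as the library's storage is described, but the schema and
# ranking here are toy stand-ins.
import json
import sqlite3


class ToyMemory:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS insights (insight TEXT, context TEXT)")
        self.session = None

    def start_session(self, name):
        self.session = name

    def learn(self, insight, context):
        # Store what worked (or failed) along with its context.
        self.db.execute("INSERT INTO insights VALUES (?, ?)",
                        (insight, json.dumps(context)))

    def recall(self, query):
        # Toy stand-in for text + vector + spatial ranking:
        # case-insensitive substring match.
        rows = self.db.execute("SELECT insight FROM insights").fetchall()
        return [r[0] for r in rows if query.lower() in r[0].lower()]

    def end_session(self):
        # The real library auto-consolidates on session end; the toy commits.
        self.db.commit()
        self.session = None


mem = ToyMemory()
mem.start_session("push-episode-1")
mem.learn("Pushing from behind the block works", {"env": "FetchPush-v4"})
mem.end_session()

mem.start_session("push-episode-2")
# Recall relevant past experience before acting in the new episode.
print(mem.recall("push"))  # → ['Pushing from behind the block works']
mem.end_session()
```

The point of the loop is that recall happens at the start of each episode, so insights learned in one session (or one environment) are available in the next.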
Everything runs locally on SQLite. No cloud, no GPU. Works via MCP (Model Context Protocol) or as a direct Python import.
Install with `pip install robotmem`; the quick demo runs in 2 minutes.