My question is, what’s the value of explicitly storing semantics as triples when the LLM can infer the semantics at runtime?
- Efficient, precise retrieval through graph traversal patterns that flat text simply can't match ("find all X related to Y through relationship Z")
- Algorithmic contradiction detection by matching subject-predicate pairs across time, which LLMs struggle with across distant contexts
- Our goal is also to make the assistant more proactive; triplets make pattern recognition easier and more effective
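To make the traversal and contradiction points concrete, here's a minimal sketch (all names hypothetical, not the actual implementation) of how explicit (subject, predicate, object) triples support exact queries and subject-predicate conflict checks that flat text retrieval can't do directly:

```python
from collections import defaultdict

class TripleStore:
    """Toy triple store illustrating traversal and contradiction detection."""

    def __init__(self):
        self.triples = []               # (subject, predicate, object, time)
        self.by_sp = defaultdict(list)  # (subject, predicate) -> [(object, time)]

    def add(self, subj, pred, obj, t):
        # Contradiction detection: same subject-predicate pair already
        # asserted with a different object at an earlier time.
        conflicts = [(o, old_t) for o, old_t in self.by_sp[(subj, pred)] if o != obj]
        self.triples.append((subj, pred, obj, t))
        self.by_sp[(subj, pred)].append((obj, t))
        return conflicts

    def related(self, subj, pred):
        # "Find all X related to subj through relationship pred" —
        # an exact traversal, not a similarity search.
        return [o for s, p, o, _ in self.triples if s == subj and p == pred]

store = TripleStore()
store.add("alice", "works_at", "AcmeCorp", t=1)
store.add("alice", "uses", "Postgres", t=2)
conflict = store.add("alice", "works_at", "Initech", t=3)

print(store.related("alice", "uses"))  # → ['Postgres']
print(conflict)                        # → [('AcmeCorp', 1)]
```

The key point is that the conflict surfaces algorithmically at write time by indexing on the (subject, predicate) pair, rather than hoping the LLM notices two distant statements disagree.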
What do you think about these?
You can check our README for how to use the MCP server: https://github.com/RedPlanetHQ/core/blob/main/README.md