Exa, Parallel, and a whole bunch of companies doing information retrieval under the "agent memory" category belong in this discussion.
The other option is to evolve static Python into such a language. Looking forward to the PEP that proposes DSLs in Python.
Wouldn't it be good if recursive Leiden and Cypher were built into an embedded DB?
That's what I'm looking into with mcp-server-ladybug.
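A rough sketch of the idea, independent of any particular server: pull an edge list out of an embedded graph DB with Cypher (the kuzu Python package here; the Entity/RELATES_TO schema and the DB path are made up) and run Leiden recursively on the result with python-igraph. This is only an illustration of "recursive Leiden over Cypher results", not how mcp-server-ladybug actually does it.

    import igraph as ig
    import kuzu

    db = kuzu.Database("./world_model_db")   # hypothetical database path
    conn = kuzu.Connection(db)

    # Hypothetical schema: Entity nodes connected by RELATES_TO edges.
    edges = []
    result = conn.execute(
        "MATCH (a:Entity)-[:RELATES_TO]->(b:Entity) RETURN a.id, b.id")
    while result.has_next():
        edges.append(tuple(result.get_next()))

    def recursive_leiden(graph, min_size=10, depth=0, max_depth=3):
        # Run Leiden once, then recurse into each community that is still large.
        clustering = graph.community_leiden(objective_function="modularity")
        tree = []
        for sub in clustering.subgraphs():
            if depth < max_depth and sub.vcount() > min_size:
                tree.append(recursive_leiden(sub, min_size, depth + 1, max_depth))
            else:
                tree.append([v["name"] for v in sub.vs])
        return tree

    g = ig.Graph.TupleList(edges, directed=False)  # vertex "name" comes from the ids
    hierarchy = recursive_leiden(g)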
This made sense for product catalogs, employee/department records, and e-commerce types of use cases.
But it's an extremely poor fit for storing a world model that LLMs are building in an opaque and probabilistic way.
Prediction: a new data model will take over in the next 5 years. It might use some principles from many decades of relational DBs, but will also be different in fundamental ways.
GQL is like SQL, meant for queries.
GraphQL is more for REST-style APIs?
https://www.tigergraph.com/glossary/cypher-query-language/
https://www.tigergraph.com/blog/the-rise-of-gql-a-new-iso-st...
Has some history behind it.
Syntax and some queries:
https://github.com/opengql/grammar/tree/main/samples
The full specification costs about $270.
But the other graph query language "Cypher" always seemed a lot more intuitive to me.
Are they really trying to solve such different problems? Cypher seems much more flexible.
GraphQL was designed to add types and remote data fetching abstractions to a large existing PHP server-side code base. Cypher is designed to work closer to storage, although there are many implementations that run Cypher on top of anything ("table functions" in Ladybug).
Neo4j's implementation of Cypher didn't emphasize types: you had a relatively schemaless design that made it easy to get started. But the Kuzu/Ladybug implementation of Cypher is closer to DuckDB SQL.
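To make the contrast concrete, here is roughly what the typed side looks like through the kuzu Python package (the Person/Knows schema is invented for illustration, and the renamed Ladybug project may ship under a different import name):

    import kuzu

    db = kuzu.Database("./typed_demo")   # hypothetical database path
    conn = kuzu.Connection(db)

    # Kuzu/Ladybug style: the schema is declared up front, much like DuckDB SQL.
    conn.execute("CREATE NODE TABLE Person(name STRING, age INT64, PRIMARY KEY (name))")
    conn.execute("CREATE REL TABLE Knows(FROM Person TO Person, since INT64)")

    # Neo4j-style usage would skip the DDL and just CREATE nodes with whatever
    # properties happen to be handy; here every Person must match the table.
    conn.execute("CREATE (:Person {name: 'Ada', age: 36})")
    conn.execute("CREATE (:Person {name: 'Grace', age: 45})")
    conn.execute(
        "MATCH (a:Person {name: 'Ada'}), (b:Person {name: 'Grace'}) "
        "CREATE (a)-[:Knows {since: 1952}]->(b)")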
They both have their places in computing as long as we have terminology that's clear and unambiguous.
Look at the number of comments in this story that refer to GraphQL as GQL (GQL being an ISO standard).
Store your graphs in Parquet files on object storage or in DuckDB files and query them with strongly typed Cypher. The engine uses advanced factorized join algorithms (details in a VLDB 2023 paper, from when it was still called Kuzu); a sketch follows below.
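A minimal sketch with the kuzu Python package (file names and the Doc/Cites schema are placeholders): declare typed tables, bulk-load them from Parquet, and run a multi-hop Cypher pattern of the kind the factorized join work targets.

    import kuzu

    db = kuzu.Database("./graph_db")   # hypothetical database path
    conn = kuzu.Connection(db)

    conn.execute("CREATE NODE TABLE Doc(id INT64, title STRING, PRIMARY KEY (id))")
    conn.execute("CREATE REL TABLE Cites(FROM Doc TO Doc)")

    # Bulk-load node and edge tables straight from Parquet files.
    conn.execute("COPY Doc FROM 'docs.parquet'")
    conn.execute("COPY Cites FROM 'cites.parquet'")

    # Multi-hop patterns like this are where factorized joins pay off.
    result = conn.execute(
        "MATCH (a:Doc)-[:Cites]->(b:Doc)-[:Cites]->(c:Doc) "
        "RETURN a.title, c.title LIMIT 10")
    while result.has_next():
        print(result.get_next())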
Looking to serve externalized knowledge with small language models using this infra. Watch Andrej Karpathy's Cognitive Core podcasts for more details.
https://www.gqlstandards.org/ is an ISO standard. The graph database people probably don't love the search engine results they get when they go looking for it.
I maintain a graph database where support for GQL often comes up.
Is anyone using it in prod, even given its beta status?