It's not conceptually a knowledge graph in the same way, but you can introspect essentially everything about your application. Resources can be given data layers that define how they map to underlying storage, and you can treat all of this information as static metadata to derive additional things from, or you can just...well, use it, e.g. `Ash.read(Resource)` yielding the table data. Our query engine has the same semantics they describe, where you don't write explicit joins:
```elixir
MyApp.Post
|> Ash.Query.filter(author.type == :admin)
|> Ash.read!()
```
You can generate charts and graphs, including things like policy flow charts.
---
Ultimately I've found that modeling tools like UML that can't also execute the model (i.e. act as the application itself) are always insufficient and/or have massive impedance mismatches once the rubber meets the road. The point is to effectively reimagine this as "what if we applied these modeling principles, declaratively, from the ground up".
Edit: The more I read this article the more I hear this voice https://www.youtube.com/watch?v=y8OnoxKotPQ
The core problem seems to be development in isolation. Put another way: microservices. This post hints at microservices having complete autonomy over their data storage and developing their own GraphQL models. The first is normal for microservices (but an indictment at the same time). The second is... weird.
The whole point of GraphQL is to create a unified view of something, not to have 23 different versions of "Movie". Attributes are optional. Pull what you need. Common subsets of data can be organized in fragments. If you're not doing that, why are you using GraphQL?
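For instance, a common subset of "Movie" fields can live in one shared fragment that every consumer spreads into its own query, pulling only the extra fields it needs. A minimal sketch (the field names here are illustrative, not any real Netflix schema):

```graphql
# Shared fragment capturing the common subset of Movie fields.
fragment MovieCore on Movie {
  id
  title
  releaseYear
}

# One consumer's query: the shared core plus the one extra
# field this particular UI actually needs.
query BrowseRow {
  popularMovies {
    ...MovieCore
    boxArtUrl
  }
}
```

One `Movie` type, one fragment, N consumers — rather than N copies of the model.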
So I worked at Facebook and may be a bit biased here because I encountered a couple of ex-Netflix engineers in my time who basically wanted to throw away FB's internal infrastructure and reinvent Netflix microservices.
Anyway, at FB there is a Video GraphQL object. There aren't 23 or 7 or even 2.
Data storage for most things was via a write-through in-memory graph database called TAO that persisted things to sharded MySQL servers. On top of this, you'd use EntQL to add a bunch of behavior to TAO, like permissions, privacy policies, and observers. And again, there was one Video entity. There were offline data pipelines that would generally process logging data (i.e. outside TAO).
Maybe someone more experienced with microservices can speak to this: does UDA make sense? Is it solving an actual problem? Or just a self-created problem?
GraphQL is great at federating APIs, and is a standardized API protocol. It is not a data modeling language. We actually tried really hard with GraphQL first.
I guess in their world they’d add a new model for whatever they want to change and then phase out use of the old one before removing it.
Versioning is permission to break things.
Although it is not implemented in UDA yet, the plan is to embrace the same model as federated GraphQL, which has proven to work very well for us (think 500+ federated GraphQL schemas). In a nutshell, UDA will actively manage deprecation cycles, since we have the ability to track the consumers of the projected models.
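Federated GraphQL already has a standard mechanism for this kind of managed deprecation: the `@deprecated` directive from the GraphQL spec, combined with field-level usage tracking to know when the last consumer has migrated. An illustrative schema fragment (entity and field names are made up):

```graphql
# A federated entity (Apollo Federation's @key directive).
type Movie @key(fields: "id") {
  id: ID!
  title: String!
  # Old field kept alive while tracked consumers migrate off it;
  # it can be removed once usage metrics show zero callers.
  runtime: Int @deprecated(reason: "Use runtimeSeconds instead.")
  runtimeSeconds: Int
}
```

The deprecation reason surfaces in introspection and tooling, so consumers see the migration path without the producer having to break anything on a schedule.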