I'd also recommend reading up on the awesome pg statistics views, and leveraging them to benchmark things like index performance and macro call speeds.
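A rough sketch of the kind of queries I mean (note that `pg_stat_user_functions` only fills up if function tracking is turned on):

```sql
-- Per-function call counts and timings; requires track_functions = 'all'
-- (or 'pl') in postgresql.conf, otherwise this view stays empty.
SELECT funcname, calls, total_time, self_time
FROM pg_stat_user_functions
ORDER BY total_time DESC;

-- Index usage: how often each index is scanned and how many rows it fetches.
SELECT relname, indexrelname, idx_scan, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
```

An index with `idx_scan = 0` after a while of real traffic is a good candidate for dropping.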
Great questions:
1. We currently don't support multi-dimensional arrays, but we plan to add support for such complex data structures.
2. Would you be able to share what type of user-defined functions these are — do they modify the data or just read it?
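For context, a minimal example of the kind of column in question, in stock Postgres (hypothetical table):

```sql
-- A 2-D integer array column in plain Postgres.
CREATE TABLE grids (id int, cells int[][]);
INSERT INTO grids VALUES (1, '{{1,2},{3,4}}');

-- Subscripts are 1-based; this reads the first element of the second row.
SELECT cells[2][1] FROM grids;
```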
A couple questions, if you have time:
1. How do you guys handle multi-dimensional arrays? I've had issues with a few postgres-facing interfaces (libraries or middleware) that assume everything is a 1-D array!
2. I saw you are using pg_duckdb/duckdb under the hood. I've had issues calling plain-SQL functions defined on the postgres server, when duckdb is involved. Does BemiDB support them?
Thanks for sharing, and good luck with it!
I think it was used on nVidia Tegra systems, maybe? I'd be interested to find it again, if anyone knows. :)
I've not seen this approach documented much online - but it works really well for me. It has the advantage of keeping all my tables flat, while still being able to encode business logic into the database.
A typical query I might type would look something like:
`SELECT trip_start_date(id) FROM trips WHERE trip2region(id) = 'AU'`
Where the two macros are just simple table joins, saving me the boilerplate work. Where this approach has really shined, though, is through function composition...
`SELECT id FROM trips WHERE loc2region(trip_origin(id)) NOT IN loc2region(location_is_hotel(trip_locations_visited(id)))`

So something like the above, which is a mix of scalar and table functions, would give me all the trips where the traveler stayed in hotels outside of their home region. Maybe not the best example, because I don't actually work with trip/hotel/travel data at all, but I'd be interested in reading more about this approach. I was surprised how well-optimized the queries are.
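For anyone curious, a sketch of how one of these "macros" can be defined in plain Postgres, assuming a hypothetical `regions` lookup table (the table and column names are made up). Simple `LANGUAGE sql` functions like this can be inlined by the planner, which is likely why the queries optimize so well:

```sql
-- Hypothetical schema: trips(id, region_id), regions(id, code).
-- The "macro" is just a plain SQL function wrapping the join;
-- Postgres can inline it into the calling query.
CREATE FUNCTION trip2region(trip_id int) RETURNS text
LANGUAGE sql STABLE AS $$
    SELECT r.code
    FROM trips t
    JOIN regions r ON r.id = t.region_id
    WHERE t.id = trip_id
$$;

-- Usage, as in the example above:
-- SELECT trip_start_date(id) FROM trips WHERE trip2region(id) = 'AU';
```

Marking it `STABLE` (rather than the default `VOLATILE`) is what allows the planner to fold it into the surrounding query instead of calling it row by row.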