I think if there were a tightly integrated framework for managing the state of all of these various triggers, views, functions, and stored procedures through source control, and for integrating them into the normal SDLC, it would be a more appealing sell for complex projects.
You can then take it a step further by opting in to Branching [2] to better manage environments. We just opened up the Branching feature to everyone [3].
[1]: https://supabase.com/docs/guides/cli/local-development#datab... [2]: https://supabase.com/docs/guides/platform/branching [3]: https://supabase.com/blog/branching-publicly-available
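For concreteness, here is a minimal sketch of what that workflow looks like: a trigger and its function live in a plain SQL migration file that is committed with the rest of the codebase (the file name, table, and column here are hypothetical):

    -- supabase/migrations/20240101000000_add_updated_at_trigger.sql
    -- keeps an updated_at column current on every write to profiles
    create or replace function public.set_updated_at()
    returns trigger
    language plpgsql
    as $$
    begin
      new.updated_at = now();
      return new;
    end;
    $$;

    create trigger trg_set_updated_at
      before update on public.profiles
      for each row
      execute function public.set_updated_at();

Since the trigger is just a versioned file, it goes through review and CI like any other code, and a preview branch can apply it before it ever reaches production.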
For a lot of simple use cases, 30% is enough - the most common GET and PUT operations and so on. But all it takes is one unsupported call in your desired workflow to rule out that vendor as an option until said API is supported. My main beef is that there's usually no easy way to tell, unless the vendor provides a support matrix that you can map against the operations you need, like this one: https://docs.storj.io/dcs/api/s3/s3-compatibility. If no such matrix exists for both the client side and the server side, you have no way of knowing whether it will even work short of wiring everything up and actually executing the code.
One thing to note is that it's quite unrealistic for vendors to strive for 100% compatibility - there is some AWS-specific stuff in the API that will basically never be relevant to anyone other than AWS. But the current Wild West situation could stand some significant improvement.
We are transparent about our level of compatibility - https://supabase.com/docs/guides/storage/s3/compatibility
The most commonly used APIs are covered, but if something is missing, let me know!
Then for GDPR, when you delete a user, the associated storage can be deleted.
One could cobble this together with triggers, some kind of external process, and probably repetitious code so that there is one metadata table per "owning" id, although it would be nice for this to be packaged up.
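A rough sketch of that trigger-plus-external-process idea (all table and column names here are hypothetical): a delete on the users table enqueues the user's object keys, and a separate worker drains the queue and issues the actual deletes against the bucket.

    -- assumed schema: file_metadata(owner_id, object_key), deletion_queue(object_key)
    create or replace function public.enqueue_user_objects()
    returns trigger
    language plpgsql
    as $$
    begin
      -- stash the departing user's object keys for an external cleanup worker
      insert into public.deletion_queue (object_key)
      select object_key from public.file_metadata
      where owner_id = old.id;

      delete from public.file_metadata where owner_id = old.id;
      return old;
    end;
    $$;

    create trigger trg_user_gdpr_cleanup
      before delete on public.users
      for each row
      execute function public.enqueue_user_objects();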
The source of truth also matters here - whether it's the database or the underlying S3 bucket. I think making the underlying storage bucket the source of truth would be more useful. In that scenario we would sync the metadata in the database to match what's actually being stored, and if we notice that an object's metadata is missing, we add it in, as opposed to deleting the object in storage. This would make it easier for you to bring your own S3 bucket with existing data and attach it to Supabase Storage.
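As a sketch of what that reconciliation could look like (assuming an external job has already listed the bucket into a staging table - every name here is hypothetical), the sync only ever adds missing metadata and never deletes objects:

    -- staging_objects is populated from an S3 ListObjectsV2 pass by an external job
    insert into public.file_metadata (object_key, synced_at)
    select s.object_key, now()
    from public.staging_objects s
    left join public.file_metadata m on m.object_key = s.object_key
    where m.object_key is null;  -- present in the bucket, missing from the database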
I wish they would offer a plan with just the pg database.
Any news on pricing of Fly PG?
We are actively working on our Fly integration. To start, pricing is going to be exactly the same as on our hosted AWS platform - https://supabase.com/docs/guides/platform/fly-postgres#prici...
Supabase is the Postgres Development Platform, and we are looking for Product Managers and Technical Program Managers. You will be working with very strong Product Engineers across a wide variety of products (Postgres, Realtime, Storage, Queues, etc.). If you enjoy working on developer tools and like to get your hands dirty, check out our open product roles:
- Product Manager https://jobs.ashbyhq.com/supabase/74542052-f648-48fb-a8fe-a8...
- Technical Program Manager https://jobs.ashbyhq.com/supabase/b83c7316-77ce-49a8-a199-9f...
We are also hiring for other engineering and growth roles - https://supabase.com/careers