While in EA (early access), there's no additional cost for Prisma Postgres; you're effectively getting a free Postgres db.
The pricing you are citing is for Accelerate, which does advanced connection pooling and query caching across 300 POPs globally.
All that being said, we'll def address the "can you pls make Prisma Postgres pricing simpler to grok?" question before we GA this thing. Thanks for the feedback!
I think from my perspective there’s some cognitive dissonance when I imagine myself getting the free plan with a cool 60k queries included, then switching to the paid plan, and finding that I still have a cool 60k queries included. Wat?
I rationally realize there is a price/usage point where this makes sense, but emotionally it doesn’t feel good.
When I looked at your pricing, these were my thoughts:
1) 60k queries? I burn through that in an hour. All it takes is the Google bot and some shitty AI scraper to come along and crawl our site - which happens every single day.
2) $18 per million? I don't know how many queries I need per day at the moment, but given 1), I will surely burn through a million dozens of times per month...
...at which point this thing will be just as expensive as an RDS instance on AWS, potentially even more so if we hit traffic peaks (every single user causes hundreds of queries, if not thousands).
3) I don't even understand how to interpret the egress cost. No idea how to predict what the pricing will be. Maybe a calculator where we can slot in the relevant estimated values would be nice?
I feel like that is always the crux of these solutions.
For example, I used Aurora Serverless v2 in a deployment, and eventually it just made sense to use a reserved instance because the fee structure doesn't make sense.
If I actually scale my app on these infrastructures, I pay way more. I feel it's only great for products that _aren't_ successful.
I always thought serverless meant you could scale out AND lower the cost. It always seems to turn out that serverless is more expensive the more you use it. I guess at a certain volume, a serverless instance is meaningless since it’s always on anyway.
It's $8 per million queries, which really is OK. The baseline of $49 is the price of the general Prisma services. For a cloud-based production workload this is on the low end, and if you work on a product, salaries are always the number one cost.
Indeed, as others have mentioned, you get 60k queries for free! Don't even need to add a card. Beyond that, you pay for the usage you have (primarily by number of queries).
The $49 Pro plan you mentioned gives you additional features, such as more projects, higher query limits, and a lower $$ price per million queries.
On the Starter plan though, you can get going for absolutely free, incl. those 60k queries, and only pay for the queries above that.
We are also working on making this simpler to understand. We want to make sure our pricing is as easy to grok and as affordable as possible. Keep an eye out for improvements as we get to GA!
But I totally agree with your overall statement. The premium for hosted DBs is quite high despite the competition.
Usually, if you want hardware to handle real-world production data volumes (not 1 vCPU and 512MB, but more like 4 vCPUs and 8GB), you are very soon around $200 to $300. A VPS of that size is around $15?
The hosted solutions are just so damn easy to get started.
They need to cover the dev costs and any bloating their organization might have (no idea about Prisma but lots of these startups are bloated). Eventually, the tech will get democratized and the costs will come down.
Yeah, the whole "but you need to do maintenance" aspect of using a real server is overblown.
OSes are pretty stable these days, and you can containerize on your server to keep environments separate and easy to duplicate.
I guess it just comes with experience, but at the same time, the devops skill set needed for dealing with serverless stuff is also totally out of this world. At most places I've worked, marketing hasn't even launched a campaign, there's no product validation of how much traffic you'll get, and you're optimizing for all this scale that's never going to happen.
Agreed that existing serverless stacks like Lambda are a nightmare. But the real problem is that they don't solve the state management problem. (You need Step Functions to compose lambdas AND you need a bunch of custom logic for recovery in case lambdas crash/pause/time out.)
I do hope tomorrow's engineers won't have to learn devops to use the cloud. My team works on what we think is the better way to do serverless, check it out! https://dbos.dev/
> If this is close to cost, is this truly as efficient as it gets?
I saw them hiring Rust devs recently, which makes me feel like they do things efficiently (hopefully). That being said, serverless is the greed-driven model: you start by thinking, "meh, we don't need that many queries/executions/whatever anyway, we'll save plenty of moolah we'd otherwise waste renting a reserved instance that sits idle most of the time". Then something bad happens, you overrun the bill, and you end up in "sh+t, we need to always rent that higher tier or we risk going bankrupt" territory. And since your stuff is already built, you can no longer change it without another big rewrite and the fear of breaking things.
Switching out a Postgres provider surely is on the easier side of things to migrate?
With most of these serverless providers, there's no technological lock-in I'm aware of; it's all just Postgres features paired with DevOps convenience.
What size dataset can you fit on that $5 VPS where it handles those queries in reasonable time? Serious question: all the $5 VPSes I've seen are too low-spec to get anywhere with. E.g. a DigitalOcean $6/mo VPS gets you a single measly gig of RAM and a 25 GiB SSD. Without being more explicit about the "there's more included" part, is a $5 VPS really even a valid point of comparison?
I don't know why people ever buy plane tickets, walking's free.
1 million requests in a month is ~0.4 requests per second.
With the Prisma pricing, $1k gets you up to a 48 req/s load average, and that's without the geo balancing. For a little more you can get a dedicated Postgres instance with 128GB memory and 1TB+ of disk on DO that would definitely handle orders of magnitude more load.
Of course there are a bunch of trade-offs, but as the original poster said the gap is pretty wide/wild.
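Rough back-of-envelope math behind those figures (a sketch only; the ~$8-per-million rate quoted elsewhere in the thread is an assumption, and free tiers, fixed plan fees, and egress are ignored):

```typescript
// Back-of-envelope: sustained request rate -> queries per month -> usage cost.
// Assumes ~$8 per million queries; ignores free tiers, fixed plan fees, and egress.
const SECONDS_PER_MONTH = 60 * 60 * 24 * 30;

const monthlyQueries = (requestsPerSecond: number) =>
  requestsPerSecond * SECONDS_PER_MONTH;

const monthlyUsageCost = (requestsPerSecond: number, dollarsPerMillion = 8) =>
  (monthlyQueries(requestsPerSecond) / 1_000_000) * dollarsPerMillion;

console.log(monthlyQueries(0.4).toLocaleString()); // ~1,036,800 -> ~1M queries/month really is ~0.4 req/s
console.log(monthlyUsageCost(48).toFixed(0));      // ~995 -> roughly $1k/month at a sustained 48 req/s
```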
Anything with indexes will be completely fine. Hell, your little instance can probably do hundreds of primary key lookups every second. How fast would you burn through your query allowance on Prisma with that?
The point is that when I buy managed postgres, the thing I expect to be paying for is, well, postgres. Not a bunch of geo load balancing that I’m never going to need.
That’s why the comparison is with the thing that actually does what I want.
On Hetzner it gets you at least 4 cores, 8GB RAM, and 80GB local SSD. For $49 you can almost get a dedicated server with 8 cores and 64GB RAM. More than enough to handle that load. Edit: this is for $8, but the general point still stands.
The Prisma engine is written in Rust (and the original product was written in Scala), so your snide comment is actually a bit inaccurate. You've also ironically failed to spell JavaScript using the correct casing.
I'm not super well versed in this domain, but I believe Postgres columns need to be wrapped in double quotes to respect case, or else they're all folded to lowercase, or something along those lines?
I can't speak much in detail, but maybe the following will paint you a picture.
I did contract work for a large international financial institution, known for being "one of the big N" (N<5). Lots of data/backend/db work, in several languages/stacks. Then a new style/naming convention for databases got pushed, by middle/higher management. It included identifiers in both camel-case and pascal-case. It was clearly "designed" by somebody with a programming background in languages that use similar conventions.
I could see there would be trouble ahead, because databases have (often implicit) naming conventions of their own. Not without reason: they have been adopted (or "discovered") by more seasoned database engineers, first and foremost because they cause the fewest interoperability issues. Often it is technically possible to deviate from them (your db vendor XYZ might support it), but the trouble typically doesn't emerge at the database level itself. Instead it is the tooling and programming languages/frameworks on top of it where things start to fall apart when you deviate from the conventional wisdom of database naming conventions.
That also happened with that client. It turned out that the two major languages/frameworks/stacks they used for all their in-house projects (as well as many external products/services) fell apart on incompatibilities with the new styling/naming conventions. All internal issues, with undocumented details (lots of low-level debugging to even find the issues). I had predicted it beforehand, saw it coming, reported it, but got ignored. Not long after, I was "let go". Maybe because of tightened budgets, maybe because several projects hit a wall (not going anywhere, in large part because of the above-mentioned f#-up). I'm sure the person who originally caused the situation still got royally paid, bonuses included, regardless.
Anyway, the moral of the story is this: even if you technically could deviate from well-established database naming conventions, you can get yourself into a world of hurt if you do. Even if it appears to resolve naming inconsistencies with your programming language of choice.
This has nothing to do with Windows. EF Core just happens to be a best-in-class ORM, unlike the "ORMs" on other platforms which give the concept a bad rep.
The camel-casing makes life hard because in any manual query you write, you need to quote the names AND uppercase certain letters; on top of that, it's just inconsistent.
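To make that concrete, here's a minimal sketch of what the quoting looks like through Prisma's raw-query escape hatch; the "User"/"createdAt" identifiers are hypothetical, and it simply assumes a camelCased schema:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Postgres folds unquoted identifiers to lowercase, so a camelCased schema means
// every hand-written query has to double-quote (and correctly case) each name.
const recent = await prisma.$queryRaw`
  SELECT "id", "createdAt"
  FROM "User"
  WHERE "createdAt" > now() - interval '1 day'
`;

// Written unquoted (SELECT createdAt FROM User), Postgres would look for a
// column "createdat" on a table "user" and error out.
console.log(recent);
```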
I was a big promoter of Prisma but can no longer recommend it. They built something really cool but have basically abandoned it, with major issues languishing for years without attention.
I guess they were busy working on this instead... for now.
Did you know it’s impossible to set a statement timeout for your DB connections in Prisma? There’s no hook to run commands when a connection in a pool is established, and there’s no exposed setting for it. The only way to manage it is either to set it at the user level or to set up whatever they call a middleware layer (client extension?) that issues the command to set the timeout before every single query.
An engine that doesn’t allow you to set per-connection settings effectively is pretty crazy IMO.
It's possible now with the built-in "db adapter" plugin. I also have lots of misgivings about the Prisma ORM, but this particular thing is possible now.
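For illustration, a minimal sketch of that approach, assuming the driverAdapters preview feature and the node-postgres adapter (the adapter API has shifted between Prisma versions, so treat this as a sketch rather than the definitive setup):

```typescript
import { Pool } from "pg";
import { PrismaPg } from "@prisma/adapter-pg";
import { PrismaClient } from "@prisma/client";

// node-postgres applies statement_timeout on every connection it opens, so each
// connection Prisma checks out of this pool already has the limit in place.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  statement_timeout: 5_000, // ms
});

// Assumes previewFeatures = ["driverAdapters"] is enabled in the Prisma schema.
const adapter = new PrismaPg(pool);
const prisma = new PrismaClient({ adapter });
```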
One of the biggest things that surprised me is that there's a binary written in Rust that listens for your queries and then passes them to the underlying database.
That's an unnecessary moving part IMHO. Is it still the case, or have they changed their architecture recently?
There were ample reasons in the past why going down this path made architectural sense, the primary one being multi-language support. Since then, TS and JS have found their way to the top of "the programming langs of choice" charts, so we're digging into removing the Rust-based components and making them optional. We'll share more on this in the coming weeks.
The challenge, as we see it, is not that we didn't address open issues (we've put out a release every 3 weeks for the past 4+ years!), the challenge instead is that we didn't explain how we pick the issues that we do to work on. That is being addressed and you'll soon see us share our process transparently.
With close to 400K monthly active developers using our library and over 9M monthly downloads on NPM, one can imagine that the issues keep piling up!
Honestly, I hated Prisma for a while. I've tried to actively rip it out of multiple projects. But typed queries + views being supported have really started to change my mind. Prisma is great for basic CRUD operations, and those two features give me a really solid, type-safe escape hatch for most of my complaints.
Dropping the Rust client will solve another big complaint. I definitely feel the issues-languishing problem: I've submitted a few confirmed, reproducible bugs that have hung out for a couple of years. Still, I'm happier with their recent direction than I would have expected.
It's not every day you get to launch a hosted Postgres service that has something fundamentally new to offer. That's what we have done with Prisma Postgres, and I'm incredibly excited for it.
We are using Firecracker and unikernels to deliver true scale-to-zero without cold starts. Happy to go into more detail if anyone is interested.
You omitted another fairly well-known serverless Postgres provider which also does that (scale to zero and no cold start):
Nile (thenile.dev).
Not affiliated in any way with Nile, just a happy user.
I find their pricing much easier to reason about and plan for; your pricing page, by contrast, I found super cluttered and hard to parse.
Competition in the serverless Postgres space is always welcome from a customer perspective, but my gripe is currently a) bundling with Prisma - I might not want to use your tool and b) cluttered pricing.
The point re: pricing explanation is well taken. We've already done a revision and will work on another one as we get more feedback on the latest version.
Wow, I thought you guys were just reselling Neon like some others. This is genuinely impressive technically. It's got me looking at Unikraft Cloud for other stuff too.
That said, do you plan to offer branching or any of the other features that Neon offers? I think that's their big selling point, along with separate billing for compute and storage.
Prisma team member here... Yes, when we go GA, we'll offer features that'll give you the comfort of wanting to run your production loads on Prisma Postgres!
Essentially, you pay for database queries and events, with 60,000 included for free, which is plenty for experimenting and small projects. The price per million queries/events is then based on the plan you're subscribed to, and with Starter you have zero monthly fixed costs and only pay for queries and events above 60,000.
No CPU-time or similar metrics that are usually hard to grok.
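A worked example of that model (a sketch only; the per-million rates and the idea that the 60k allowance applies on every plan are assumptions taken from other comments in this thread, not official numbers):

```typescript
// Usage-based bill: (queries beyond the free allowance) * per-million rate, plus any fixed plan fee.
// The rates ($18/M, $8/M) and the 60k allowance are taken from other comments here, not official docs.
function monthlyBill(
  queries: number,
  ratePerMillion: number,
  fixedFee = 0,
  freeQueries = 60_000,
): number {
  const billable = Math.max(queries - freeQueries, 0);
  return fixedFee + (billable / 1_000_000) * ratePerMillion;
}

console.log(monthlyBill(1_500_000, 18).toFixed(2));    // Starter-style: ~25.92, no fixed fee
console.log(monthlyBill(1_500_000, 8, 49).toFixed(2)); // Pro-style: ~60.52 including the $49 base
```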
Take a look at the Accelerate and Pulse pricing details. Prisma Postgres comes bundled with these, so the pay-as-you-go pricing is the same: https://www.prisma.io/pricing#accelerate
We'll continue to make improvements to the pricing on the way to General Availability to make it both as easy to understand and affordable as possible.
Maybe they can take this opportunity to close the feature gap with other ORMs: no support for partial indexes, no support for partitioning, bad support for JSON columns, no support for "FOR UPDATE", no support for "now()", poor query performance.
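For what it's worth, the usual fallback for gaps like FOR UPDATE today is raw SQL inside an interactive transaction. A minimal sketch, assuming Prisma's $transaction and $queryRaw; the "Job" model and its columns are hypothetical:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// No query-builder API for FOR UPDATE, so lock the row with raw SQL and then
// keep using the typed client inside the same interactive transaction.
await prisma.$transaction(async (tx) => {
  const [job] = await tx.$queryRaw<{ id: number }[]>`
    SELECT "id" FROM "Job"
    WHERE "status" = 'pending'
    ORDER BY "createdAt"
    LIMIT 1
    FOR UPDATE SKIP LOCKED
  `;
  if (job) {
    await tx.job.update({ where: { id: job.id }, data: { status: "running" } });
  }
});
```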
Thanks for the list of things you'd like to see added or fixed, specifics are always much appreciated, so we can better understand!
Historically, we haven't been very good at explaining how we pick the issues we work on. That is being addressed and you'll soon see us share our process transparently.
This will allow everyone to better understand how we're going to continue improving the Prisma ORM.
Are you folks planning on more first-class Postgres support? From the outside, it really seems like forcing MongoDB into the ORM has forced a pretty watered-down experience for the main ORM (i.e. no typed SQL).
I don't know if it's true, but it seems like you'd be able to address the backlog more easily if you didn't have to force abstractions that work for a NoSQL db.
How does it handle backups, read replicas, and failover? And more importantly, how does it scale with load? For example, our workload fluctuates between 1 vCPU and 128 vCPUs within an hour. How would it handle this?
During EA (early access), we don't recommend using PPG (Prisma Postgres) for production loads. For now (during EA), there's no ability to scale up the base system config.
However, when we roll out the service in GA, you'll be able to "upgrade" the base system and one of the plan tiers will support autoscaling along the lines of what you've described.
The EA launch is for us to get PPG into the hands of our users and to listen closely to requests/requirements/bugs... your request has been noted, and thanks for raising it!
Last time I checked, Firecracker didn't have a very compelling I/O story, which made it in my opinion not completely adequate for running Postgres (or any other database).
In contrast, other similar VMMs seem to have a better one, like Cloud Hypervisor [1]. Why then FC and not CH? (I've nothing against FC, actually love it and have been using it, but it appears not to be the best I/O-wise.)
[1]: https://github.com/cloud-hypervisor/cloud-hypervisor
> Firecracker didn't have a very compelling I/O story
Can you provide any sources for this claim? We're running Firecracker in production over at blacksmith dot sh and haven't been able to reproduce any perf regressions in Firecracker over CH in our internal benchmarking.
The major tradeoff with Firecracker is a reduction in runtime performance in exchange for a quick boot time (if you actually need that - this obviously doesn't work if your app takes seconds to boot). There are quite a lot of other tradeoffs too, like "no GPU", because that needs some of the support they remove to make things boot fast. That's why projects like Cloud Hypervisor exist.
It absolutely is -- to be fair the previous versions of Unikraft weren't quite easy or maybe ready for wide consumption, but they took some funding and at the very least their marketing and documentation massively improved.
Hugely impressive.
A little weird that they say "no cold start" versus... "minimal cold start".
Cold starts in milliseconds != no cold starts, though I get it -- marketing is marketing and it's not wrong enough to be egregious :)
That said, super excited that someone has built a huge complex database like Postgres on Unikraft.
Been a while since I kicked the tires on Unikraft but looks like it's time to do it again, because this isn't the only software that could use this model, given an effective unikernel stack.
So nice to hear some of you just as excited about unikernels as we are!
Re: zero/minimal cold-start... Technically, you're right, though I'd say if you don't notice it's there, it's as good as not even being there. :) You get the pragmatism though, appreciate it.
Lots of cool stuff coming for Prisma Postgres that all this tech enables; looking forward to telling you all about them.
My $5 VPS can handle more queries in an hour. Like, I realize there’s more included, but…
Is it truly impossible to serve this stuff somewhere closer to cost? If this is close to cost, is this truly as efficient as it gets?
Start the plan at $59 and include 2M queries.
https://www.prisma.io/pricing
60k are included.
Often these types of SaaS are backed by the hyperscaler clouds, so their own costs tend to be high.
Don't know whether that's the case here. Agreed, though, that the pricing also raised my eyebrows.
Honestly I'm surprised they lasted this long.
(I say that with appreciation -- I always think of the former as UpperCamelCase because I never used Pascal)
https://en.wikipedia.org/wiki/Camel_case
That's because the CREATE TABLE statement creates a table named "camelcase", not "CamelCase", despite what you might assume from the query.
Welcome to quoting everything for the rest of your query life.
To be fair this kind of makes sense, especially when the thing that makes many tasks difficult is finding solutions that don't break existing usage.
I’m a bit confused about the pricing.
The docs and pricing pages on your website don’t seem to outline how the pay-as-you-go pricing will work.
Is this still being figured out?
Very much of the opinion that the Unikernel stuff (and especially what UniKraft are offering) is being massively slept on.