I LOVE Prisma. I've used Django, SQLAlchemy, Sequelize, Knex, and TypeORM in the past. All had rough edges that continually frustrated me or didn't provide the functionality I needed.
Prisma is different. It's absolutely got rough edges, but the extremely strong type safety makes Sequelize look like a joke. The query engine itself, written in Rust, combines and optimizes queries inside every tick of the event loop, so GraphQL N+1 issues are a thing of the past.
Also, the team and community behind it are amazing! I never thought having an active dev community behind an ORM would be important, but as the author of Sequelize-Typescript was forced to abandon it late last year and the author of TypeORM was also pretty much absent, Prisma was a breath of fresh air. I REALLY hope they can find a way to build a sustainable business out of it. Support packages, feature development contracts, something to keep them financially incentivized to keep making it better.
Happy to answer any questions about my experience using it if anyone has any.
To be fair, Sequelize makes itself look like a joke. I normally wouldn't call out an individual library to shit on, because I understand that a lot of work has gone into it.
But I used it for years and it was always buggy. Sometimes options wouldn't work correctly because the authors liked to do that clever JavaScript thing where they write `foo = optional_thing || default_value` even for BOOLEAN options. I always had issues with it not being able to truncate/drop tables correctly that had foreign keys (it didn't attempt to order them intelligently), some of the options weren't compatible with each other even though they were orthogonal (I'm thinking about the snake_casing options and the created_at/updated_at column features), etc, etc.
It just... didn't actually work. Over many versions. I think I used it from late 3.x somewhere through 5.x.
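To illustrate the `foo = optional_thing || default_value` pitfall mentioned above: `||` substitutes the default for any falsy value, so an explicit `false` gets silently discarded. A minimal sketch (the option name here is hypothetical, not an actual Sequelize option):

```typescript
// Demonstrates why "||" is the wrong way to default a BOOLEAN option:
// an explicit `false` is falsy, so it gets replaced by the default.
function makeOptions(opts: { timestamps?: boolean }) {
  // Buggy: the caller's explicit `false` is discarded.
  const timestampsBuggy = opts.timestamps || true;
  // Correct: `??` only substitutes the default when the option is absent.
  const timestampsFixed = opts.timestamps ?? true;
  return { timestampsBuggy, timestampsFixed };
}
```

Calling `makeOptions({ timestamps: false })` yields `timestampsBuggy === true` (the caller's choice is lost) but `timestampsFixed === false`.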
> I normally wouldn't call out an individual library to shit on, because I understand that a lot of work has gone into it.
I don't think I've ever called out a library, or not felt truly thankful that it was there to help me. But Sequelize... man...
When I first got into Node and picked up Sequelize, after working for years with .NET, Entity Framework, NHibernate and the like, it just felt like horrible-everything. I forced myself to use it in some projects, because Node is cheap, and I kept thinking that I was missing something. Some brilliance behind the questionable... everything. No. I can't even bring myself to think about that design mess. Sorry for the rant; Sequelize makes me feel insecure, and little, in the chaos of the Universe.
Prisma 2 is a delight to use. I gave it a try after typeorm-model-generator's author suggested using something other than TypeORM. Prisma blows other JS/TS ORM libraries out of the water, and it integrates so well with VS Code.
Huh, care to explain this one? I’m using Prisma with Apollo Server without doing anything fancy in my resolvers. I just assumed I’m getting N+1 issues but didn’t bother to optimize yet.
In short, the separate engine process allows them to combine every findX call you make during one tick of the event loop into a single SQL query, following the dataloader pattern, so you don't have to implement it yourself. I'm sure @nikolasburk can shed some more light if you're interested.
Having used in the past both Django and Rails ORMs, I totally agree with this. Prisma is awesome, it's been the best experience I've ever had so far with SQL databases.
Laravel has Eloquent and it’s an absolute delight to use. I’ve used Rails and Django, and I always felt as if I was fighting the ORM, where as with Eloquent everything just seems natural.
I am a former PHP developer, absolutely loved Laravel and adored Eloquent as the ideal ORM. I switched to Node/JavaScript in 2015 and have been chasing a good ORM since, nothing could ever compare with Eloquent in Nodeland, I all but gave up and started building my own, in TypeScript to match Eloquent as much as possible.
Then Nikolas Burk reached out to me and did his absolute best to convert me, but I was stuck in my ways, it HAD to behave like Eloquent, or it was not good enough and that DSL layer? No thanks, I don't like it.
Started writing code for my own ORM when I thought, "What the heck am I doing? This gets me nowhere," and messaged Nikolas back saying I was dropping everything and planned to give Prisma a real solid try.
I'm so glad I did. I LOVE it now. I consider it a part of my GOAT stack.
You can disable caching if that's a problem for your application. But the idea is that DataLoader simply holds onto promises that requested an entity, and fires them all in batches according to a scheduler function. In Node.js, the default is to use scheduler magic (relies on how the event loop works).
> I do find it confusing I always thought Prisma was GraphQL related.
This is a common misconception that stems from our history as a company and being early contributors to the GraphQL ecosystem. With the move to Prisma 2 however, there is no native GraphQL layer in Prisma any more. I've talked about this extensively in a recent livestream on Youtube [1] if you want to learn more :) There's also this article "How Prisma and GraphQL fit together" [2] that explains the historic dimensions of this if you're interested.
> It's unclear what's the pros are of Prisma compared to TypeORM.
I guess you could argue that one benefit is the superior type-safety Prisma provides [3]. Other folks have also called out that they prefer the way data is modeled with Prisma (via the Prisma schema), the migration system, the active maintenance and regular releases, the community, the support, and the thorough documentation. Ultimately it'll come down to your personal preference as to which one is more appropriate for your project :)
I don't like ORMs, but Prisma is one of the better ones. It avoids a lot of shortcomings of popular ORMs. I hope they find a way to make it work as a company. I've been following them over the years from the beginning of their journey through various pivots.
To use an ORM effectively you must learn both the API of the ORM and SQL. It does not absolve you from learning SQL, which I think was one of its unstated attractions for junior developers. Then you have to map between the API of the ORM and how it translates that to SQL under the hood. Then you have to understand how it handles caching and sessions and how all of that works under the hood. This has been the source of so much complexity and so many bugs over the years that I no longer think the benefits are worth the cost. As always, there are exceptions and caveats, but I now believe it's better to use SQL directly than to use an ORM.

In general there is too much complexity in all the layers of abstraction in software development these days, and I think the industry is strangely blind to the consequences of this. Complexity is death to software, and it should not be taken on lightly. The entire craft of a software engineer boils down to eliminating complexity and simplifying problems; the better you can do that, the more productive an engineer you will be.[1]
> I think the industry is strangely blind to the consequences of this
Well, I think if you're not blind to those things to some degree, you're stuck forever in a best-practices search, which I feel like more often than not ends up implying you have to rewrite your stack.
For example, I do most of my development thru Hasura now. It writes insanely performant SQL queries translated directly from GraphQL. As a data consumer, you only have to worry about the shape of the data you want, and none of the implementation details. There is one endpoint, and you pass it queries against the schema it exposes. Implementation details are abstracted heavily here, but in a "good" separation of concerns way.
I will never hand-write SQL again. Why would I? My front-ends can load everything in my data model expressively and with type safety thanks to graphql-codegen.
Now imagine you read this and realize you work in a completely different method and this answers some of your problems. You may start parallel development, but you certainly can't stop forward progress just to evaluate your stack. So I think the blinders are key to progressing in the short term despite the productivity losses you're taking in the long term, and then when you phrase it like this it is surely no surprise companies behave like that.
> Well, I think if you're not blind to those things to some degree, you're stuck forever in a best-practices search, which I feel like more often than not ends up implying you have to rewrite your stack.
Can you expand on that? The chain of reasoning is probably clear to you, but I can't follow it.
> For example, I do most of my development thru Hasura now. It writes insanely performant SQL queries translated directly from GraphQL. As a data consumer, you only have to worry about the shape of the data you want, and none of the implementation details. There is one endpoint, and you pass it queries against the schema it exposes. Implementation details are abstracted heavily here, but in a "good" separation of concerns way.
I like Hasura a lot, I've advocated it at a previous company I worked for, and after many months of meetings I got it approved to add to our stack - which was then never prioritized, but that's another story.
> I will never hand-write SQL again. Why would I? My front-ends can load everything in my data model expressively and with type safety thanks to graphql-codegen.
I don't think writing GraphQL is superior to writing SQL, and again you have to think not just in terms of GraphQL but also in terms of how Hasura translates that to SQL under the hood. I am slightly biased though, because I'm working on a framework to query the db using SQL from the frontend; I'm curious how you think that would compare with Hasura: https://github.com/sqljoy/sqljoy
> Now imagine you read this and realize you work in a completely different method and this answers some of your problems. You may start parallel development, but you certainly can't stop forward progress just to evaluate your stack.
I'm not really sure how this connects to my argument about complexity, although I think the link is clear to you, can you expand on that?
> I now believe it's better to use SQL directly than to use an ORM
I reached the same conclusion. Everything is so much simpler that way. I ask the database for the data, the database gives it to me. That's the end of it.
I used to like highly abstract libraries but the truth is they provide the lowest common denominator in features at a huge cost in complexity. Now I'd rather work as closely to the implementation as possible. I feel like I actually understand how stuff works now.
ORMs are great when you have to update large graphs of objects. They are not great for querying. The performance of querying is so unpredictable in an ORM that it's better to handle these read queries by hand. Things that tend to screw up performance are the N+1 query problem and generated SQL causing bad query plans. One needs to get to know one's database query planner in order to get the best performance, and that requires dealing with the raw SQL.
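The N+1 pattern mentioned above is easy to show with a self-contained simulation. The fake `db` below just counts round trips; in a real app each call would be a SQL query:

```typescript
type Post = { id: number; authorId: number };
type Author = { id: number };

// A toy in-memory "database" that counts how many queries were issued.
function makeDb(posts: Post[], authors: Author[]) {
  let queries = 0;
  return {
    queryCount: () => queries,
    allPosts: async () => { queries += 1; return posts; },
    authorById: async (id: number) => { queries += 1; return authors.find(a => a.id === id); },
    authorsByIds: async (ids: number[]) => { queries += 1; return authors.filter(a => ids.includes(a.id)); },
  };
}

// N+1: one query for the posts, then one query per post for its author.
async function loadNaive(db: ReturnType<typeof makeDb>) {
  const posts = await db.allPosts();
  for (const p of posts) await db.authorById(p.authorId);
}

// Batched: one query for the posts, one for all distinct authors.
async function loadBatched(db: ReturnType<typeof makeDb>) {
  const posts = await db.allPosts();
  await db.authorsByIds([...new Set(posts.map(p => p.authorId))]);
}
```

With 100 posts, the naive version issues 101 queries while the batched version issues 2, which is exactly the kind of unpredictability that shows up only once the data grows.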
"The ORM saves us time working, but they don't save us time learning." Paraphrasing Joel Spolsky. Also, I hate every single time he is right, which at this point is daily.
PureORM[1] is an implemented thought experiment on what a "pure" object mapping library would look like - where you write regular native SQL and receive back properly structured (nested) pure business objects.
It would contrast against traditional ("stateful") ORMs which use query builders (rather than raw SQL) to return database-aware (rather than pure) objects.
The name pureORM reflects both that it is pure "ORM" (there is no query builder dimension) as well as the purity of the mapped Objects.
I do agree with the argument that you need to understand both the ORM and SQL. The really annoying part is that you also need to understand, to some degree, how the ORM translates queries into SQL.
I find ORMs very convenient and useful for simple queries, and especially for inserts/updates with many-to-many relations. And that is a very large part of a typical application. But you can very quickly meet the limits of an ORM with more complex queries, and I find it easier to just step down to SQL in those cases before learning the more complex parts of the ORM.
I despise ORMs. I don't judge programmers very often, but invariably if there is an ORM, there is a fragile, tightly coupled, difficult-to-maintain project nearby
I have adopted Prisma in my latest project and I have mixed feelings about it.
- The generated client is top-tier. Fully specified in TypeScript with intelligent types that can be extended by the user.
- The schema language is great. It provides a cohesive experience that can fully express your database structure, and it also provides a migration framework to manage that structure's evolution.
- There's no hook for implementing access control at the model level, so you end up needing to create a higher-level API around the base queries to implement these yourself, which is a fairly large investment. I'd love to see a pair of functions: one that can modify the query before it gets sent to the Prisma engine (to add extra query values), and a second that can filter models before they are returned to the caller.
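That pair of hooks can be approximated today by wrapping the query functions yourself. The sketch below is generic and all names in it are hypothetical, not Prisma API: `beforeQuery` rewrites the arguments (e.g. injecting a tenant filter) and `afterQuery` filters rows before they reach the caller.

```typescript
type AccessHooks<Args, Row> = {
  beforeQuery: (args: Args) => Args;   // e.g. force a tenant/owner filter
  afterQuery: (rows: Row[]) => Row[];  // e.g. drop rows the caller may not see
};

// Wraps any query function (such as a findMany-style call) with the
// before/after access-control hooks described above.
function withAccessControl<Args, Row>(
  run: (args: Args) => Promise<Row[]>,
  hooks: AccessHooks<Args, Row>,
): (args: Args) => Promise<Row[]> {
  return async args => hooks.afterQuery(await run(hooks.beforeQuery(args)));
}
```

The cost is exactly the "higher-level API around the base queries" mentioned above: every call site must go through the wrapper rather than the raw client.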
Where Prisma falls short is in testing. The testing story in Prisma is that you can either use a live database, or you can completely stub out all calls to Prisma in your application. The latter means you end up writing a bunch of tests for implementation rather than for behavior, or you end up manually writing an in-memory database that fits the Prisma API. The former means that you need to completely recreate your database after every test case.
The discussions on GitHub seem to indicate that the old "wrap the test in a transaction" trick is forbidden, and Prisma seems architecturally set up to ensure that you can't even hack this behavior in.
All in all, we probably won't switch away from it, but I will continue to look for a better way to test my endpoints.
Thanks for sharing your thoughts. The lack of Client extensibility is definitely a shortcoming right now. We're thinking of ways to wrap the Client a bit better. Middleware is a decent workaround, but it doesn't fit all these use cases.
On testing, we're working on some documentation right now on how to mock the Prisma Client. I could see us generating a mockable client to help with this.
Spinning up a temporary instance (e.g. a postmaster) for a test off a pre-built data directory (bonus: use zfs or btrfs or similar snapshots to make this even faster) is, IME, still effectively "completely recreate", but it's both much more robust than wrap-in-transaction and generally not the slowest thing about your tests (and of course it allows reliable parallelisation).
Bit of setup effort but well worth it IME even in situations where your tooling isn't strongly pushing you towards the approach.
While depending on a live DB for testing isn't great, why not just blow away database state after every test by running TRUNCATE / DELETE against all tables? DELETE especially is very fast (couple ms) if you're not inserting large amounts of data during your test runs.
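A small helper in the spirit of that suggestion: build a single statement that clears every app table between tests. Postgres syntax is assumed here; one TRUNCATE with CASCADE sidesteps foreign-key ordering, and RESTART IDENTITY resets sequences so ids are stable across tests.

```typescript
// Builds one Postgres TRUNCATE statement covering all given tables.
// CASCADE handles foreign keys; RESTART IDENTITY resets serial columns.
function truncateAllSql(tables: string[]): string {
  const quoted = tables.map(t => `"${t}"`).join(", ");
  return `TRUNCATE TABLE ${quoted} RESTART IDENTITY CASCADE;`;
}
```

In a test teardown you would send the resulting string through whatever raw-SQL escape hatch your client offers (in Prisma's case, the unsafe raw-execute call, since the table list is dynamic).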
2) It is at least an order of magnitude slower than not using a database at all.
Combined this makes for very slow tests. Certainly for larger applications with thousands of tests.
Having said that, I don't think there is a single right answer to the problem of testing applications that use a database and your suggestion still can be a valid solution.
Hard disagree. If you’re not leveraging the features of your SQL database to make your life easier, your queries faster, and keep your data consistent & valid, then you probably didn’t need SQL to begin with.
Which, yes, is a common state for many applications to be in: wrestling with SQL while using it very poorly, in an effort to act like they're not.
[edit] I disagree with the quoted statement, that is, agree with the post. I worded that poorly.
[edit edit] I've been thinking about what you get from using an RDBMS while avoiding writing or knowing much about SQL or using any of the probably-extremely-nice features of your particular SQL DB (why is everyone always so eager to be DB-agnostic? If you do it right your DB will survive, and greatly ease, several rewrites of your application!) and I'm coming up with:
1) basic locking (I assume you don't want to know how to actually use transactions, beyond what your ORM does automatically, so you're just getting the basics), and
2) some probably-badly-insufficient indices, and
3) an ORM should at least get you a little normalization if you just follow patterns from its docs & examples, I suppose, though you're gonna need to understand some SQL to get much benefit out of it in your DB design and in your use of the DB, so...
An entire RDBMS seems like serious overkill if that's all you're really using.
This statement is certainly provocative (great that it was the first thing picked up here :D) but I'm happy to explain our rationale for this a bit more.
SQL is an impressive technology and has stood the test of time! Yet, we claim that it's not the best tool for application developers who are paid to implement value-adding features for their organizations.
SQL is complex, it's easy to shoot yourself in the foot with and its data model (relational data / tables) is far away from the one application developers have (nested data / objects) when working in JS/TS. Mapping relational data to objects incurs a mental as well as a practical cost!
This is why we believe that in the majority of cases (which for most apps are fairly straightforward CRUD operations) developers shouldn't pay that cost. They should have an API that feels natural and makes them productive. That being said, for the 5% of queries that need certain optimizations, Prisma allows you to drop down to raw SQL and make sure your desired SQL statements are sent to the DB.
I see Prisma somewhat analogous to GraphQL on the frontend, where a similar claim could be: "Frontend developers should care about data, not REST endpoints". GraphQL liberates frontend developers from thinking about where to get their data from and how to assemble it into the structures they need. Prisma does the same by giving application developers a familiar and intuitive API.
> SQL is an impressive technology and has stood the test of time! Yet, we claim that it's not the best tool for application developers who are paid to implement value-adding features for their organizations.
I'm not sure about that. We've recently switched from JavaScript based querying code to mostly raw SQL, and we've reduced our code to about 25% of what it was, and it's much simpler to understand than it was before.
> I see Prisma somewhat analogous to GraphQL on the frontend, where a similar claim could be: "Frontend developers should care about data, not REST endpoints".
I'm not sure about Prisma, but IMO that GraphQL model isn't great. Realistically (for performance, etc) it will make a difference where that data came from. Not for super simple queries, but super-simple queries are super simple to do with REST/SQL anyway. I also feel like this distinction between front-end and back-end developers isn't great.
The GraphQL approach also leads to deployment issue with non-web clients. e.g. it can take a day to get an iOS build released, and there is no way to force users of older clients to update, so this can take months. So fixing a bad query is difficult. With a proper backend you can just swap out the query.
> Yet, we claim that it's not the best tool for application developers who are paid to implement value-adding features for their organizations.
You have two options when marrying RDBMS SQL and OO: either mismanage the relational data so that developers can use a class hierarchy, or stop using a class hierarchy to represent data.
IME, applications come and go. They get rewritten, thrown away, obsoleted. The database, however, is there to stay. Mismanaging the data purely so that the application developers don't have to touch all that icky relational stuff almost always results in more work for less returns.
I think SQL gets a lot of undeserved praise that I’m having a difficult time understanding. The only impressive thing about SQL is its prevalence but that’s a pretty poor yardstick unless one thinks that an appeal to popularity is an indicator of quality.
Now let me count the ways in which SQL is bad:
- it composes poorly due to its unwieldy cobolesque syntax
- it is a leaky abstraction revealing a lot of underlying implementation tradeoffs
- it doesn’t properly implement Codd’s relational model
- it is poorly standardized with a ton of proprietary extensions and alterations present in virtually every implementation
- it is still poorly supported by tools because the model metadata lacks any standard interface to make universal tooling possible
A lot of the praise for SQL is just bandwagon hopping and cargo cult behaviour, or a lack of vision by most people of how things could be much better.
Oh don’t get me wrong - ORMs have their place and do enable higher agility. They do seem magical the first time you encounter them.
What I object to is the blanket "shouldn't care about SQL" statement, because that's what empowers people to use the ORM indiscriminately without understanding what's under it (an understanding for which SQL is relevant). Then it's the non-value-adding developers' job to come in and untangle the mess, which usually could have been avoided by dropping down one level, looking at the SQL that was generated (or maybe analyzing the query plan, again kind of hard if you don't understand SQL) and realizing the ORM is doing something crazy.
Actually nested objects kinda suck even for frontend devs, if you're trying to keep things in sync efficiently.
Oftentimes it's much better to be able to look up an object by the entity's ID in some Map than to consume endpoints returning crazy nested partial data for the current view.
Depends on how much your app relies on client side caching and incremental sync of data.
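The flat, id-keyed shape argued for above can be sketched as follows. Entities live in Maps, and views join by lookup at read time, so a single update is seen by every consumer without reconciling nested copies. All names here are illustrative:

```typescript
type User = { id: number; name: string };
type Post = { id: number; authorId: number; title: string };

// Normalized client-side store: one Map per entity type, keyed by id.
const users = new Map<number, User>();
const posts = new Map<number, Post>();

// A view joins by id lookup at read time instead of storing a nested copy,
// so updating a user is immediately visible in every post view.
function postView(id: number) {
  const post = posts.get(id);
  return post && { ...post, author: users.get(post.authorId) };
}
```

Updating `users` once updates what every `postView` returns, which is the sync property nested responses make hard.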
That statement is so wrong. All ORMs suffer from leaky abstraction, and good ORMs would never hide that fact, nor would they try to deny native access to the database and force an all-or-nothing principle onto the developer.
If "application developers" care about data that is stored in a relational database, they have to care about the database itself and the access patterns (SQL).
ORMs _only_ provide convenience! They cannot substitute knowledge about the underlying technologies. And they cannot hide complexity, but shift it to the ORM.
I agree, and find the reluctance to care about SQL interesting.
Sure, SQL is not perfect, and has its flaws. But instead of learning an established, declarative language designed for data access, there are developers who would much rather learn new tools and libraries to avoid it. If you need to use another programming language for data access, you're off looking for another ORM. Or the next, better ORM of the year comes out.
There are stored procedures at my work that are more than 20 years old. They don’t need to change when the application code calling it is uplifted.
It’s not that much different than a modern React dev not wanting to understand how a plain HTML form works.
Right. Data management is a hard problem. Just look at SQL query planners, they are insane - but they are not perfect, and they only work on the premises you provide (schemas, indexes, memory, etc). The idea that you could introduce a library that fixes all data management issues for you is not well considered.
Yeah, ORMs were marketed as "if your database changes, you don't have to change anything!". What about if your programming language changes? You have to learn an entirely new ORM each time. What about all of the people who have to read your code? Which ORM's do they know? Everyone knows SQL.
I agree. Clean and efficient abstractions (oxymoron?) to handle data management just doesn't exist. ORMs are great until they're not, at which point you do need to understand SQL and idiosyncrasies of the underlying relational database.
I tend to prefer avoiding ORMs because of this, but unfortunately development expectations in the current culture don't pad time enough to do that, so you're often reliant on an ORM and wait until things break before someone paying the bills will be swayed to let developers go in and clean things up. The "wait until it breaks to go fix and redesign everything" mentality isn't unique to ORMs, though; it plagues any abstraction, framework, or library that gets enough done to supply something mostly functional on a shorter time scale.
Not just this - I'm currently working on an application built by people with no understanding of databases beyond "they're kind of like key value stores". No attention is given to transactions, stale updates, locking mechanisms or anything more complicated than the equivalent of SELECT and UPDATE. It is an HTTP service running on many threads, so the problem is only amplified.
If you're going to use a SQL database, you need to understand what it is and what rules to obey to get its benefits. Hiding that complexity away makes people think they're writing a fast, concurrent high-availability (TM) service while it's actually a ball of mud that sometimes gets into the right shape.
While some people are hard-lined enough on their anti-ORM stance that it gets sort of weird, I agree with you. My beef with ORMs comes from being burned a couple of times by a super inefficient aggregation ActiveRecord did on GROUP BY queries: 1. it took a really long time to figure out why a particular page in our app was loading slowly, and 2. we ended up having to write raw SQL to fix it.
I think the answer of whether to use one depends on the type and load/volume of app you're working with, combined with the dynamics, size, and skill level of your team(s). I'm extremely comfortable writing, profiling, query planning, and debugging SQL queries. Others aren't, and therefore having an ORM to query the DB with the syntax of the language you're using in your projects makes way more sense, if nothing else than to speed your team up.
Not sure when you used ActiveRecord, but slow query logging helps a lot to identify these issues. I also appreciate that in the docs for ActiveRecord they make it very clear that jumping to raw SQL is absolutely fine and normal, and they make the interface for doing that very nice.
I have yet to find the person who, being comfortable with both raw SQL and a particular ORM, reaches for a new ORM when faced with the choice of learning it or using raw SQL. To be honest, I haven't met in person anyone who reached for the familiar ORM, either, for anything complex in a team environment, but that's a more aggressive statement.
I mention this because to my mind it reinforces my belief that learning an ORM (which I've done in the past, and then never used again) isn't worth the effort; learning SQL is, if for no other reason than it's portable (and you'll have to learn it anyway when the ORM's abstraction leaks).
The answer is both. Devs cannot work on a system (involving "data") without knowing about that data, how it works, how it is structured, and how to store and retrieve it. Pretending you can do one without the other leads to a boatload of issues in the medium and long term.
If you define yourself as a developer, do yourself a favour and learn SQL. It's probably much easier than whatever programming language you use and will save you many headaches.
Over a year ago, I was investigating using Prisma as the ORM for a GraphQL API over a Postgres database. When doing a proof of concept, I discovered that under the hood @prisma/client was spinning up its own GraphQL server that it would send requests to in order to generate SQL to send to Postgres. This extra middleware layer between my frontend code and Postgres generated some pretty poorly performing queries that took 50% longer to complete than the queries generated by using Hasura as our whole GraphQL API.
A quick glance at the @prisma/client code makes it seem like this pattern is still the case, since they have an "engine-core" package that downloads a binary server behind the scenes and runs it on a free port in the background.
I'm on the RedwoodJS core team and can 100% confirm the early performance issues. That hasn't been the case recently, though: I've seen bulk operations perform just fine, and even data pipeline use cases. Now the issue is DB performance in a serverless infra, which needs extra setup and config to perform consistently and at scale.
I'm completely perplexed at some of the functionality that's absent in Prisma. I'm coming from the Rails/Django world for reference. Can anyone help me understand if I'm out in left field, or does this technology only cover basic use cases?
- No supported way to do a case-insensitive sorting. https://github.com/prisma/prisma/issues/5068
- Can’t sort by an aggregate value like user’s post count. https://github.com/prisma/prisma/issues/3821
- Can’t control between inner/left join.
- Can’t do subqueries.
- Migration rollbacks are experimental, maybe unsupported now? At least I only see mention in GitHub issues and not in the docs.
- Transactions appear to expect a series of queries? It doesn’t look like you can execute any app code during a transaction?
- No support for pessimistic row locking e.g. SELECT… FOR UPDATE ?
- No way to mixin raw query partials like `where('name ILIKE ?')`. You either need to write the whole query raw or not.
- Validations are done at the database level.
  - Complex validations seem tricky to write in this format.
  - No built-in way to make clean user-facing validation messages.
  - You can’t check that a model instance is valid without just trying to insert it into the database.
  - The official documented validation example has you connecting via psql and adding a constraint?
  - So following the official example, my validations aren’t documented in the codebase via a model or a migration?
  - Also, they don’t have a validation example documented if you’re using MySQL instead of Postgres?
- Cascading deletes are handled the same way as validations. As in Prisma basically does nothing other than document how to implement it yourself outside of the library.
- No model methods. I guess that's not a surprise because it's "not an ORM". A model really is just a data mapping? Anyway, it seems like you would end up rolling your own wrapper around this, and there are no recommendations on standardized architecture.
- No callbacks. These have been controversial at times so are teams using Prisma writing something akin to "services" instead?
- Syntax nitpick, but one of these is vulnerable to SQL injection, and it seems really easy for a new developer to get mixed up:
  - prisma.$queryRaw(`SELECT * FROM User WHERE email = ${email}`);
  - prisma.$queryRaw`SELECT * FROM User WHERE email = ${email}`;
- No way of batch loading like ActiveRecord’s find_in_batches / find_each. All objects are just loaded into memory?
- No way of hooking into queries for instrumentation. e.g. ActiveSupport::Notifications.subscribe
FYI - This is after fairly brief research. Not guaranteed 100% accurate.
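On the injection nitpick above: the reason the first `$queryRaw` variant is dangerous and the second isn't is that a plain call receives one already-interpolated string, while a tagged template receives the static parts and the values separately, letting the library turn values into bound parameters. A toy tag (not Prisma's implementation) makes the difference visible:

```typescript
// A toy template tag: joins the static chunks with a placeholder and
// returns the interpolated values separately, the way a SQL library
// would bind them as parameters instead of splicing them into the text.
function toyTag(strings: TemplateStringsArray, ...values: unknown[]) {
  return { text: strings.join("?"), params: values };
}

const email = "a@example.com'; DROP TABLE \"User\";--";
const q = toyTag`SELECT * FROM User WHERE email = ${email}`;
// q.text keeps a placeholder; the hostile value travels in q.params instead.
```

Called as a regular function, the tag never sees the pieces; it just gets the fully concatenated (and injectable) string.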
AFAIK many of these issues are why the company I worked for ended up giving up on Prisma. We ended up switching to Objection.js.
The biggest issue was transactions, which was a non-starter for us. It wasn't very helpful when, after we explained our use case in the GH discussions, we were told "we are doing it wrong"; being told to write rollback code manually instead of using transactions was a very poor answer.
I can clarify one point, relative to migration rollbacks. We indeed chose not to implement down migrations as they are in most other migration tools. Down migrations are useful in two scenarios: in development, when you are iterating on a migration or switching branches, and when deploying, when something goes wrong.
- In development, we think we already have a better solution. Migrate will tell you when there is a discrepancy between your migrations and the actual schema of your dev database, and offer to resolve it for you.
- In production, currently, we will diagnose the problem for you. But indeed, rollbacks are manual: you use `migrate resolve` to mark the migration as rolled back or forward, but the action of rolling back is manual. So I would say it _is_ supported, just not as convenient and automated as the rest of the workflows. Down migrations are somewhat rare in real production scenarios, and we are looking into better ways to help users recover from failed migrations.
So future features notwithstanding, is the typical Prisma workflow that if a migration failed during a production deploy the developer would have to manually work out how to fix it while the application is down?
The first thing I've looked at on your site is how migrations work. Because honestly, I think that's one of the best things about Django. They just got it right, and as you say, not many other tools get close.
I wonder if you have looked at how it works. Because they have put in something like a decade to make it work and it's very powerful and a joy to use.
Down migrations are indeed very useful and important once you get used to them. First and foremost, they give you very strong confidence in changing your schema. The last time I told someone I was helping with Django to "always write the reverse migration" was yesterday.
There's no way you can automatically resolve the discrepancies you can get with branched development, partly because you can use migrations to migrate the data, not just to update the schema. It's pretty simple as long as we're just thinking about adding a few tables or renaming columns: you just hammer the schema into whatever the expected format is according to the migrations on that branch. But even that can go wrong: what if I introduced a NOT NULL constraint on a column in one of the branches and I want to switch over? Say my migration set a default value to deal with it. Hammering won't help here.
The thing is that doing it the way Django does is not that hard (assuming you want to write a migration engine anyway). Maybe you've already looked at it, but just for the record:
- they don't use SQL for the migration files, but Python (it would be TypeScript in your case). This is what they generate.
- the Python files contain the schema change operations encoded as Python objects (e.g. `RenameField` when a field gets renamed and thus the column has to be renamed too, etc.)
- they generate the SQL to apply from these objects
Now, since the migration files themselves are built of Python objects representing the needed changes, it's easy for them to have both a forward and a backward migration for each operation. You could say that this doesn't allow for customization, but they have two special operations: one for running arbitrary SQL (`RunSQL`, which takes two params: one string for the forward and one for the backward migration) and one for arbitrary Python code (`RunPython`, which takes two functions as arguments: one for the forward and one for the backward migration).
One would usually use RunSQL to do the tricky things that the migration tool can't (e.g. add db constraints not supported by the ORM) and RunPython to do data migrations (when you actually need to move data around due to a schema change). And thanks to the above architecture you can actually use the ORM in the migration files to do these data migrations. Of course, you can't just import your models from your code, because they will have already evolved if you replay older migrations (e.g. to set up a new db or to run tests). But because the changes are encoded as Python objects, they can be replayed in memory and the actual state valid at the time of writing the migration can be reconstructed.
And when you are creating a new migration after changing your model you are actually comparing your model to the result of this in-memory replay and not the db. Which is great for a number of reasons.
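Since the comment notes this would be TypeScript in Prisma's case, here is a hypothetical sketch of the core idea: migration operations as objects that each know how to emit both their forward and backward SQL. The class names mirror Django's (`RenameField`, `RunSQL`); nothing here is actual Prisma or Django API.

```typescript
// Hypothetical sketch: schema changes encoded as objects that can emit
// both forward and backward SQL, so rollbacks come for free.
interface Operation {
  forwardSql(): string;
  backwardSql(): string;
}

class RenameField implements Operation {
  constructor(private table: string, private from: string, private to: string) {}
  forwardSql() {
    return `ALTER TABLE "${this.table}" RENAME COLUMN "${this.from}" TO "${this.to}";`;
  }
  // The reverse is derived automatically: just swap the two names.
  backwardSql() {
    return `ALTER TABLE "${this.table}" RENAME COLUMN "${this.to}" TO "${this.from}";`;
  }
}

// Escape hatch like Django's RunSQL: the author supplies both directions.
class RunSQL implements Operation {
  constructor(private forward: string, private backward: string) {}
  forwardSql() { return this.forward; }
  backwardSql() { return this.backward; }
}

const migration: Operation[] = [
  new RenameField("User", "name", "fullName"),
  new RunSQL(
    `ALTER TABLE "User" ADD CONSTRAINT email_ck CHECK (email <> '');`,
    `ALTER TABLE "User" DROP CONSTRAINT email_ck;`,
  ),
];

// Rolling back = replaying the operations in reverse, backward direction.
const down = [...migration].reverse().map((op) => op.backwardSql());
```

Because every operation carries its own inverse, the engine never has to diff the database to undo a migration; it just replays the list in reverse.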
@nikolasburk can you comment? Before compiling this list I was seriously considering using Prisma to kick off a big upcoming project. In the spirit of being open-minded I'd really like to know if I'm fundamentally misunderstanding Prisma capabilities.
This is a pretty extensive list! Are all of these points full blockers for you, or are there individual points that you care more about while others might be rather "nice to have"? If most of them are real blockers, you probably shouldn't use Prisma [1], and that's ok :)
Prisma certainly is not perfect and whether you should use it depends on your project and individual requirements. What I can tell you is that we are shipping releases [2] with new features and improvements every two weeks. We are also very eager to learn about more use cases that people want to accomplish with Prisma. The best way to bring these to our attention is by commenting on existing GitHub issues and creating new ones if the one for your use case doesn't exist yet. This helps us prioritize and implement these new features. We also have a roadmap [3] where you can see all the features that we are currently working on.
I’ve been using Prisma for a while and I quite like it. Best ORM for TypeScript I’ve used.
My biggest issue with it is testability.
Sure, I can mock the Prisma client in tests with Jest or something, but if I want to test state I pretty much have to reimplement an in-memory DB using JS mocks.
I can also connect to a real DB made for testing but it’s quite overkill, I’m usually not interested in testing Prisma itself, just my own code.
I wish Prisma had a test mode where you could replace the client with a temporary in-memory SQLite DB without writing tons of boilerplate.
A drop-in replacement for PrismaClient for tests would have been wonderful.
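In the meantime, such a test double can be hand-rolled for the slice of the API a test actually touches. A minimal sketch (the `user` model with an `email` field is a made-up example, and this covers only `create` and `findMany` with a simple equality `where` filter, a tiny fraction of the real generated client):

```typescript
// A hand-rolled, in-memory stand-in for a slice of PrismaClient's API.
type User = { id: number; email: string };

function makeFakeClient() {
  const users: User[] = [];
  let nextId = 1;
  return {
    user: {
      // Mimics the { data: ... } argument shape of the generated client.
      async create(args: { data: Omit<User, "id"> }): Promise<User> {
        const row = { id: nextId++, ...args.data };
        users.push(row);
        return row;
      },
      // Supports only simple field-equality filters.
      async findMany(args?: { where?: Partial<User> }): Promise<User[]> {
        const where = args?.where ?? {};
        return users.filter((u) =>
          Object.entries(where).every(([k, v]) => (u as any)[k] === v),
        );
      },
    },
  };
}
```

Tests can then construct `makeFakeClient()` and inject it wherever the real client would be passed, though keeping such a fake in sync with the schema is exactly the boilerplate the parent comment is complaining about.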
Could you elaborate on why you find a real DB overkill for testing purposes?
Typically you would have a set of unit tests which test your code only and use mocks for external dependencies, in this case you would mock Prisma.
Tests involving DB state are not unit tests anymore and require a bit more setup but most frameworks provide a test harness/runner to quickly spin up a part of your app with dependencies resolved (instead of mocked) so you can call controller/handler/service methods directly and verify their behavior.
The boilerplate for this should mostly be limited to switching out an environment variable for the database connection and maybe setting up some fixtures. Is this not possible in Prisma?
I wonder if it's possible to start a transaction at the beginning of each test and then roll it back. That way we just need a DB to run, but it will never really be written to.
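The pattern being suggested looks roughly like this, simulated here with an in-memory store so the sketch runs standalone; with a real database, `begin`/`rollback` would be actual `BEGIN`/`ROLLBACK` statements on the test's connection (and, as discussed elsewhere in the thread, Prisma's transaction API may not let you hold a transaction open this way):

```typescript
// Simulated store: begin() snapshots state, rollback() restores it.
// With a real driver these would be BEGIN / ROLLBACK on one connection.
class FakeDb {
  rows: string[] = [];
  private snapshot: string[] | null = null;
  begin() { this.snapshot = [...this.rows]; }
  rollback() {
    if (this.snapshot) this.rows = this.snapshot;
    this.snapshot = null;
  }
}

// Run a test body inside a transaction that is ALWAYS rolled back,
// so no test can leak writes into the shared database.
async function withRollback(db: FakeDb, test: (db: FakeDb) => Promise<void>) {
  db.begin();
  try {
    await test(db);
  } finally {
    db.rollback(); // runs even if the test body throws
  }
}
```

Inside the callback the test sees its own writes; after `withRollback` returns, the database is back to its pre-test state.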
Or maybe just invest in this: https://github.com/oguimbal/pg-mem/issues/27 He has already put a lot of effort into it, and I think with just a bit more support he could support Prisma.
Isn't that pretty much always the dilemma with database stuff and testing?
On the one hand, you can wrap your DB operations in some kind of mockable interface(s), and then test your business logic.
On the other hand, you really need to test that the actual SQL ops work anyway- what if you screwed up a foreign key constraint?- that wouldn't show up in your business logic tests or your in-memory SQLite or whatever.
I'm sure a lot of hard work went into this, so I mean no offense here when I give my honest feelings.
The big differences between v1 and v2 make me uneasy. I am reminded of the constant API churn of React-Router, or the complete change from AngularJS to Angular. This approach makes me very hesitant to learn the library, because I am afraid of having the choice of painful upgrade or a dead dependency down the road.
I am also wary of the library being VC-funded, because I am afraid of what kinds of features will be held at ransom down the line once they need to start making money.
Maybe it's awesome now, but I doubt it.
I don't think I've ever called out a library, or not felt truly thankful that it was there to help me. But Sequelize... man...
When I first got into Node, and got Sequelize, after working for years with .NET, Entity Framework, NHibernate and the likes, it just felt like horrible-everything. I've forced myself to use it in some projects, because node is cheap, and I kept thinking that I'm missing something. Some brilliance behind the questionable... everything. No. I can't even bring myself to think about that design mess. Sorry for the rant, Sequelize makes me feel insecure, and little, in the chaos of the Universe.
Huh, care to explain this one? I’m using Prisma with Apollo Server without doing anything fancy in my resolvers. I just assumed I’m getting N+1 issues but didn’t bother to optimize yet.
Then Nikolas Burk reached out to me and did his absolute best to convert me, but I was stuck in my ways: it HAD to behave like Eloquent or it was not good enough. And that DSL layer? No thanks, I didn't like it.
Started writing code for my own ORM when I thought, "What the heck am I doing? This gets me nowhere," and messaged Nikolas back saying I was dropping everything and planned to give Prisma a real, solid try.
I'm so glad I did. I LOVE it now. I consider it a part of my GOAT stack.
This one I will believe when I see it. It would be a general solution to cache invalidation.
They have a whole blog article about why monitoring is important, but Prisma doesn't seem to come with it: https://www.prisma.io/blog/monitoring-best-practices-monitor...
This is a common misconception that stems from our history as a company and being early contributors to the GraphQL ecosystem. With the move to Prisma 2 however, there is no native GraphQL layer in Prisma any more. I've talked about this extensively in a recent livestream on Youtube [1] if you want to learn more :) There's also this article "How Prisma and GraphQL fit together" [2] that explains the historic dimensions of this if you're interested.
> It's unclear what's the pros are of Prisma compared to TypeORM.
I guess you could argue that one benefit is the superior type-safety Prisma provides [3]. Other folks have also called out that they prefer the way data is modeled with Prisma (via the Prisma schema), the migration system, as well as the active maintenance, regular releases, the active community, the support, and the thorough documentation of Prisma. Ultimately it'll come down to your personal preference which one is more appropriate for your project :)
[1] https://www.youtube.com/watch?v=hMWMPpy4ta4&list=PLz8Iz-Fnk_...
[2] https://www.prisma.io/blog/prisma-and-graphql-mfl5y2r7t49c
[3] https://www.prisma.io/docs/concepts/more/comparisons/prisma-...
To use an ORM effectively you must learn both the API of the ORM and SQL. It does not absolve you from learning SQL, which I think was one of the unstated attractions to junior developers. Then you have to map between the API of the ORM and how it translates that to SQL under the hood. Then you have to understand how it handles caching and sessions and how that all works under the hood. This has been the source of so much complexity and so many bugs over the years that I no longer think the benefits are worth the cost. As always, there are exceptions and caveats, but I now believe it's better to use SQL directly than to use an ORM. In general there is too much complexity in all the layers of abstraction in software development these days, and I think the industry is strangely blind to the consequences of this. Complexity is death to software, and it should not be taken on lightly. The entire craft of a software engineer boils down to eliminating complexity and simplifying problems, the better you can do that, the more productive an engineer you will be.[1]
[1] https://github.com/sqljoy/sqljoy/blob/master/docs/pages/orm....
Well, I think if you're not blind to those things to some degree, you're stuck forever in a best-practices search, which I feel like more often than not ends up implying you have to rewrite your stack.
For example, I do most of my development thru Hasura now. It writes insanely performant SQL queries translated directly from GraphQL. As a data consumer, you only have to worry about the shape of the data you want, and none of the implementation details. There is one endpoint, and you pass it queries against the schema it exposes. Implementation details are abstracted heavily here, but in a "good" separation of concerns way.
I will never hand-write SQL again. Why would I? My front-ends can load everything in my data model expressively and with type safety thanks to graphql-codegen.
Now imagine you read this and realize you work in a completely different method and this answers some of your problems. You may start parallel development, but you certainly can't stop forward progress just to evaluate your stack. So I think the blinders are key to progressing in the short term despite the productivity losses you're taking in the long term, and then when you phrase it like this it is surely no surprise companies behave like that.
Can you expand on that? The chain of reasoning is probably clear to you, but I can't follow it.
> For example, I do most of my development thru Hasura now. It writes insanely performant SQL queries translated directly from GraphQL. As a data consumer, you only have to worry about the shape of the data you want, and none of the implementation details. There is one endpoint, and you pass it queries against the schema it exposes. Implementation details are abstracted heavily here, but in a "good" separation of concerns way.
I like Hasura a lot, I've advocated it at a previous company I worked for, and after many months of meetings I got it approved to add to our stack - which was then never prioritized, but that's another story.
> I will never hand-write SQL again. Why would I? My front-ends can load everything in my data model expressively and with type safety thanks to graphql-codegen.
I don't think writing GraphQL is superior to writing SQL, and again you have to think not just in terms of GraphQL but also in terms of how Hasura translates that to SQL under the hood. I am slightly biased though, because I'm working on a framework to query the db using SQL from the frontend. I'm curious how you think that would compare with Hasura: https://github.com/sqljoy/sqljoy
> Now imagine you read this and realize you work in a completely different method and this answers some of your problems. You may start parallel development, but you certainly can't stop forward progress just to evaluate your stack.
I'm not really sure how this connects to my argument about complexity, although I think the link is clear to you, can you expand on that?
I reached the same conclusion. Everything is so much simpler that way. I ask the database for the data, the database gives it to me. That's the end of it.
I used to like highly abstract libraries but the truth is they provide the lowest common denominator in features at a huge cost in complexity. Now I'd rather work as closely to the implementation as possible. I feel like I actually understand how stuff works now.
It would contrast against traditional ("stateful") ORMs which use query builders (rather than raw SQL) to return database-aware (rather than pure) objects.
The name pureORM reflects both that it is pure "ORM" (there is no query builder dimension) as well as the purity of the mapped Objects.
[1] https://github.com/craigmichaelmartin/pure-orm
I find ORMs very convenient and useful for simple queries, and especially for inserts/updates with many-to-many relations. And that is a very large part of a typical application. But you can very quickly meet the limits of an ORM with more complex queries, and I find it easier to just step down to SQL in those cases before learning the more complex parts of the ORM.
- The generated client is top-tier. Fully specified in TypeScript with intelligent types that can be extended by the user.
- The schema language is great. It provides a cohesive experience that can fully express your database structure, and it also provides a migration framework to manage that structure's evolution.
- There's no hook for implementing access control at the model level, so you end up needing to create a higher-level API around the base queries to implement these yourself, which is a fairly large investment. I'd love to see a pair of functions that can modify the query before it gets sent to the Prisma engine to add extra query values, and a second function that can filter models before they are returned to the caller.
Where Prisma falls short is in testing. The testing story in Prisma is that you can either use a live database, or you can completely stub out all calls to Prisma in your application. The latter means you end up writing a bunch of tests for implementation rather than for behavior, or you end up manually writing an in-memory database that fits the Prisma API. The former means that you need to completely recreate your database after every test case.
The discussions on GitHub seem to indicate that the old "wrap the test in a transaction" trick is forbidden, and Prisma seems architecturally set up to ensure that you can't even hack this behavior in.
All in all, we probably won't switch away from it, but I will continue to look for a better way to test my endpoints.
On testing, we're working on some documentation right now on how to mock the Prisma Client. I could see us generating a mockable client to help with this.
Bit of setup effort but well worth it IME even in situations where your tooling isn't strongly pushing you towards the approach.
2) It is at least an order of magnitude slower than not using a database at all.
Combined this makes for very slow tests. Certainly for larger applications with thousands of tests.
Having said that, I don't think there is a single right answer to the problem of testing applications that use a database and your suggestion still can be a valid solution.
What do you think of using something like msw.io for mocking your endpoints as needed?
The website is https://mswjs.io/
Anyone who has been bitten by an ORM generating inefficient SQL (and subsequent having to learn and care about SQL) knows the above not to be true.
Which, yes, is a common state for many applications to be in. Wrestling with SQL while using it very poorly in an effort to act like they're not.
[edit] disagree with the quotes statement, that is, agree with the post. I worded that poorly.
[edit edit] I've been thinking about what you get from using an RDBMS while avoiding writing or knowing much about SQL or using any of the probably-extremely-nice features of your particular SQL DB (why is everyone always so eager to be DB-agnostic? If you do it right your DB will survive, and greatly ease, several rewrites of your application!) and I'm coming up with:
1) basic locking (I assume you don't want to know how to actually use transactions, beyond what your ORM does automatically, so you're just getting the basics), and
2) some probably-badly-insufficient indices, and
3) an ORM should at least get you a little normalization if you just follow patterns from its docs & examples, I suppose, though you're gonna need to understand some SQL to get much benefit out of it in your DB design and in your use of the DB, so...
An entire RDBMS seems like serious overkill if that's all you're really using.
SQL is an impressive technology and has stood the test of time! Yet, we claim that it's not the best tool for application developers who are paid to implement value-adding features for their organizations.
SQL is complex, it's easy to shoot yourself in the foot with and its data model (relational data / tables) is far away from the one application developers have (nested data / objects) when working in JS/TS. Mapping relational data to objects incurs a mental as well as a practical cost!
This is why we believe that in the majority of cases (which for most apps are fairly straightforward CRUD operations) developers shouldn't pay that cost. They should have an API that feels natural and makes them productive. That being said, for the 5% of queries that need certain optimizations, Prisma allows you to drop down to raw SQL and make sure your desired SQL statements are sent to the DB.
I see Prisma somewhat analogous to GraphQL on the frontend, where a similar claim could be: "Frontend developers should care about data, not REST endpoints". GraphQL liberates frontend developers from thinking about where to get their data from and how to assemble it into the structures they need. Prisma does the same by giving application developers a familiar and intuitive API.
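The "mapping cost" mentioned above is concretely the by-hand step of folding flat join rows into nested objects, something like the following sketch (row and model shapes are invented for illustration):

```typescript
// The manual object-mapping step an ORM aims to remove: folding flat
// SQL join rows (user LEFT JOIN post) into nested user -> posts objects.
type JoinRow = { userId: number; userName: string; postTitle: string | null };

function nest(rows: JoinRow[]) {
  const byUser = new Map<number, { id: number; name: string; posts: string[] }>();
  for (const row of rows) {
    let user = byUser.get(row.userId);
    if (!user) {
      user = { id: row.userId, name: row.userName, posts: [] };
      byUser.set(row.userId, user);
    }
    // A LEFT JOIN emits a null post row for users with no posts.
    if (row.postTitle !== null) user.posts.push(row.postTitle);
  }
  return [...byUser.values()];
}
```

With Prisma, the claim is that a single nested query (e.g. `findMany({ include: { posts: true } })`, assuming a `posts` relation in the schema) returns this shape directly, so the fold above never has to be written.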
I'm not sure about that. We've recently switched from JavaScript based querying code to mostly raw SQL, and we've reduced our code to about 25% of what it was, and it's much simpler to understand than it was before.
> I see Prisma somewhat analogous to GraphQL on the frontend, where a similar claim could be: "Frontend developers should care about data, not REST endpoints".
I'm not sure about Prisma, but IMO that GraphQL model isn't great. Realistically (for performance, etc) it will make a difference where that data came from. Not for super simple queries, but super-simple queries are super simple to do with REST/SQL anyway. I also feel like this distinction between front-end and back-end developers isn't great.
The GraphQL approach also leads to deployment issues with non-web clients. E.g. it can take a day to get an iOS build released, and there is no way to force users of older clients to update, so this can take months. So fixing a bad query is difficult. With a proper backend you can just swap out the query.
You have two options when marrying RDBMS SQL and OO: either mismanage the relational data so that developers can use a class hierarchy, or stop using a class hierarchy to represent data.
IME, applications come and go. They get rewritten, thrown away, obsoleted. The database, however, is there to stay. Mismanaging the data purely so that the application developers don't have to touch all that icky relational stuff almost always results in more work for less returns.
Now let me count the ways in which SQL is bad:
- it composes poorly due to its unwieldy COBOL-esque syntax
- it is a leaky abstraction revealing a lot of underlying implementation tradeoffs
- it doesn’t properly implement Codd’s relational model
- it is poorly standardized with a ton of proprietary extensions and alterations present in virtually every implementation
- it is still poorly supported by tools because the model metadata lacks any standard interface to make universal tooling possible
A lot of the praise for SQL is just bandwagon hopping and cargo-cult behaviour, or a lack of vision by most people of how things could be much better.
What I object to is the blanket "shouldn't care about SQL" statement, because that's what empowers people to use the ORM indiscriminately without understanding what's under it (an understanding for which SQL is relevant). Then it's the non-value-adding developers' job to come in and untangle the mess, which usually could have been avoided by dropping down one level, looking at the SQL that was generated (or maybe analyzing the query plan, again kind of hard if you don't understand SQL), and realizing the ORM is doing something crazy.
Oftentimes it's much better to be able to look up some object by the ID of the entity from some Map than to consume endpoints returning some crazy nested partial data for the current view.
Depends on how much your app relies on client side caching and incremental sync of data.
That statement is so wrong. All ORMs suffer from leaky abstraction. And good ORMs would never hide that fact, nor would they try to deny native access to the database and force an all-or-nothing principle onto the developer.
If "application developers" care about data that is stored in a relational database, they have to care about the database itself and the access patterns (SQL).
ORMs _only_ provide convenience! They cannot substitute for knowledge of the underlying technologies. And they cannot hide complexity, only shift it into the ORM.
Sure, SQL is not perfect, and it has its flaws. But instead of learning an established, declarative language designed for data access, some developers would much rather learn new tools and libraries to avoid it. If you need to use another programming language for data access, you're off looking for another ORM. Or the next, better ORM of the year comes out.
There are stored procedures at my work that are more than 20 years old. They don’t need to change when the application code calling it is uplifted.
It’s not that much different than a modern React dev not wanting to understand how a plain HTML form works.
We should only have to care about data and not SQL. But the reality is that we very much do have to care about SQL.
I tend to avoid ORMs because of this, but unfortunately, development expectations in the current culture don't pad time enough to do so. So you're often reliant on an ORM and wait until things break before someone paying the bills will be swayed to allow developers to go in and clean things up. The "wait until it breaks to go fix and redesign everything" mentality isn't unique to ORMs though; it plagues any abstraction, framework, or library that gets enough done to supply something mostly functional on a shorter time scale.
If you're going to use a SQL database, you need to understand what it is and what rules to obey to get its benefits. Hiding that complexity away makes people think they're writing a fast, concurrent high-availability (TM) service while it's actually a ball of mud that sometimes gets into the right shape.
I think the answer of whether to use one depends on the type and load/volume of app you're working with combined with the dynamics, size, and skill level of your team(s). I'm extremely comfortable writing, profiling, query planning, and debugging SQL queries. Others aren't, and therefore having an ORM to query data in the DB with the syntax of the language you're using in your projects makes way more sense, if nothing other in order to speed your team up.
I mention this because to my mind it reinforces my belief that learning an ORM (which I've done in the past, and then never used again) isn't worth the effort; learning SQL is, if for no other reason than it's portable (and you'll have to learn it anyway when the ORM's abstraction leaks).
A quick glance at the @prisma/client code makes it seem like this pattern is still the case, since they have an "engine-core" package that downloads a binary server behind the scenes and runs it on a free port in the background.
O_____O
The biggest issue was the transactions which was a non starter for us. It wasn't very helpful when after explaining our use case and being told "we are doing it wrong" in the GH discussions, and instead were told to write rollback code manually instead of using transactions was a very poor answer.
- In development, we think we already have a better solution. Migrate will tell you when there is a discrepancy between your migrations and the actual schema of your dev database, and offer to resolve it for you.
- In production, currently, we will diagnose the problem for you. But indeed, rollbacks are manual: you use `migrate resolve` to mark the migration as rolled back or forward, but the action of rolling back is manual. So I would say it _is_ supported, not just as convenient and automated as the rest of the workflows. Down migrations are somewhat rare in real production scenarios, and we are looking into better ways to help users recover from failed migrations.
I wonder if you have looked at how it works. Because they have put in something like a decade to make it work and it's very powerful and a joy to use.
Down migrations are indeed very useful and important once you get used to it. First and foremost they give you a very strong confidence in changing your schema. The last time I told someone who I helped with django to "always write the reverse migration" was yesterday.
No way you can automatically resolve the discrepancies you can get with branched development. Partially because you can use migrations to migrate the data not just to update the schema. It's pretty simple as long as we're just thinking about adding a few tables or renaming columns. You just hammer the schema into whatever the expected format is according to the migrations on that branch. But even that can go wrong: what if I introduced a NOT NULL constraint on a column in one of the branches and I want to switch over? Say my migration did set a default value to deal with it. Hammering won't help here.
The thing is that doing the way Django does it is not that hard (assuming you want to write a migration engine anyway). Maybe you've already looked at it, but just for the record:
- they don't use SQL for the migration files, but python (would be Typescript in your case). This is what they generate. - the python files contain the schema change operations encoded as python objects (e.g. `RenameField` when a field gets renamed and thus the column has to be renamed too, etc.). - they generate the SQL to apply from these objects
Now since the migration files themselves are built of python objects representing the needed changes, it's easy for them to have both a forward and the backward migration for each operation. Now you could say that it doesn't allow for customization, but they have two special operations. One is for running arbitrary SQL (called RunSQL (takes two params: one string for the forward and one for the backward migration) and one is for arbitrary python code (called RunPython, takes two functions as arguments: one for the forward and one for the backward migration).
One would usually use RunSQL to do the tricky things that the migration tool can't (e.g. add db constraints not supported by the ORM) and RunPython to do data migrations (when you actually need to move data around due to a schema change). And thanks to the above architecture you can actually use the ORM in the migration files to do these data migrations. Of course, you can't just import your models from your code because they will have already evolved if you replay older migrations (e.g. to set up a new db or to run tests). But because the changes are encoded as python objects, they can be replayed in the memory and the actual state valid at the time of writing the migration can be reconstructed.
And when you create a new migration after changing your models, you are actually comparing your models to the result of this in-memory replay, not to the db, which is great for a number of reasons.
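The replay-and-diff idea can be sketched as follows. This is a toy schema model with hypothetical names, not how Django or Prisma actually represent schemas; it only shows the mechanism of rebuilding historical state without a database:

```typescript
// A schema is just tables mapped to their column names.
type Schema = Record<string, Set<string>>;

// Operations know how to mutate the in-memory schema description.
interface SchemaOp {
  apply(schema: Schema): void;
}

class AddField implements SchemaOp {
  constructor(
    private readonly table: string,
    private readonly column: string,
  ) {}
  apply(schema: Schema): void {
    (schema[this.table] ??= new Set()).add(this.column);
  }
}

// Replay every committed migration over an empty schema...
const history: SchemaOp[] = [
  new AddField("users", "id"),
  new AddField("users", "email"),
];
const replayed: Schema = {};
for (const op of history) op.apply(replayed);

// ...then diff against what the models declare right now.
const declared = { users: ["id", "email", "nickname"] };
const missing = declared.users.filter((c) => !replayed["users"].has(c));
// `missing` is what the next autogenerated migration must add.
```

The diff runs entirely in memory, which is why the autogeneration doesn't care what state any particular database happens to be in.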
Prisma certainly is not perfect, and whether you should use it depends on your project and individual requirements. What I can tell you is that we are shipping releases [2] with new features and improvements every two weeks. We are also very eager to learn about more use cases that people want to accomplish with Prisma. The best way to bring these to our attention is by commenting on existing GitHub issues and creating new ones if one for your use case doesn't exist yet. This helps us prioritize and implement these new features. We also have a roadmap [3] where you can see all the features that we are currently working on.
Hope that helps for now!
[1] https://www.prisma.io/docs/concepts/overview/should-you-use-...
[2] https://github.com/prisma/prisma/releases
[3] http://pris.ly/roadmap
A drop-in replacement for PrismaClient for tests would have been wonderful.
Typically you would have a set of unit tests which test your code only and use mocks for external dependencies; in this case you would mock Prisma.
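One common way to make that mocking tractable is to hide the client behind a narrow, hand-written interface and inject a fake in unit tests. The sketch below is not Prisma's API; every name is invented for illustration, and the methods are simplified to be synchronous:

```typescript
interface User {
  id: number;
  email: string;
}

// A narrow port over whatever the real client does.
interface UserStore {
  findByEmail(email: string): User | null;
}

// Business logic depends only on the interface, never on the client directly.
function isRegistered(store: UserStore, email: string): boolean {
  return store.findByEmail(email) !== null;
}

// In production this would wrap the real client; in unit tests, an
// in-memory fake is enough.
class InMemoryUserStore implements UserStore {
  constructor(private readonly users: User[]) {}
  findByEmail(email: string): User | null {
    return this.users.find((u) => u.email === email) ?? null;
  }
}

const store = new InMemoryUserStore([{ id: 1, email: "ada@example.com" }]);
```

The trade-off is that you write and maintain the interface yourself, and you lose the rich generated types inside it, which is exactly why a generated mock client would be nicer.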
Tests involving DB state are not unit tests anymore and require a bit more setup, but most frameworks provide a test harness/runner to quickly spin up part of your app with dependencies resolved (instead of mocked), so you can call controller/handler/service methods directly and verify their behavior.
The boilerplate for this should mostly be limited to switching out an environment variable for the database connection and maybe setting up some fixtures. Is this not possible in Prisma?
It's a tricky problem because our use of the type-system is quite complex.
I quite like your idea of a test mode or maybe a mock client that we generate.
Or maybe just invest in this: https://github.com/oguimbal/pg-mem/issues/27. He has already put a lot of effort into it, and I think with just a bit more support he can get it working with Prisma.
Yes please!! :)
On the one hand, you can wrap your DB operations in some kind of mockable interface(s), and then test your business logic.
On the other hand, you really need to test that the actual SQL operations work anyway (what if you screwed up a foreign key constraint?), and that wouldn't show up in your business logic tests or your in-memory SQLite or whatever.
The big differences between v1 and v2 make me uneasy. I am reminded of the constant API churn of React-Router, or the complete change from AngularJS to Angular. This approach makes me very hesitant to learn the library, because I am afraid of having the choice of painful upgrade or a dead dependency down the road.
I am also wary of the library being VC-funded, because I am afraid of what kinds of features will be held for ransom down the line once they need to start making money.