I love Kysely! I use it for a couple of production applications and it works quite well. Type support is better than Knex, and Kysely is really lightweight so I can use it in my projects without being concerned about performance issues.
Really, when you look at options like this, you start to break them down into 3 distinct categories:
1. Raw adapter usage - writing SQL. Performant, but can get tedious at scale, and weird to add types to.
2. Knex/Kysely, lightweight query builders. Readable, composable, and support types well, but a step removed from the performance of (1). Some would argue (1) is more universally understandable, too, although query builders make things easy for those more familiar with programming languages than query languages.
3. Full-fledged ORMs like TypeORM, Sequelize, Prisma. Often require much more ecosystem buy-in and come with their own performance issues and problems.
I usually choose 1 or 2, depending on the project, to keep things simple.
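To make the contrast between (1) and (2) concrete, here's a minimal sketch, assuming a Postgres `users` table with `id`, `email`, and `active` columns (all names invented for illustration):

```ts
import { Pool } from 'pg';
import { Kysely, PostgresDialect } from 'kysely';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// (1) Raw adapter usage: fast and direct, but the row type is asserted
// by hand rather than checked against the actual query text.
const raw = await pool.query<{ id: number; email: string }>(
  'select id, email from users where active = $1',
  [true],
);

// (2) The same query through a builder: table and column names are checked
// against the schema interface, and the row type is inferred.
interface Database {
  users: { id: number; email: string; active: boolean };
}

const db = new Kysely<Database>({ dialect: new PostgresDialect({ pool }) });

const rows = await db
  .selectFrom('users')
  .select(['id', 'email'])
  .where('active', '=', true)
  .execute();
```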
I have pretty much had no issues with it so far. The only thing I would call out is that you _must_ run a migration initially to set things up, or your queries will hang. This has stumped me a few times (despite being obvious after the fact). It also interfaces really well with Postgres and has nice support for certain features (like jsonb).
You might also want to consider pgTyped (https://github.com/adelsz/pgtyped). It's supposed to make SQL and TS work together. I haven't gotten around to using it yet but I hear good things.
I can weigh in here: we use pgTyped heavily at work, and it's really good. There are some constraints with the type inference on complex queries, but it's a decent trade-off IMO.
I mostly do work on the Python side of things, and SQLAlchemy is the de facto ORM there. I hate it. It's heavy. Opinionated. The happy path seems to be doing things in sessions, where joins, foreign keys, and the like are evaluated in the app instead of at the DB level (one can define relationships in models for the ORM to understand but not define explicit foreign-key relationships at the DB table level, wtf? Thereby doing joins based on common columns…). And yet I can't fault ORMs for their ability to get you going fast and help you iterate very quickly.
I recently had the chance to start a new side project, and trying to go the pure-SQL route, which I love, was so slow in terms of productivity. When I could just model the tables via SQLAlchemy, I was able to get to the meat of the thing I was making much quicker. What I didn't like was all the additional cognitive overhead, but I gained DB-agnostic ways of querying my data: I could use, say, SQLite for testing and Postgres for staging or production by just changing a config, whereas if I write pure SQL I might get into issues where the dialects differ, so that flexibility is not there.
In the end I am very conflicted. Oh, some context: I began my professional life as a DBA, and my first love was SQL. I like writing queries and optimizing them and knowing exactly what they're doing, etc.
In production we use a variant of option 1 [0]. Why? Because:
* it's SQL
* it's extremely lightweight (built on pure, functional combinators)
* it lets us use more complex patterns, e.g. a convention where every JSON field name ends with Json and is parsed automatically, which, unlike the datatype alone, lets us compose queries that fetch arbitrarily nested graphs; plus a convention promoting a single [$] key, so that `select email as [$] from Users` returns a list of emails as `string[]` rather than `{ email: string }[]` (a generic sketch of both conventions follows this list)
* it has convenience combinators for things like constructing where clauses from MongoDB-like queries
* all the usual queries (CRUD, exists, etc.), plus some that are more complex SQL-wise but simple function-API-wise (insertIgnore for a list of objects, merge1n, upsert, etc.), have convenient function APIs and allow composing whatever more the project needs
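This is not their library's actual API, but a generic sketch of what the two conventions above amount to at the row-mapping level (all names invented):

```ts
// Hypothetical row mapper (not the real appliedblockchain/tsql API)
// illustrating the two conventions described above.
type Row = Record<string, unknown>;

function mapRow(row: Row): unknown {
  const keys = Object.keys(row);
  // Single [$] key: `select email as [$] from Users` returns bare values,
  // so the result set becomes string[] instead of { email: string }[].
  if (keys.length === 1 && keys[0] === '$') {
    return row['$'];
  }
  // Json suffix: columns named fooJson arrive as JSON text and are parsed.
  const out: Row = {};
  for (const [key, value] of Object.entries(row)) {
    out[key] =
      key.endsWith('Json') && typeof value === 'string'
        ? JSON.parse(value)
        : value;
  }
  return out;
}

// mapRow({ $: 'a@example.com' })           -> 'a@example.com'
// mapRow({ id: 1, tagsJson: '["a","b"]' }) -> { id: 1, tagsJson: ['a', 'b'] }
```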
We resort to runtime type assertions [1], which work well for this and all other I/O; runtime type assertions are necessary for cases when your running service is incorrectly attached to an old or future remote schema/API (there are other protections against that, but it still happens).
[0] https://github.com/appliedblockchain/tsql
[1] https://github.com/appliedblockchain/assert-combinators
I would prefer option 1 if it had full type safety. I'm imagining some sort of build process that looks for any .sql files in the project and spits out .sql.d.ts files in the same directory, with the input and output types according to the current schema of the database. Another nice thing about a setup like this, imo, would be that the .sql files would have the full support of the editor for completions and execution, unlike SQL fragments in a TypeScript file.
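Purely as a sketch of that idea (every name and shape here is invented; pgTyped, mentioned upthread, generates typed query functions from SQL files in this spirit), the generated declaration might look like:

```ts
// Hypothetical contents of a generated getUserEmails.sql.d.ts for
//   select id, email from users where created_at > $1
// with types derived from the current database schema.
export interface GetUserEmailsParams {
  createdAt: Date;
}

export interface GetUserEmailsRow {
  id: number;
  email: string;
}

// The generated module exposes the query as a typed function, while the
// .sql file itself keeps full editor support for completions and execution.
export default function getUserEmails(
  params: GetUserEmailsParams,
): Promise<GetUserEmailsRow[]>;
```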
F# can do this, either as part of the compilation via type providers [1] or, in a more lightweight manner, via Roslyn analyzers embedded in the editor [2].
Unsurprisingly, Haskell can also do this via Template Haskell [3], but I haven't used it.
[1] https://github.com/demetrixbio/FSharp.Data.Npgsql
[2] https://github.com/Zaid-Ajaj/Npgsql.FSharp.Analyzer
[3] https://hackage.haskell.org/package/postgresql-typed
I made a tool that generates TypeScript types out of a live Postgres database. I've had a request for Kysely support ([link redacted]), but I more or less forgot about it. I would love to hear whether that would be helpful.
Dang, how do they implement the “parse text strings and generate types immediately available in auto-complete” thing?? I can see how you could do that with Rust macros, but how do they do it in TS?
E.g. this thing:
blah.select(['pet.name as pet_name'])
is inferred to return a type with a `pet_name` field, parsing the contents of the SQL expression?
[Update]: whatafak lol, TS has way more juice in it than I realized: https://github.com/koskimas/kysely/blob/master/src/parser/se...
You can use these template literal types + infer to build an entire SQL parser. I did a POC that infers SQL query types by parsing the SQL query on a type level:
https://www.typescriptlang.org/docs/handbook/2/template-lite...
https://github.com/nikeee/sequelts
Building this parser is pretty cumbersome, and supporting multiple SQL dialects would be lots of pain. While I'm not a fan of query builders per se, Kysely pretty much covers everything my POC tried to cover (except the zero runtime overhead). However, with Kysely you get the option to use a different DBMS in tests than in production (pg in prod, SQLite in tests), which is a huge benefit for a lot of people. sequelts was designed to work with SQLite only. And it's a hack.
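For a taste of the technique, here's a tiny, self-contained slice of it: template literal types plus `infer`, turning a select list into a result type at compile time (real implementations also map each column to its actual value type via a schema interface; `unknown` stands in for that here):

```ts
// Extract the result key from a single selection string.
type AliasedColumn<S extends string> =
  // 'pet.name as pet_name' -> 'pet_name'
  S extends `${string} as ${infer Alias}` ? Alias :
  // 'pet.name' -> 'name'
  S extends `${string}.${infer Name}` ? Name :
  S;

// Build a row type from a tuple of selections.
type Row<Selections extends readonly string[]> = {
  [S in Selections[number] as AliasedColumn<S>]: unknown;
};

// Inferred as { pet_name: unknown; id: unknown }.
type Result = Row<['pet.name as pet_name', 'id']>;
```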
The TypeScript type system is insane; I solved quite a few Project Euler tasks in the type system itself (from Peano numbers up). I learned to never question the TS community.
This is really cool, will look into using it in future projects!
I also made a tool (https://github.com/vramework/schemats) that generates the types directly from the DB, which means whenever you do a DB migration your database types automatically update. It was forked from the original schemats library a couple of years ago.
I also created a lightweight library on top of pg that is less of a query builder and more of a typed CRUD layer plus SQL for non-trivial queries (https://github.com/vramework/postgres-typed). Most queries I deal with day to day are simple CRUD, so I find it a little easier, but it's much less powerful than Kysely! I fall more into the camp of writing complex queries in SQL with small helpers and writing simple ones with util functions and TypeScript.
Edit: I will be looking into cleaning up docs and tests next month. Right now everything is in the README and examples.
What a beautiful API. As a Knex user I appreciate the design influence of Knex, which similarly does not try to give you a full ORM, just a structured builder for SQL queries. But I agree that Knex was always a little weird in that it only executes the query when it is being listened for (so `query.toString()` gets you the stringified query, while `await query` actually executes it), and it had some other quirks (in particular, different ways of specifying the root table's name at the position in the FROM depending on what you were doing).
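For anyone who hasn't hit that quirk, a small sketch of it (table and columns invented):

```ts
import knex from 'knex';

const db = knex({ client: 'pg', connection: process.env.DATABASE_URL });

// Building a query sends nothing to the database yet.
const query = db('users').where('active', true).select('id', 'email');

// Stringifying it still doesn't execute anything:
console.log(query.toString());
// -> select "id", "email" from "users" where "active" = true

// Only awaiting the thenable (attaching a listener) actually runs it.
const rows = await query;
```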
The TypeScript integration is nice too; I've also treated TS this way, as “programmable autocomplete for VS Code.” I will say that doesn't usually make for super-maintainable code, but that's not an issue for the 0.x.x releases, of course.
Reminds me of Prisma in its type safety; any major differences? I see that it's a query builder, but when I used query builders in the past, they honestly didn't feel that different from ORMs.
In Rust, there is sqlx, which lets you write SQL but checks at compile time whether the SQL is valid for the database, by connecting to the database, performing the transaction, then rolling back, picking up and displaying any errors along the way.
Now, with Prisma, I like that it provides one unified database schema that I can commit into git and transport across languages. This avoids the problem of overlapping migrations from team members simultaneously working on separate branches that then need to be merged back in: with a git-compatible schema, those just become ordinary merge conflicts to handle. I recently ported a TypeScript project to Rust, and the data layer at least was very easy thanks to this. I used Prisma Client Rust for it; that's the beauty of having a language-agnostic API, you can generate your own clients in whatever language you want.
How does this compare to, say, Prisma? I want to write SQL more than I want to write JavaScript. I got really hung up on writing joins with Prisma and don't want to use a raw query. How would this compare, assuming they're comparable? Thanks in advance.
Prisma is good for writing simple selects, updates, and deletes. But the moment you need to write any kind of advanced query with joins, nested queries, unions, etc. it gets frustrating very quickly.
Kysely and Knex are far more flexible for writing complex queries and don't get in your way.
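For example, here's roughly what a join looks like in Kysely; the schema is invented, but the builder calls are the documented ones, and the result type is inferred from the select list:

```ts
import { Kysely, PostgresDialect } from 'kysely';
import { Pool } from 'pg';

// Invented schema; with kysely-codegen this interface comes from the DB.
interface Database {
  person: { id: number; name: string };
  pet: { id: number; owner_id: number; name: string };
}

const db = new Kysely<Database>({
  dialect: new PostgresDialect({
    pool: new Pool({ connectionString: process.env.DATABASE_URL }),
  }),
});

// An inner join written roughly the way you'd write the SQL; the result
// type is inferred as { name: string; pet_name: string }[].
const rows = await db
  .selectFrom('person')
  .innerJoin('pet', 'pet.owner_id', 'person.id')
  .select(['person.name', 'pet.name as pet_name'])
  .execute();
```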
I see a Kysely + kysely-codegen (generates types from the DB schema) setup as comparable to Prisma in TS integration, with the added flexibility/closeness to SQL of the query builder.
If you:
- have used/liked Knex (or similar querybuilders) before
- like the TS integration + type safety of Prisma
- but find Prisma to be a bit too magic/heavy with its query engine and schema management
- and/or just want to be closer to SQL
then Kysely is what you're looking for.
The main difference to me is that with Prisma I don't need to write any SQL to get a project up and running. I can define the tables and relations from my code or from Prisma Studio, and in one swoop I get the DB structure done and the types defined.
I know my way around a database, but I'd rather not leave my code editor whenever I need to add a new column to a table.
With Kysely you have to create the DB schema, and then write the types; with every change you need, you gotta do both again.
(At least this seems true by default; as the project's readme mentions, there is a code generator[1] to generate the types from the DB schema; not quite the same but at least it's better than nothing.)
Commenters say that writing complex queries with Kysely is easier, which makes me wonder if I could use Prisma for everything except those, since Kysely should be able to just generate the SQL query for me to hand to Prisma...
[1]: https://github.com/RobinBlomberg/kysely-codegen
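That combination should work: Kysely query builders have a `compile()` method that returns the SQL string and its parameters without touching the database, and Prisma can execute arbitrary SQL via `$queryRawUnsafe`. A hedged sketch (schema invented):

```ts
import { Kysely, PostgresDialect } from 'kysely';
import { Pool } from 'pg';
import { PrismaClient } from '@prisma/client';

// Invented schema, only used for type-checking the builder.
interface Database {
  person: { id: number; name: string };
  pet: { id: number; owner_id: number; name: string };
}

// Kysely acquires connections lazily, so building + compiling a query
// here never actually touches the database.
const qb = new Kysely<Database>({
  dialect: new PostgresDialect({ pool: new Pool() }),
});

const compiled = qb
  .selectFrom('person')
  .innerJoin('pet', 'pet.owner_id', 'person.id')
  .select(['person.name', 'pet.name as pet_name'])
  .where('person.id', '=', 1)
  .compile();

// Hand the generated SQL ($1-style placeholders on the pg dialect) and
// its parameters to Prisma for execution.
const prisma = new PrismaClient();
const rows = await prisma.$queryRawUnsafe(compiled.sql, ...compiled.parameters);
```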
As someone who uses both Prisma and Knex (so, something like Kysely):
Prisma is very, very nice; it has maybe the best DX out there. My issue with it is performance: it is much slower than query builders or raw SQL. It's also a huge black box; although I haven't had any issues, you're dealing with a complex beast.
Knex (and other query builders) is nicer than raw SQL, has good performance, and is fairly transparent (it's not an 800-pound gorilla like Prisma).
I think that is exactly what I like about this. With Prisma, optimising your code is very hard because you cannot just customise joins like you can here. From the example, it seems you can create joins, and the documentation shows you what the actual SQL will look like.
I still have Prisma running on my projects, so it will be a bit hard to move now, particularly because it has TS-native migrations, which is another issue: if I wanted to use those outside of TypeScript (say, from another service or middleware), it would be very hard.
I started using zapatos in a project but eventually moved away from it. Its shortcuts and lateral join support are nice, but you very often need to write raw SQL, manually provide types, and ensure they are correct for all these cases.
It is also possible, but awkward, to use interpolations to construct very dynamic queries where, based on filter conditions, you need different joins or unions, etc. I looked at some of the solutions that infer TS types from SQL queries, but eventually felt it was more maintainable to keep the dynamic query generation on the TS side.
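A sketch of what that looks like with an immutable builder like Kysely (schema and filter shape invented): since each call returns a new query, the dynamic parts live in ordinary TypeScript control flow.

```ts
import { Kysely, PostgresDialect } from 'kysely';
import { Pool } from 'pg';

interface Database {
  person: { id: number; name: string; active: boolean };
}

const db = new Kysely<Database>({
  dialect: new PostgresDialect({ pool: new Pool() }),
});

interface Filter {
  name?: string;
  onlyActive?: boolean;
}

// Each .where() returns a new builder, so conditions, joins, etc. can be
// added conditionally without string interpolation.
async function findPeople(filter: Filter) {
  let query = db.selectFrom('person').selectAll();
  if (filter.name !== undefined) {
    query = query.where('name', '=', filter.name);
  }
  if (filter.onlyActive) {
    query = query.where('active', '=', true);
  }
  return query.execute();
}
```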
I found ts-sql-query [1] to be much better suited for my use cases. It is very feature-rich and has very good support for dialect-specific features in all mainstream databases. Also, Juan (the author) is very helpful with questions and suggestions.
[1] https://ts-sql-query.readthedocs.io/en/stable/
I was wondering what it meant. It's too bad English has a different pronunciation for "y", because the specified pronunciation Key-Seh-Lee means something totally different in Finnish (kiisseli, which is a type of dessert, a thick fruit soup that was a common tradition in my family).