nikeee · 3 years ago
I'm not a fan of the idea of leveraging TS types at runtime. This is just a lock-in to TS, even if type annotations become a thing in JS. I don't like ORMs that use runtime types either. Most of the time, I want to write raw SQL.

So as an experiment, I created a library that statically types raw SQL:

https://github.com/nikeee/sequelts

The idea is to parse the SQL queries using TS's type system. The parsed query is combined with the database schema and therefore, we know what type the query will return.

This is especially useful due to TS's structural type system. It's also zero-overhead: the entire functionality is just TS types, which vanish as soon as we compile. It therefore also works in plain-JS projects.
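
For a flavour of how this works, here's a minimal sketch of the trick (not sequelts' actual API; the `Schema` type and the one-rule query grammar are made up for illustration): template literal types pull the column and table out of the query string and look them up in the schema.

  // Hypothetical schema type, mapping table names to row types.
  interface Schema {
    video: { id: number; title: string };
  }

  // Parse "SELECT col FROM table" at the type level and compute
  // the row type that such a query would return.
  type QueryResult<Q extends string> =
    Q extends `SELECT ${infer Col} FROM ${infer Table}`
      ? Table extends keyof Schema
        ? Col extends keyof Schema[Table]
          ? Pick<Schema[Table], Col>
          : never
        : never
      : never;

  // Inferred as { title: string }, purely at the type level.
  type Row = QueryResult<"SELECT title FROM video">;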

However, it's just a proof of concept. I'm working on an ANTLR target for TS types, so that the SQL parser can be generated. Integration with sql-template-tag [1] would also be a game changer (it would make this actually usable).

This is just for selecting data. Time will tell if it's feasible to also type-check mutating queries.

The primary use-cases will target SQLite/D1.

[1]: https://www.npmjs.com/package/sql-template-tag

throwaway858 · 3 years ago
Here is another TypeScript library for validating types of SQL queries: https://github.com/MedFlyt/mfsqlchecker

It uses the actual PostgreSQL parser against your project's database schema to give 100% accurate results. It guarantees that all of your queries are valid and that all input and output types are correct (and it will also auto-generate/quickfix the output types for you in VSCode).

nikeee · 3 years ago
This is very cool. I like that it doesn't require a build process or code generation, and especially that the project doesn't need to re-implement parsing and type inference.
seer · 3 years ago
Hah, a few months back I released https://github.com/ivank/potygen

Similar idea - statically typing queries. Mine was mostly me playing around with recursive descent parsers, seeing if I could actually parse SQL with one, and it seems to work OK, at least for Postgres.

It does require a "build" pass to generate the types, but I've added some additional bells and whistles like auto-formatters and on-hover info about columns/views/tables etc. Once you have the SQL AST it's pretty easy to build a lot of cool stuff around it.

It's all pure TS and doesn't use parser generators like ANTLR. We've been using it in prod for a while now and it seems to be working alright (it's mostly types anyway), though it does have a runtime component, since SQL parameters in node-postgres are too basic for our use case.

It all started from the amazing https://github.com/dmaevsky/rd-parse which showed me that you could build crazy complex parsers with like 100-200 lines of code.

norman784 · 3 years ago
I don't agree with this being a con:

> Requires knowledge about SQL & your database

For me that's a pro, because it is transferable knowledge.

And

> No type safety

Is a bummer. If you could check at compile time, or some other way, that your queries are valid, that would be cool. In Rust there is a crate that does exactly that, sqlx [0]. And besides SQL being verbose, I found it so easy and enjoyable to work with: it's so easy to know exactly what a query does, while with ORMs it's easy to end up with a query that's hard to understand, where the only way to be sure is to run it and print the generated SQL.

[0] https://crates.io/crates/sqlx

ntonozzi · 3 years ago
PgTyped is another high quality alternative: https://github.com/adelsz/pgtyped.
nickreese · 3 years ago
Using PGTyped daily. Works well. Wish nullable columns could be coalesced though.
gmac · 3 years ago
Looks like a great start.

My own library Zapatos[1] is less ambitious (it doesn't try to parse raw SQL) but has similar goals. It also has no truck with runtime types.

[1] https://jawj.github.io/zapatos/

gunshowmo · 3 years ago
I've used Zapatos and can vouch for its effectiveness. It doesn't bind you into any specific way of setting up your database or defining models, but provides an extremely easy way to integrate with other TS libraries in order to get typed queries.

Thank you for your incredible work on this.

phpnode · 3 years ago
author of ts-sql[0] here, this looks great (and a way more practical approach!)

[0] https://github.com/codemix/ts-sql

chrisjc · 3 years ago
This is very slick!

> Time will tell if it's feasible to also type-check mutating queries.

What do you mean by "mutating queries"? When the underlying schema changes? For instance when a new column is added to the `video` table, or if `video.id` changes to a varchar?

I haven't dived in deep enough yet, but is this using the DB engine's EXPLAIN query? I guess my question is: how does it interrogate the underlying db/schema if the schema isn't supplied as code in the setup? Or at least, how would you envision all that happening?

nikeee · 3 years ago
By mutating queries I mean UPDATE, INSERT and DELETE.

It should be possible to parse those and check if they obey the schema, so raw SQL that updates/inserts/deletes data could also be checked.
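
To illustrate (a hypothetical type-level sketch, not sequelts code; the `Schema` type is made up), an INSERT's column list could be split apart at the type level, with each listed column required to exist on the target table:

  interface Schema {
    video: { id: number; title: string };
  }

  // Split a comma-separated column list into a union of column names.
  type Columns<S extends string> =
    S extends `${infer Head}, ${infer Rest}` ? Head | Columns<Rest> : S;

  // true if the table exists and every listed column belongs to it.
  type ValidInsert<Q extends string> =
    Q extends `INSERT INTO ${infer Table} (${infer Cols}) VALUES ${string}`
      ? Table extends keyof Schema
        ? Columns<Cols> extends keyof Schema[Table] ? true : false
        : false
      : false;

  type Ok = ValidInsert<"INSERT INTO video (id, title) VALUES (1, 'a')">; // true
  type Bad = ValidInsert<"INSERT INTO video (id, owner) VALUES (1, 2)">; // false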

czx4f4bd · 3 years ago
Wow, I was literally thinking about this like a week ago, wondering how viable it would be. This is great work. The ability to compile ANTLR to TypeScript types would potentially be a game-changer.
phpnode · 3 years ago
I just want to say that this looks fantastic, especially the runtime types stuff - the idea that types are values hasn't caught on yet, but it's great to see movement in this area. A long time ago I did something very similar to this for Flow (https://gajus.github.io/flow-runtime/#/try), but TypeScript is a much more suitable target. Congratulations on an amazing job; I really hope you have plans to sustain this project for a long time, because it's very ambitious but very compelling.
szines · 3 years ago
Great to see a new option in the TypeScript ecosystem. A fully featured framework. Brilliant. Could be a great alternative to Adonis.js, which is one of my favourites nowadays: https://adonisjs.com/
lloydatkinson · 3 years ago
TypeScript's type system can be very powerful and flexible, creating types at build time, in a manner of speaking. Here's an article I wrote on one aspect of this, mapped types. You can see how types can be composed of other types.

https://www.lloydatkinson.net/posts/2022/going-further-with-...
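
As a tiny taste of the idea (a generic example, not taken from the article): a mapped type transforms each property of one type to build another.

  interface Settings {
    darkMode: boolean;
    telemetry: boolean;
    autoSave: boolean;
  }

  // Derive a new type from Settings: same keys, but optional and readonly.
  type FeatureFlags = {
    readonly [K in keyof Settings]?: boolean;
  };

  const flags: FeatureFlags = { darkMode: true }; // remaining keys may be omitted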

alephnan · 3 years ago
This project is attempting to boil the ocean, but I'm skeptical it can do each of these things well, considering the dozen listed libraries don't have documentation. Including:

- D.I. https://deepkit.io/documentation/injector

- Topsort "A fast implementation of a topological sort/dependency resolver with type grouping algorithm" https://deepkit.io/documentation/topsort

- Templates https://deepkit.io/documentation/template

I'll stop there, I'm too lazy to do their work for them. Other red flags

- why is a very specific sorting algorithm bundled in?

- it's claimed their RPC implementation is 8x faster than gRPC, specifically gRPC-js. Don't you think Google has the resources and the incentive to make gRPC faster? It's likely this is not an apples-to-apples benchmark.

- MongoDB ORM. It's a fine database, but it's also a proxy indicator of web developers who only breathe JavaScript.

- Message broker. Very opaque, but it's likely this is coupled to a very specific pubsub provider.

- API console for testing HTTP calls. This is a solved problem, with tools for testing APIs like Postman. The fact that the team is trying to solve all sorts of problems, each of which is a standalone product, makes me feel they are trigger-happy at reinventing the wheel.

- For the view layer, it seems hard-coded to Angular. That's probably a no-go unless you don't care about what component library framework you have to use.

This seems to be a single-author project. Kudos for being ambitious, but I would heed caution at trying to solve every problem all at once. Maybe lean into the ORM aspect and make that best in class first.

Vinnl · 3 years ago
> MongoDB ORM. It's a fine database, but it's also a proxy indicator of web developers who only breathe JavaScript.

Yet the author appears to be writing the parser in C++: https://mobile.twitter.com/MarcJSchmidt/status/1534323199147...

vosper · 3 years ago
> considering the dozen listed libraries don't have documentation.

I don’t think this is a “Show HN” or “Launch HN” posting. It’s possible the author is still working on code and docs, and has no idea the site’s been posted here.

marcjschmidt · 3 years ago
That's right. It's still being worked on, so a lot of documentation doesn't exist yet. But work is also being done in parallel on a book/new docs that cover all areas.
christophilus · 3 years ago
It’s an in-progress project. Regarding topsort, my guess is the algorithm was useful for some part of the project, and the author decided to make it available generally.

> Don't you think Google has the resources and the incentive to make gRPC faster?

You’d think that, but A) Gmail, and B) having worked for a Google competitor, I can assure you a passionate developer can write more performant code than what a massive corporation will generally produce.

> I would heed caution at trying to solve every problem all at once.

Agreed, but I hope the author throws caution to the wind. I’m tired of having either A) 1 million npm dependencies or B) boiling the ocean in my own projects, so it’d be nice to have a batteries included project you can rely on.

marcjschmidt · 3 years ago
Yes, it covers a lot. But in my eyes that is necessary to provide high performance throughout the whole application, and necessary to offer something that is fundamentally unique. It's a matter of providing very fast basic building blocks. For example, you cannot make ORMs faster without first having a very fast deserialiser, or a fast RPC implementation without very fast binary validation and encoding. So it all works together, but is decoupled and split into multiple packages so they can be used separately.

A lot of stuff is not yet documented, but it will be. We are already writing new documentation/a book, roughly 40% done, which covers a lot more than what is currently on the website, but this HN post came at a somewhat bad time.

> - it's claimed their RPC implementation is 8x faster than gRPC, specifically gRPC-js. Don't you think Google has the resources and the incentive to make gRPC faster? It's likely this is not an apples-to-apples benchmark.

Yes, gRPC is fast. But gRPC-js is not. I invite you to test it yourself; all our benchmarks are in the git repository, and how to run them is explained here: https://deepkit.io/documentation/benchmark. Feel free to join our Discord so I can guide you through the process. As for why gRPC-js is slower: it uses Protocol Buffers, which are binary, and parsing binary is notoriously slow in JavaScript. By utilising the runtime type information of TypeScript we can JIT-generate binary encoders/decoders for very specific types, which are much faster.
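
To give a flavour of that technique (an illustrative sketch only, not Deepkit's actual code; a real codec is binary and must also handle escaping, nesting, etc.): from a list of field descriptions we compile a specialised encoder once, so the hot path does no per-field reflection.

  type FieldMeta = { name: string; kind: 'number' | 'string' };

  // Compile a JSON encoder specialised for one object shape via new Function.
  function compileEncoder(fields: FieldMeta[]): (obj: any) => string {
    const parts = fields.map(f =>
      f.kind === 'number'
        ? `'"${f.name}":' + obj.${f.name}`
        : `'"${f.name}":"' + obj.${f.name} + '"'`
    );
    const body = `return '{' + ${parts.join(" + ',' + ")} + '}';`;
    return new Function('obj', body) as (obj: any) => string;
  }

  const encodeUser = compileEncoder([
    { name: 'id', kind: 'number' },
    { name: 'username', kind: 'string' },
  ]);
  // encodeUser({ id: 1, username: 'ada' }) === '{"id":1,"username":"ada"}'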

> - why is a very specific sorting algorithm bundled in?

Why is providing something as a dedicated package a red flag? It's fundamentally necessary for a UoW ORM, for example, and very useful on its own. My topsort package for PHP has 4 million installations, so it's only logical to bring the same functionality to TypeScript.

> Message broker. Very opaque, but it's likely this is coupled to a very specific pubsub provider.

It uses its own server, but that will probably be replaced soon. Again, the post came at a somewhat bad time.

> - API console for testing HTTP calls. This is a solved problem, with tools for testing APIs like Postman

It doesn't solve what Postman solves. It's a way to automatically document your HTTP and RPC API, plus it allows you to execute/try them right in the browser. Swagger UI comes closer to what API Console is.

> - For the view layer, it seems hard-coded to Angular

There is actually no view layer. It has a server-side DI-enabled template engine based on JSX and a desktop UI kit (which is based on Angular). Both have their own use-cases and neither is required.

stareatgoats · 3 years ago
This looks really interesting. The possibility to define runtime types basically removes the need to write data-object validation blocks, greatly alleviates database sync, etc. I'm sure I'm not the only one to have implemented runtime types on top of TypeScript for this reason, albeit not at this low a level, and therefore incurring a certain performance penalty.

So performance would be one reason to choose Deepkit for me. The integrated framework looks interesting too. Something that makes me hesitant is the relatively small user base and size of the community (mainly one single developer?), considering this would basically be committing to a new programming language. And if the TypeScript team should decide to do anything similar, then the project might be dead in the water?

ConsoleTVs · 3 years ago
I never understood the "single developer" argument. React itself does not even have more than 5 active developers (https://github.com/facebook/react/graphs/contributors); Vue literally has a single one (https://github.com/vuejs/vue/graphs/contributors). You can apply this to almost any library. I mean, what's that argument for?
stareatgoats · 3 years ago
It means that the project depends on the whims or life events of a single person - a single point of failure, if you like: critical patches may not be merged, the maintainer might turn malicious, etc. More people on the project means more eyeballs on the code. This seems self-evident to me (especially if the project is core to one's tech stack), so I'm really surprised at the "what's the argument for?". Am I missing something?

(I don't see how you can cite Vue as an example; Evan is the main contributor to Vue, but he is certainly not the only one.)

laurent123456 · 3 years ago
This seems very interesting, and I feel such a cohesive framework is needed in JS/TS as an alternative to cobbling together various unrelated packages.

On the other hand, I wonder what the level of support for each of these libraries will be; the ORM or the desktop UI alone seems like a lot of work.

vcryan · 3 years ago
One man's cobbling together is another's choosing the best library for the job ;)
oehpr · 3 years ago
How on earth did selecting coherent, focused, modularized libraries become taboo for so many programmers?

I will never understand. Nor sympathize.

vonnik · 3 years ago
I just want to put a word in for Marc Schmidt. He's a great engineer, with great product sense. This project will move fast, and if you think it's missing something essential, it's probably on the roadmap and will be realized quickly.
marcjschmidt · 3 years ago
thank you so much!
tekkk · 3 years ago
It looks really cool, but I feel it's too much to take in one bite. Making the framework do (almost) everything makes me think it won't be able to cover all the nuances of single-purpose libraries. I'd be much more open to trying it if it were just a general wiring around Fastify & Prisma for a better Node.js backend, rather than one tool to do everything.
RoryH · 3 years ago
They do have the option to just pick and choose from a variety of libraries: https://deepkit.io/library. I like the look of @deepkit/type
marcjschmidt · 3 years ago
That is a non-goal. One of the biggest advantages of Deepkit and its runtime type feature is that you can reuse TypeScript types throughout your whole application stack. This does not work with Fastify and Prisma. Let me give you an example.

Let's say you build a little user management app and need a `User` type. So you define one:

  interface User {
    id: number;
    username: string;
    created: Date;
    email: string;
  }

If you want to build a REST API that returns that user, you can just do that:

  router.get('/user/:id', (id: number & Positive): User => {
    //...
  });

Deepkit automatically serialises the object into JSON using the type information in `User`.

You want to accept a User object for creating a user in the database? No problem:

  router.post('/user', (user: User): User => {
    //...
  });

You probably want to enrich the interface with additional constraints:

  interface User {
    id: integer & Positive;
    username: string & MinLength<3> & MaxLength<32>;
    created: Date;
    email: Email;
  }

And with that, the framework automatically validates all incoming requests against the User object and the validation constraints it contains. It also deserialises it automatically from JSON, for example. If `User` is a class, it instantiates a class instance.

But you don't want to require all fields; id and created should be set by the server:

  router.post('/user', (user: Omit<User, 'id' | 'created'>): User => {
    //...
  });

This works equally well. And the best part: you get API documentation for free with the Deepkit API Console, just like you know from Swagger UI.

Ok, but we need to save the User in our database, right? We do not need to duplicate the User in yet another language like with Prisma. Just re-use the User interface and annotate the fields.

  interface User {
    id: integer & PrimaryKey & AutoIncrement;
    username: string & MinLength<3> & MaxLength<32> & Unique;
    created: Date;
    email: Email & Unique;
  }

We still use the very same interface, but have added additional meta-data so you can use that exact same type now also for the ORM.

  router.get('/user/:id', async (id: number & Positive, db: Database): Promise<User> => {
    return await db.query<User>().filter({id}).findOne();
  });

Next, let's go to the frontend. We obviously have to request the data from our REST API. We can just do that and reuse the `User` interface once again.

  const userJson = await (await fetch('/user/2')).json();
  const user = cast<User>(userJson);

`cast` automatically validates and deserialises the received JSON. We import `User` wherever needed, and this works because the meta-data is not tightly coupled to any library, so you won't pull in, for example, ORM code just because you use database meta-data. You can literally use TypeScript's powerful type system to your full advantage in many small to super-complex use-cases and let the framework handle all the validation, serialisation, and database stuff.

And like that you are able to reuse types everywhere: from database to HTTP router, configuration, RPC, API documentation, dependency injection, message broker, frontend, and more. That's not possible in this form in any other framework, or when you combine lots of third-party libraries and glue them together manually.

Deepkit separated the functionality into libraries, which would allow you to use their features in Fastify and Prisma too, but that would mean you lose one of the biggest advantages of all: reusing types.

erikpukinskis · 3 years ago
IMO reusing types is an anti-pattern. It leads to knots of functions which are all tightly coupled since they “use the same types” even though they actually only need small subsets of those types. Changing one of these functions will then often break several of the conjoined functions.

When you start to hear people say of their PRs “it works but I just have to fix the types”, that usually means the codebase is trying to re-use too many types. In my experience.

The beauty of TypeScript is that you can have two different very narrowly specified types and the compiler will tell you if they’re compatible or not.

Reusing types is throwing away the most valuable feature of TypeScript.

For me, the type signature is part of the function signature and therefore should not be re-used.

Imagine what a disaster it would be if function signatures were reusable:

    sig UserDatabase(user, db)

    create: UserDatabase =>
      db.write("users", user)

    update: UserDatabase =>
      db.get("users", user.id).update(user)

It’s too much. DRY can be taken too far.

tekkk · 3 years ago
Ok, you got me sold on the idea: being able to configure just one schema for everything, with validation and deserialization, does sound amazing. And I really appreciate the detailed reply!

While it does sound quite great, I have to try it in practice to see whether it would work for me. To prevent framework fatigue I really try not to switch frameworks too often, which is why I'd like to be able to reuse what I already know. And your scope sounds really big. Yet I've always liked how the TS/JS community keeps pushing the envelope!

rattray · 3 years ago
> Next, let's go to the frontend … const user = cast<User>(userJson);

Wait, how can Deepkit's `cast` work on the frontend? Do you compile to WASM?

A quick skim of the current Deepkit homepage and intro blog post didn't turn up any hints about this.

nerdponx · 3 years ago
This looks to me a lot like the FastAPI framework in Python. Thanks for the demo.

> This does not work with Fastify and Prisma

The point of Prisma is that it generates types for you from the Prisma schema, right? Are the generated types not interchangeable throughout the application?

m00dy · 3 years ago
One interesting thing is that it looks like the built-in debugger can also do static analysis and draw a call graph of the execution context.

[0]. https://deepkit.io/assets/screenshots/debugger-http-light.pn...