Zod is great in terms of API, but a no-go in terms of performance.
We ended up writing Babel plugins (https://github.com/gajus/babel-plugin-zod/) and using them along with zod-accelerator, which improves performance but breaks in various edge cases.
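For context, a minimal sketch of the kind of hot path where per-call schema interpretation adds up and a compile-ahead approach helps. The schema and batch shape are hypothetical and only standard Zod APIs are used; zod-accelerator's own API is not shown here.

```ts
// Hypothetical event schema; only standard Zod APIs (z.object, safeParse).
import { z } from 'zod';

const eventSchema = z.object({
  id: z.string().uuid(),
  type: z.enum(['click', 'view', 'purchase']),
  timestamp: z.number().int(),
  payload: z.record(z.string(), z.unknown()),
});

type Event = z.infer<typeof eventSchema>;

export const parseEvents = (batch: unknown[]): Event[] => {
  const events: Event[] = [];
  for (const raw of batch) {
    // Each safeParse call walks the schema tree at runtime; on large
    // batches this interpretation overhead is what a Babel-plugin /
    // accelerator approach tries to compile away ahead of time.
    const result = eventSchema.safeParse(raw);
    if (result.success) {
      events.push(result.data);
    }
  }
  return events;
};
```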
Choose Redis when:

- You need very fast search on a relatively small dataset.
- Your search patterns are simple and well-defined.
- You're already using Redis as your primary database.
- Low latency is critical.

Choose Elasticsearch when:

- You have large amounts of text data.
- You need advanced search features and relevance tuning.
- Your data is semi-structured or unstructured.
- You need sophisticated analytics capabilities.
- Scalability is a primary concern.
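As a rough sketch of what the two choices look like in application code. The index names, fields, and client versions are assumptions, not from the thread: it presumes node-redis >= 4 against a Redis server with the RediSearch module loaded, and the v8 Elasticsearch JavaScript client.

```ts
// Hypothetical index/field names for illustration only.
import { createClient } from 'redis';
import { Client } from '@elastic/elasticsearch';

// Redis: low-latency lookups over a small, well-defined index.
export const searchRedis = async (term: string) => {
  const redis = createClient({ url: 'redis://localhost:6379' });
  await redis.connect();
  const { documents } = await redis.ft.search('idx:products', `@title:(${term})`);
  await redis.quit();
  return documents;
};

// Elasticsearch: full-text relevance over large, loosely structured data.
export const searchElastic = async (term: string) => {
  const es = new Client({ node: 'http://localhost:9200' });
  const result = await es.search({
    index: 'products',
    query: { match: { description: term } },
  });
  return result.hits.hits;
};
```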
For some, that could be 1M records; for others, 1B records.
> So, we now focus on the new NIST standards of FIPS 203 (ML-KEM), FIPS 204 (ML-DSA) and FIPS 205 (SLH-DSA).
Just so I understand the other side of the argument: where is the purported inefficiency of keeping DST coming from?
I mean it.
I've been parsing (not just validating) runtime values for a decade (io-ts, Zod, effect/schema, tcomb, etc.), and I find the performance penalty irrelevant in virtually any project, either FE or BE.
Seriously, people will fill their websites with Google tracking crap, 20,000 libraries, and React crap for a simple CRUD app, and then complain about millisecond differences in parsing?
Zod alone accounts for a significant portion of the CPU time.
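Whether that overhead matters is measurable rather than a matter of taste. A rough micro-benchmark sketch of the kind of measurement that settles it for a given payload; the schema, sample payload, and iteration count are invented for illustration and only standard Zod and Node APIs are used.

```ts
// Not a rigorous benchmark; just a quick way to see the relative cost
// of schema parsing on top of JSON.parse for one payload shape.
import { performance } from 'node:perf_hooks';
import { z } from 'zod';

const userSchema = z.object({
  id: z.number().int(),
  name: z.string(),
  email: z.string().email(),
  tags: z.array(z.string()),
});

const sample = JSON.stringify({
  id: 1,
  name: 'Ada',
  email: 'ada@example.com',
  tags: ['admin', 'beta'],
});

const iterations = 100_000;

// Baseline: JSON.parse alone.
let start = performance.now();
for (let i = 0; i < iterations; i++) {
  JSON.parse(sample);
}
const jsonMs = performance.now() - start;

// JSON.parse plus Zod validation of the result.
start = performance.now();
for (let i = 0; i < iterations; i++) {
  userSchema.parse(JSON.parse(sample));
}
const zodMs = performance.now() - start;

console.log(`JSON.parse only: ${jsonMs.toFixed(1)} ms`);
console.log(`JSON.parse + Zod: ${zodMs.toFixed(1)} ms`);
```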