Readit News
stephen commented on Dagger: Define software delivery workflows and dev environments   dagger.io/... · Posted by u/ahamez
shykes · 11 days ago
Could you share an example of aws-cdk code that you think Dagger should take inspiration from? Dagger and aws-cdk work very differently under the hood, so it's difficult to make an apples-to-apples comparison. If there's a way to make Dagger more TS-native without sacrificing other important properties of Dagger, I'm interested. Thanks.
stephen · 10 days ago
Hello! Yeah, I totally get that Dagger is more "hey client, please create a DAG via RPC calls", but, just making something up in 30 seconds, this is what I had in mind:

https://gist.github.com/stephenh/8c7823229dfffc0347c2e94a3c9...

Like I'm still building a DAG, but by creating objects with "kinda POJOs" (doesn't have to be literally POJOs) and then stitching them together, like the outputs of 1 construct (the build) can be used as inputs to the other constructs (tests & container).
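
To make that concrete, here is a tiny, entirely hypothetical sketch of what a CDK-flavored API could look like. None of these classes (`Construct`, `Build`, `Test`, `Container`) exist in Dagger; they just illustrate "kinda POJOs" whose outputs become the inputs of later constructs:

```typescript
// Hypothetical, CDK-flavored sketch -- these classes are made up for
// illustration, not part of Dagger's real SDK.

class Construct {
  readonly dependsOn: Construct[] = [];
  constructor(readonly id: string, deps: Construct[] = []) {
    this.dependsOn.push(...deps);
  }
}

// Each construct exposes typed outputs that later constructs take as inputs.
class Build extends Construct {
  readonly distDir = "/dist"; // an "output" other constructs can consume
}

class Test extends Construct {
  constructor(id: string, readonly build: Build) {
    super(id, [build]); // consuming build's output implies an edge in the DAG
  }
}

class Container extends Construct {
  constructor(id: string, readonly build: Build) {
    super(id, [build]);
  }
}

// Stitching: the build's outputs feed both the tests and the image.
const build = new Build("build");
const test = new Test("test", build);
const image = new Container("image", build);

// A trivial topological walk over the implied DAG.
function topo(roots: Construct[]): string[] {
  const seen = new Set<Construct>();
  const order: string[] = [];
  const visit = (c: Construct) => {
    if (seen.has(c)) return;
    seen.add(c);
    c.dependsOn.forEach(visit);
    order.push(c.id);
  };
  roots.forEach(visit);
  return order;
}

const order = topo([test, image]);
```

The point is that the DAG falls out of ordinary object wiring, much like cdk constructs imply a CloudFormation graph.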

stephen commented on Dagger: Define software delivery workflows and dev environments   dagger.io/... · Posted by u/ahamez
esafak · 12 days ago
They have SDKs in many languages, not just Go. I use the python one. And they use code, not a DSL.
stephen · 12 days ago
Right, my point is that this:

https://docs.dagger.io/cookbook/services?sdk=typescript

Still looks like "a circa-2000s Java builder API" and doesn't look like pleasant / declarative / idiomatic TypeScript, which is what aws-cdk pulled off.

Genuinely impressively (imo), aws-cdk intermixes "it's declarative" (you're setting up your desired state) but also "it's code" (you can use all the usual abstractions) in a way that is pretty great & unique.

stephen commented on Dagger: Define software delivery workflows and dev environments   dagger.io/... · Posted by u/ahamez
stephen · 12 days ago
I thought Dagger had/has a lot of potential to be "AWS-CDK for CI pipelines".

I.e. declaratively set up a web of CI / deployment tasks, based on Docker, with a code-first DSL, instead of the morass of copy-pasted (and yes, orbs) CircleCI YAML files we have strewn about our internal repos.

But their DSL for defining your pipelines is ... golang? Like who would pick golang as "a friendly language for setting up configs".

The underlying tech is technically language-agnostic, just as aws-cdk's is (you can share cdk constructs across TypeScript/Python), but it's rooted in golang as the originating/first-class language, so imo will never hit aws-cdk levels of ergonomics.

That technical nit aside, I love the idea; I ran a few examples of it a year or so ago and was really impressed with the speed; I just couldn't wrap my head around "how can I make this look like cdk".

stephen commented on Show HN: LinkedQL – Live Queries over Postgres, MySQL, MariaDB   github.com/linked-db/link... · Posted by u/phrasecode
phrasecode · 12 days ago
Great questions — happy to clarify how deployment and lifecycle work today.

Let me begin by answering: what exactly is this engine? It's simply a computation + cache layer that lives in the same process as the calling code, not a server on its own.

Think of a LinkedQL instance (new PGClient()) and its concept of a "Live Query" engine as simply a query client (e.g. new pg.Client()) with an in-memory compute + cache layer.

---

1. Deployment model (current state)

The Live Query engine runs as part of your application process — the same place you’d normally run a Postgres/MySQL client.

For Postgres, yes: it uses one logical replication slot per LinkedQL engine instance. The live query engine instantiates on top of that slot and uses internal "windows" to dedupe overlapping queries, so 500 queries that are only variations of "SELECT * FROM users" still map to one main window; and 500 such "windows" still run over the same replication slot.
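
As a toy model of that dedup (not LinkedQL's actual implementation, which is covered in the linked docs), imagine a registry that normalizes each incoming query down to a base window key; here we naively key on the FROM table:

```typescript
// Toy illustration of window-based dedup: many overlapping live queries
// collapse into one shared "window". The normalization here (keying on the
// FROM table) is a deliberate oversimplification.

class WindowRegistry {
  private windows = new Map<string, { subscribers: number }>();

  // Reduce a query to its base window key -- naively, the table it reads.
  private keyFor(sql: string): string {
    const m = /from\s+(\w+)/i.exec(sql);
    return m ? m[1].toLowerCase() : sql;
  }

  subscribe(sql: string): void {
    const w = this.windows.get(this.keyFor(sql)) ?? { subscribers: 0 };
    w.subscribers++;
    this.windows.set(this.keyFor(sql), w);
  }

  windowCount(): number {
    return this.windows.size;
  }
}

// 500 variations of "SELECT * FROM users" still map to one window.
const registry = new WindowRegistry();
for (let i = 0; i < 500; i++) {
  registry.subscribe(`SELECT * FROM users WHERE id > ${i}`);
}
```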

The concept of query windows and the LinkedQL inheritance model is fully covered here: https://linked-ql.netlify.app/engineering/realtime-engine

---

2. Do all live queries “live” on one machine?

As hinted at above, yes; each LinkedQL instance (new PGClient()) runs on the same machine as the running app (just as you'd have it with new pg.Client()) – and maps to a single Live Query engine under the hood.

  That engine uses a single replication slot. You specify the slot name like:

  new PGClient({ ..., walSlotName: 'custom_slot_name' }); // default is: "linkedql_default_slot" – as per https://linked-ql.netlify.app/docs/setup#postgresql

  A second LinkedQL instance would require another slot name:
  
  new PGClient({ ..., walSlotName: 'custom_slot_name_2' });

We’re working toward multi-instance coordination (multiple engines sharing the same replication stream + load balancing live queries). That’s planned, but not started yet.

---

3. Lifecycle of live queries

The Live Query engine runs on demand, not indefinitely. It comes into existence when at least one client subscribes ({ live: true }) and effectively cleans up and disappears the moment the last subscriber disconnects (result.abort()). Calling client.disconnect() also ends all subscriptions and cleans up.
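
A rough sketch of that refcounted lifecycle, with made-up names (`LiveEngine` is not LinkedQL's API, just the idea):

```typescript
// The engine exists only while at least one live subscriber does:
// first subscribe() starts it, last abort() tears it down.

class LiveEngine {
  private subscribers = 0;
  running = false;

  subscribe(): { abort: () => void } {
    if (this.subscribers === 0) this.running = true; // first subscriber starts it
    this.subscribers++;
    let aborted = false;
    return {
      abort: () => {
        if (aborted) return;
        aborted = true;
        this.subscribers--;
        if (this.subscribers === 0) this.running = false; // last one tears down
      },
    };
  }
}

const engine = new LiveEngine();
const sub1 = engine.subscribe();
const sub2 = engine.subscribe();
sub1.abort();
const stillRunning = engine.running; // one subscriber remains
sub2.abort();
const stopped = !engine.running; // last subscriber gone, engine cleaned up
```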

---

4. Deployments / code changes

Deploying new code doesn’t require “migrating” live queries.

When you restart the application:

• the Live Query engine starts from a clean slate with the first subscribing query (client.query('...', { live: true })).

• if you have provided a persistent replication slot name (the default being ephemeral), LinkedQL resumes from the slot's current position and runs from there.

In other words: nothing persists across deploys; everything starts clean as your app starts.

---

5. Diagram / docs

A deployment diagram is a good idea — I’ll add one to the docs.

---

Well, I hope that helps — and no worries about the questions. This space is hard, and I'm happy to explain anything in more detail.

stephen · 12 days ago
Thanks for the reply! That all makes sense!

As a potential user, I'd probably be thinking through things like: if I have a ~small-fleet of 10 ECS tasks serving my REST/API endpoints, would I run `client.query`s on these same machines, or would it be better to have a dedicated pool of "live query" machines that are separate from most API serving, so that maybe I get more overlap of inherited queries.

...also I think there is a limit on WAL slots? Or at least I'd probably not want each of my API servers consuming its own WAL slot.

Totally makes sense this is all "things you worry about later" (where later might be now-/soon-ish) given the infra/core concepts you've got working now -- looking really amazing!

stephen commented on Show HN: LinkedQL – Live Queries over Postgres, MySQL, MariaDB   github.com/linked-db/link... · Posted by u/phrasecode
stephen · 13 days ago
Can you describe the deployment setup somewhere in the docs, maybe with a diagram?

I get this is a backend library, which is great, but like does it use postgres replication slots? Per the inherited queries, do they all live on 1 machine, and we just assume that machine needs to be sufficiently beefy to serve all currently-live queries?

Do all of my (backend) live-queries live/run on that one beefy machine? What's the life cycle for live-queries? Like how can I deploy new ones / kill old ones / as I'm making deployments / business logic changes that might change the queries?

This is all really hard ofc, so apologies for all the questions, just trying to understand -- thanks!

stephen commented on Why frozen test fixtures are a problem on large projects and how to avoid them   radanskoric.com/articles/... · Posted by u/amalinovic
stephen · 17 days ago
These two suggestions are fine, but I don't think they make fixtures really that much better--they're still a morass of technical debt & should be avoided at all costs.

The article doesn't mention what I hate most about fixtures: the noise of all the other crap in the fixture that doesn't matter to the current test scenario.

I.e. I want to test "merge these two books" -- great -- but now when stepping through the code, I have 30, 40, 100 other books floating around the code/database b/c "they were added by the fixture" that I need to ignore / step through / etc. Gah.

Factories are the way: https://joist-orm.io/testing/test-factories/
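
For contrast, a minimal factory sketch (not Joist's actual API; see the docs above for that). Each test creates exactly the entities it cares about, with valid defaults filled in:

```typescript
// A toy test factory: every field gets a valid default, and tests override
// only what matters to the scenario. `db` stands in for the test database.

interface Book {
  id: number;
  title: string;
  author: string;
}

let nextId = 1;
const db: Book[] = [];

function newBook(overrides: Partial<Book> = {}): Book {
  const book: Book = {
    id: nextId++,
    title: `Untitled ${nextId}`,
    author: "Anon",
    ...overrides,
  };
  db.push(book);
  return book;
}

// The "merge these two books" test touches exactly two rows -- no 100 other
// fixture books floating around to step through.
const first = newBook({ title: "First Edition" });
const second = newBook({ title: "Second Edition" });
```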

stephen commented on Toucan Wireless Split Keyboard with Touchpad   shop.beekeeb.com/products... · Posted by u/tortilla
jwpapi · 2 months ago
One question, is it weird when the trackpoint is used with the thumb instead of the pointing finger?
stephen · a month ago
I still use my index finger; I've just gotten used to moving my hand ~slightly over from j to the nub.

I would definitely prefer their trackpoint module be "flipped upside down" so the nub was on top, directly next to the H key, so I could move "just the index finger", and not my palm, but it's really not a big deal now that I'm used to it.

They seem to get this feedback a lot, b/c they have an FAQ entry about it (nub location), which asserts the current thumb location is due to space/engineering constraints. But, dunno, I kinda wonder if that was for the smaller UHK60? B/c just looking at my UHK80, it really seems like the nub could be by the H if they wanted it to be. :-)

So not "perfect perfect" but still really amazing imo, and so glad I switched over -- I'm like 10 years late to split keyboards, custom layers for movement / programming binds, everything the cool kids have been doing forever, but I couldn't give up a trackpoint. But here we are, finally! :-)

(Also fwiw I held off on the UHK80 for about a year b/c they were having firmware issues on initial release, repeated/missed keys, that sort of thing, but it's been rock solid for me; literally zero issues.)

stephen commented on Toucan Wireless Split Keyboard with Touchpad   shop.beekeeb.com/products... · Posted by u/tortilla
dandersch · 2 months ago
It's a shame that trackpoints never caught on outside of the thinkpad crowd. I rarely see them get used for custom keyboards, even though they are IMO the perfect fit for a use case like this.
stephen · 2 months ago
The UHK80 has a trackpoint module that works great!

stephen commented on Why we migrated from Python to Node.js   blog.yakkomajuri.com/blog... · Posted by u/yakkomajuri
sthuck · 2 months ago
I'm using it for a hobby project, and pretty pleased.

My personal maybe somewhat "stubborn old man" opinion is that no node.js orm is truly production quality, but if I were to consider one I think I would start with it. Be aware it has only one (very talented) maintainer as far as I recall.

stephen · 2 months ago
Everyone's definition of "production quality" is different :-), but Joist is a "mikro-ish" (more so ActiveRecord-ish) ORM that has a few killer features:

https://joist-orm.io/

Always happy to hear feedback/issues if anyone here would like to try it out. Thanks!

stephen commented on Pipelining in psql (PostgreSQL 18)   postgresql.verite.pro/blo... · Posted by u/tanelpoder
porsager · 2 months ago
Thanks a lot. You're spot on about issue triage etc. I haven't had the time to keep up, but I read all issues when they're created and deal with anything critical. I'm using Postgres.js myself in big deployments and know others are too. The metrics branch should be usable, and I could probably find time to get that part released. It's been ready for a while. I do have some important changes in the pipeline for v4, but won't be able to focus on it until December.
stephen · 2 months ago
Great to hear you're using postgres.js in prod/large deployments! That sort of real-world-driven usage/improvements/roadmap imo leads to the best results for open source projects.

Also interesting about a potential v4! I'll keep lurking on the github project and hope to see what it brings!

u/stephen

Karma: 1663 · Cake day: January 19, 2008
About
stephen.haberman@gmail.com draconianoverlord.com (the domain name was available :-)) https://joist-orm.io/