Readit News
zknill commented on Ask HN: How do you automate recurring workflows without writing glue code?    · Posted by u/kimzhang
zknill · a month ago
This reads like a bit of a smell. I'd be pretty suspicious of why you have these tasks and reminders in the first place, and the question suggests their volume is large enough to be a problem.

You should check out the "toil" section of the Google SRE book

https://sre.google/sre-book/eliminating-toil/

> If a human operator needs to touch your system during normal operations, you have a bug.

zknill commented on Pgactive: Postgres active-active replication extension   github.com/aws/pgactive... · Posted by u/ForHackernews
gritzko · a month ago
So the outcomes are essentially random?

It all feels like they expect developers to sift through the conflict log to resolve things manually or something. If a transaction did not go through on some of the nodes, what are the others doing then? What if they can not roll it back safely?

Such a rabbit hole.

zknill · a month ago
Typically applications will have some kind of logical separation of the data.

Given this is targeted at replication of postgres nodes, perhaps the nodes are deployed across different regions of the globe.

By using active-active replication, all the participating nodes are capable of accepting writes, which simplifies the deployment and querying of postgres (you can read and write to your region-local postgres node).

Now that doesn't mean that all the reads and writes will be on conflicting data. Take the regional example: perhaps the majority of the writes affecting one region's data are made _in that region_. In this case, the region-local postgres would be performing all the conflict resolution locally and sharing the updates with the other nodes.

The reason this simplifies things is that you can treat all your postgres connections as if they are just a single postgres. Writes are fast because they are accepted in the local region, and reads are served locally because the data is replicated to every node, without you having to run a dedicated read-replica.

Of course you're still going to have to design around the conflict resolution (i.e. writes for the same data issued against different instances) and the possibility of stale reads while the data is replicated cross-node. But for some applications this design can be a significant benefit, even with the extra things you need to do.
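
To make the region-local idea concrete, a tiny Go sketch (the REGION env var and DSN format are made up; the point is just that the app only ever talks to its local node):

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "os"

        _ "github.com/lib/pq"
    )

    // Illustrative only: with active-active replication every node accepts
    // writes, so the app can open its region-local node and treat it as "the"
    // database. REGION and the DSN format are assumptions for this sketch.
    func openRegionLocal() (*sql.DB, error) {
        region := os.Getenv("REGION") // e.g. "eu-west-1"
        dsn := fmt.Sprintf("postgres://pg.%s.internal/app?sslmode=disable", region)
        return sql.Open("postgres", dsn)
    }

    func main() {
        db, err := openRegionLocal()
        if err != nil {
            log.Fatal(err)
        }
        if err := db.Ping(); err != nil {
            log.Fatal(err)
        }
        // Reads and writes both go to this local node; pgactive replicates the
        // writes to the other regions asynchronously.
    }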

zknill commented on Pgactive: Postgres active-active replication extension   github.com/aws/pgactive... · Posted by u/ForHackernews
zknill · a month ago
Looks like it uses Postgres logical replication to share changes made on one postgres instance with the others. Conflict resolution is last-write-wins based on timestamp. Conflicting transactions are logged to a special table (pgactive_conflict_history), so you can see the history, resolve them, etc.

https://github.com/aws/pgactive/tree/main/docs
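
If you want to keep an eye on the conflicts from application code, something like this works (Go sketch; only the table name comes from the docs, the DSN is a placeholder, and I'm not assuming anything about the column layout, hence the generic scan):

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq"
    )

    func main() {
        // Placeholder DSN; point it at any node in the pgactive group.
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }

        // Dump the conflict log. Columns are printed generically because this
        // sketch doesn't assume their exact names or types.
        rows, err := db.Query(`SELECT * FROM pgactive_conflict_history`)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        cols, err := rows.Columns()
        if err != nil {
            log.Fatal(err)
        }
        vals := make([]interface{}, len(cols))
        ptrs := make([]interface{}, len(cols))
        for i := range vals {
            ptrs[i] = &vals[i]
        }
        for rows.Next() {
            if err := rows.Scan(ptrs...); err != nil {
                log.Fatal(err)
            }
            for i, c := range cols {
                fmt.Printf("%s=%v ", c, vals[i])
            }
            fmt.Println()
        }
    }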

zknill commented on London's Heathrow Airport announces complete shutdown due to power outage   cnn.com/2025/03/20/travel... · Posted by u/dfine
gambiting · 5 months ago
Well....I have a flight to LHR tomorrow morning, I'm guessing I should start looking at alternatives now :P
zknill · 5 months ago
IANAL, but you should be in contact with your airline about your specific flight.

You are entitled to be re-booked on the next available flight or to take a refund. If you take the refund, the airline has no further obligation to you, and you might find the price of an equivalent flight is now much higher.

zknill commented on Microsoft open sources PostgreSQL extensions   theregister.com/2025/02/1... · Posted by u/beardyw
zknill · 7 months ago
> A spokesperson at MongoDB said: "The rise of MongoDB imitators proves our document model is the industry standard. But bolting an API onto a relational database isn't innovation – it's just kicking the complexity can down the road. These 'modern alternatives' come with a built-in sequel: the inevitable second migration when performance, scale, and flexibility hit a wall."

I think the reason there are so many MongoDB wire-compatible projects (like this Postgres extension from Microsoft, and FerretDB) is that people have systems with MongoDB clients built into their storage layers but don't want to be running on MongoDB anymore, exactly because "performance, scale, and flexibility hit a wall".

If you can change the storage engine, but keep the wire protocol, it makes migrating off Mongo an awful lot cheaper.

zknill commented on Patterns for Building Realtime Features   zknill.io/posts/patterns-... · Posted by u/zknill
martinsnow · 7 months ago
How do you handle deployments of realtime back ends which needs state in memory?
zknill · 7 months ago
The other commenters have mentioned doing deploys behind a proxy, which is fine, but eventually you're going to have to re-deploy the component that terminates the client connections (i.e. the websocket or SSE server).

From the client's perspective, there's not a lot of difference between the server dropping the connection (on redeploy) or the connection being dropped for some other transient reason.

That is to say, with decent client-side handling of connection state you just incrementally roll out your new servers, and each server terminates its connections, triggering reconnects from the clients.

The hardest part is often maintaining continuity on some stream of events: picking up exactly where you were before the connection dropped. You need some mechanism for the client to report the last event it received, and some way to "rewind" back to that point on the stream.
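
For SSE there's a standard hook for exactly this: the client replays the last event id it saw in the Last-Event-ID header when it reconnects, and the server rewinds the stream to that point. A rough Go sketch of the client side (the stream URL is a placeholder):

    package main

    import (
        "bufio"
        "log"
        "net/http"
        "strings"
        "time"
    )

    // Remember the id of the last event processed and present it on every
    // reconnect so the server can resume the stream from that point.
    func consume(url string) {
        lastID := ""
        for {
            req, err := http.NewRequest("GET", url, nil)
            if err != nil {
                log.Fatal(err)
            }
            if lastID != "" {
                req.Header.Set("Last-Event-ID", lastID) // our resume point
            }
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                time.Sleep(time.Second) // back off, then reconnect
                continue
            }
            sc := bufio.NewScanner(resp.Body)
            for sc.Scan() {
                line := sc.Text()
                switch {
                case strings.HasPrefix(line, "id:"):
                    lastID = strings.TrimSpace(strings.TrimPrefix(line, "id:"))
                case strings.HasPrefix(line, "data:"):
                    log.Println("event:", strings.TrimSpace(strings.TrimPrefix(line, "data:")))
                }
            }
            resp.Body.Close() // dropped (e.g. redeploy); loop and reconnect
        }
    }

    func main() {
        consume("https://example.com/stream") // placeholder endpoint
    }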

zknill commented on Patterns for Building Realtime Features   zknill.io/posts/patterns-... · Posted by u/zknill
jtwaleson · 7 months ago
I'm building a simple version with horizontally scalable app servers that each use LISTEN/NOTIFY on the database. The article says this will lead to problems and you'll need PubSub services, but I was hoping LISTEN/NOTIFY would easily scale to hundreds of concurrent users. Please let me know if that won't work ;)

Some context: The use case is a digital whiteboard like Miro and the heaviest realtime functionality will be tracking all of the pointers of all the users updating 5x per second. I'm not expecting thousands/millions of users as I'm planning on running each instance of the software on-prem.

zknill · 7 months ago
I work on the team that built a Postgres pub/sub connector at Ably (we call it LiveSync).

Our postgres connector also works on LISTEN/NOTIFY and has horizontally scalable consumers that will share the load.

There are two ways to use LISTEN/NOTIFY: either the notify event carries the payload itself, or it just tells you something changed and you query for the data. If you choose the second option you'll get much better resilience under load, as the back pressure is contained in the table rather than dropped on NOTIFY.

If you do go with a poll-on-change design, you'll likely benefit from some performance tuning around debouncing the polls and choosing how big a batch of records to poll for.
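
Something like this is what I mean by the second option, sketched in Go with lib/pq (the channel name, outbox table, and debounce window are all made up for the example):

    package main

    import (
        "database/sql"
        "log"
        "time"

        "github.com/lib/pq"
    )

    func main() {
        dsn := "postgres://localhost/app?sslmode=disable" // placeholder
        db, err := sql.Open("postgres", dsn)
        if err != nil {
            log.Fatal(err)
        }

        // NOTIFY only tells us "something changed"; the data stays in the table.
        listener := pq.NewListener(dsn, time.Second, time.Minute, nil)
        if err := listener.Listen("outbox_changed"); err != nil {
            log.Fatal(err)
        }

        var cursor int64 // id of the last row we processed
        debounce := time.NewTimer(0)
        <-debounce.C // start with an idle timer

        for {
            select {
            case <-listener.Notify:
                // Coalesce bursts of notifications into a single poll.
                debounce.Reset(100 * time.Millisecond)
            case <-debounce.C:
                rows, err := db.Query(
                    `SELECT id, payload FROM outbox WHERE id > $1 ORDER BY id LIMIT 500`, cursor)
                if err != nil {
                    log.Println("poll:", err)
                    continue
                }
                for rows.Next() {
                    var id int64
                    var payload string
                    if err := rows.Scan(&id, &payload); err != nil {
                        break
                    }
                    cursor = id // back pressure lives in the table, not in NOTIFY
                    log.Println("change:", payload)
                }
                rows.Close()
            }
        }
    }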

As for the exact collaboration features: when clients are online and interacting, it's fairly easy. Most of the hard stuff is knowing when a connection has dropped or a client has gone away, and reflecting that to the other clients.

Another team at Ably worked on that problem, and called it Spaces.

zknill commented on Go Data Structures: Interfaces (2009)   research.swtch.com/interf... · Posted by u/rednafi
zknill · 7 months ago
A quirk of Go is that I can cast a `[]string` to `interface{}`, but I cannot cast `[]string` to `[]interface{}`. This blog post is my go-to explanation for why the former is possible but the latter is not.
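
For example (minimal snippet): the first conversion compiles, the second doesn't, and you have to copy element by element because the two slices have different memory layouts:

    package main

    import "fmt"

    func main() {
        words := []string{"a", "b", "c"}

        var one interface{} = words // fine: any single value fits in interface{}

        // var many []interface{} = words // compile error: cannot use words
        // (variable of type []string) as []interface{} value

        // The slice has to be rebuilt, boxing each element into its own
        // interface value.
        many := make([]interface{}, len(words))
        for i, w := range words {
            many[i] = w
        }

        fmt.Println(one, many)
    }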

u/zknill

Karma: 494 · Cake day: May 5, 2016
About
personal site: https://zknill.io

hn@zak.knill.dev
