Readit News
NightMKoder commented on The fix for a segfault that never shipped   recall.ai/blog/the-fix-fo... · Posted by u/davidgu
NightMKoder · 2 months ago
This might speak to the craziness of the gstreamer plugin ecosystem - good/bad/ugly might be a fun maintenance mnemonic, but `voaacenc` is actually in `bad` - not `ugly`. Most plugins you'd want to use aren't in `good`. How are you supposed to actually use "well supported plugins" with gstreamer? Is it just to not use gstreamer at all?
NightMKoder commented on Show HN: Attempt – A CLI for retrying fallible commands   github.com/MaxBondABE/att... · Posted by u/maxbond
NightMKoder · 6 months ago
I was recently in the market for one of these! I ended up going with https://github.com/dbohdan/recur due to the nice stdout and stdin handling. Though this has stdout/stderr pattern matching for failures which is nice too!
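What these retry tools automate can be sketched in a few lines of Python; `flaky` here is a hypothetical stand-in for the fallible command, and the linear backoff policy is an assumption, not how either tool actually works:

```python
import time

def retry(fn, tries=5, delay=0.01):
    """Re-run fn until it succeeds or tries are exhausted (linear backoff)."""
    for attempt in range(1, tries + 1):
        try:
            return fn()
        except Exception:
            if attempt == tries:
                raise          # out of tries: surface the last failure
            time.sleep(delay * attempt)

calls = {"n": 0}

def flaky():
    """Stand-in for a fallible command: fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry(flaky)  # keeps retrying until flaky succeeds
```

The real tools add exactly the features discussed above on top of this loop: stdout/stderr handling, pattern matching on output, and configurable backoff.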
NightMKoder commented on Postgres LISTEN/NOTIFY does not scale   recall.ai/blog/postgres-l... · Posted by u/davidgu
NightMKoder · 8 months ago
Facebook’s wormhole seems like a better approach here - just tailing the MySQL bin log gets you commit safety for messages without running into this kind of locking behavior.
NightMKoder commented on Mermaid: Generation of diagrams like flowcharts or sequence diagrams from text   github.com/mermaid-js/mer... · Posted by u/olalonde
NightMKoder · 10 months ago
IMO mermaid is awesome, but for two somewhat indirect reasons:

- There’s an almost-WYSIWYG editor for mermaid at https://www.mermaidchart.com/play . It’s very convenient and appropriately changes the layout as you draw arrows!

- Notion supports inline mermaid charts in code blocks (with preview!) It’s awesome for putting some architecture diagrams in Eng docs.
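For reference, this is the kind of text both tools render — a small flowchart in Mermaid's flowchart syntax (the node names are invented):

```mermaid
flowchart LR
    Client -->|HTTP| LB[Load balancer]
    LB --> App[App server]
    App --> DB[(Postgres)]
```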

NightMKoder commented on Refactoring Clojure   orsolabs.com/post/refacto... · Posted by u/luu
sammy0910 · 10 months ago
most people I know eschew the use of with-redefs for testing because it's hard to verify that the testing environment is configured correctly as the codebase changes (but otherwise I second the points about immutability by default, and static/pure functions!)
NightMKoder · 10 months ago
Agreed - concretely, with-redefs forces single-threaded test execution, so e.g. you can’t use eftest’s multithreaded mode.

Explicit dynamic bindings are better if you need something like this since those are thread local.
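The thread-safety distinction carries over to other languages. As an analogy in Python (not Clojure): rebinding a module-level name plays the role of with-redefs, while `threading.local` plays the role of a dynamic binding — the override is visible only in the thread that set it:

```python
import threading

# Global state (rebinding this is analogous to with-redefs:
# every thread would see the change).
config = {"db": "prod"}

# Thread-local state (analogous to a dynamic var under `binding`).
local = threading.local()

def db_name():
    # Prefer a thread-local override, fall back to the global value.
    return getattr(local, "db", config["db"])

results = {}

def worker(name, override):
    if override:
        local.db = override  # visible only to this thread
    results[name] = db_name()

t1 = threading.Thread(target=worker, args=("t1", "test"))
t2 = threading.Thread(target=worker, args=("t2", None))
t1.start(); t2.start(); t1.join(); t2.join()
# t1 sees its override; t2 still sees the global default
```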

NightMKoder commented on Refactoring Clojure   orsolabs.com/post/refacto... · Posted by u/luu
NightMKoder · 10 months ago
Usually the controversial decision for Clojure code highlighting is rainbow parens. This color scheme is horrific and unreadable (on mobile at least).
NightMKoder commented on Air pollution fell substantially as Paris restricted car traffic   washingtonpost.com/climat... · Posted by u/perihelions
azinman2 · a year ago
Wonder if anyone is working on ways for brakes and tires to be less harmful, or polluting?
NightMKoder · a year ago
I don’t know about tires, but for brakes we already know how to make lower dust brakes - use drum brakes instead of disc brakes. The friction material is enclosed on drum brakes so much less of it just flies away.
NightMKoder commented on Zero-Downtime Kubernetes Deployments on AWS with EKS   glasskube.dev/blog/kubern... · Posted by u/pmig
paranoidrobot · a year ago
We have a number of concurrent issues.

We don't want to kill in-flight requests - terminating while a request is outstanding will result in clients connected to the ALB getting some HTTP 5xx response.

The AWS ALB Controller inside Kubernetes doesn't give us a nice way to specifically say "deregister this target"

The ALB will continue to send us traffic while we return 'healthy' to its health checks.

So we need some way to signal the application to stop serving 'healthy' responses to the ALB Health Checks, which will force the ALB to mark us as unhealthy in the target group and stop sending us traffic.

SIGUSR1 was an otherwise unused signal that we can send to the application without impacting how other signals might be handled.

NightMKoder · a year ago
I might be putting words in your mouth, so please correct me if this is wrong. It seems like you don’t actually control the SIGTERM handler code. Otherwise you could just write something like:

  sigterm_handler() {
    make_healthcheck_fail();  // ALB health checks start failing
    sleep(20);                // wait for the ALB to drain traffic
    stop_web_server();
    exit(0);
  }
Technically the server shutdown at the end doesn’t even need to be graceful in this case.

NightMKoder commented on Zero-Downtime Kubernetes Deployments on AWS with EKS   glasskube.dev/blog/kubern... · Posted by u/pmig
paranoidrobot · a year ago
We had to figure this out the hard way, and ended up with this approach (approximately).

K8S provides two (well three, now) health checks.

How this interacts with ALB is quite important.

Liveness should always return 200 OK unless you have hit some fatal condition where your container considers itself dead and wants to be restarted.

Readiness should only return 200 OK if you are ready to serve traffic.

We configure the ALB to only point to the readiness check.

So our application lifecycle looks like this:

* Container starts

* Application loads

* Liveness begins serving 200

* Some internal health checks run and set readiness state to True

* Readiness checks now return 200

* ALB checks begin passing and so pod is added to the target group

* Pod starts getting traffic.

time passes. Eventually for some reason the pod needs to shut down.

* Kube calls the preStop hook

* PreStop sends SIGUSR1 to app and waits for N seconds.

* App handler for SIGUSR1 tells readiness hook to start failing.

* ALB health checks begin failing, and no new requests should be sent.

* ALB takes the pod out of the target group.

* PreStop hook finishes waiting and returns

* Kube sends SIGTERM

* App wraps up any remaining in-flight requests and shuts down.

This allows the app to do graceful shut down, and ensures the ALB doesn't send traffic to a pod that knows it is being shut down.

Oh, and on the Readiness check - your app can use this to (temporarily) signal that it is too busy to serve more traffic. Handy as another signal you can monitor for scaling.

e: Formatting was slightly broken.

NightMKoder · a year ago
Why the additional SIGUSR1 vs just doing those (failing health, sleeping) on SIGTERM?
NightMKoder commented on Zero-Downtime Kubernetes Deployments on AWS with EKS   glasskube.dev/blog/kubern... · Posted by u/pmig
Detrytus · a year ago
More likely they mean "readiness check" - this is the one that removes you from the Kubernetes load balancer service. Liveness check failing does indeed cause the container to restart.
NightMKoder · a year ago
Yes, sorry for not qualifying - that’s right. IMO the liveness check is only rarely useful - but I've not really run any bleeding-edge services on kube. I assume it’s more useful if you’re actually working on dangerous code - locking, threading, etc. I’ve mostly only run web apps.

u/NightMKoder

Karma: 391 · Cake day: September 1, 2014