crdrost commented on Show HN: I was curious about spherical helix, ended up making this visualization   visualrambling.space/movi... · Posted by u/damarberlari
pimlottc · 10 days ago
I was wondering about the “correctness” of the z-axis movement for the spherical helix. You could pick lots of different functions, including simple linear motion (z = c * t). This would obviously affect the thickness and consistency of the “peels”.

The equation used creates a visually appealing result but I’m wondering what a good goal would be in terms of consistency in the distance between the spirals, or evenness in area divided, or something like that.

How was this particular function selected? Was it derived in some way or simply hand-selected to look pleasing?

crdrost · 10 days ago
I think this particular function was selected because it happened to be convenient to program and the visual effect was pleasant enough.

The actual "correct" thing to do would probably be to have the point maintain constant speed in 3D space like a real boat sailing on a globe, right? But that's a rather bigger lift:

    const degrees = Math.PI / 180;
    const bearing = 5 * degrees; // or it might be 85 degrees? Not sure off the top of my head
    const k = Math.tan(bearing);
    const v = 0.001 // some velocity, adjust as needed
    const phi = (t) => v*t/Math.sqrt(1 + k*k) // the sqrt is not strictly needed
    const theta = (t) => k*Math.log(Math.tan(phi(t)/2)) // this is the annoying one haha (note: JS has Math.log, not Math.ln)
with outputs,

    const x = (t) => Math.sin(phi(t)) * Math.cos(theta(t))
    const y = (t) => Math.sin(phi(t)) * Math.sin(theta(t))
    const z = (t) => Math.cos(phi(t))
I doubt they actually did the ln(tan(phi/2)) thing, but it's what you get when you integrate the constant-bearing condition k d{phi} = sin{phi} d{theta}.

crdrost commented on The Raft Consensus Algorithm (2015)   raft.github.io/... · Posted by u/nromiun
stmw · 14 days ago
Both the Raft algorithm and its explanation are excellent, including this little animated demo that Diego Ongaro (who is also a great guy) made to help explain his invention. While Paxos was first and still popular, I am not sure I would count against any senior engineer unable to explain it to others. With Raft, one ought to be able to do it. Great to see this on HN.
crdrost · 14 days ago
I just want to clarify that, strictly speaking, Paxos is simpler than Raft. Paxos is Raft plus "delete the part where the new leader has to be fully caught up" (that requirement causes an extra weirdness in Raft when the leader got majority adoption of a new log entry but failed to commit it due to a network partition) plus "delete the part where an election elects a leader." You can still elect a leader, and that's called Multi-Paxos, but if your use case isn't big enough you can instead just elect the next log entry; the thing being decided doesn't have to have the form "this is our new leader, all hail" (though in practice that is what everyone does).

I think the Raft paper is more approachable for newcomers, but if you are finding Paxos hard to explain and Raft easy to explain, just use the Raft lingo to explain Paxos.

crdrost commented on Why Elixir? Common misconceptions   matthewsinclair.com/blog/... · Posted by u/ahamez
crabmusket · a month ago
> combined with the "let it crash" ethos

I see this phrase around a lot and I wish I could understand it better, having not worked with Erlang and only a teeny tiny bit with Elixir.

If I ship a feature that has a type error on some code path and it errors in production, I've now shipped a bug to my customer who was relying on that code path.

How is "let it crash" helpful to my customer who now needs to wait for the issue to be noticed, resolved, a fix deployed, etc.?

crdrost · a month ago
I feel like the other comments I see here are not expressing the deeper point, and you're asking to really understand something, so that's what you care about. So I'm sorry to pile on when you've got like a dozen good answers, but.

Let It Crash refers to a sort of middle ground between returning an error code and throwing an exception. It does not directly address your customer's need, and you are right that they are facing a bug.

So if you were to use Golang with Let It Crash ethos, say, you would write a lot of functions with the same template: they take an ID and a channel, they defer a call to recover from panics, and on panic or success they send a {pid int, success bool, error interface {}} to the channel -- and these are always ever run as goroutines.

Because this is how you write everything, you have some goroutines that supervise other goroutines: for example, auto-restart this other goroutine with exponential backoff. But also the default is to panic on every error rather than writing endless "if err != nil { return nil, err }" statements. You trust that you are always in the middle of such a supervision tree and that someone has already thought about how to handle uncaught errors, because supervision trees are just the style of program that you write.

Say you lose your connection to the database; it goes down for maintenance or something. The connection pool for the database was a separate goroutine in your application, and that goroutine is now in CrashLoopBackoff. But your application doesn't crash. Say it powers an HTTP server: while the database is down, it responds to any requests that do not use the database just fine, and returns HTTP 500 on all the requests that do. Why? Because your HTTP library allocates a new goroutine for every request it handles, and when one of those panics it by default doesn't retry, it closes that connection with HTTP 500. Similarly for your broken codepath: we 500 the particular requests whose x.(X) assertion hits something that can't be asserted as an X, we log the error, but all other requests are peachy keen; we didn't panic the whole server.

Now that is different from the first thing that your parent commenter said to you, which is that the default idiom is to do something like this:

    type Message struct {
        MessageType string
        Args        interface{}
        Caller      chan<- Message
    }
    // ...
    msg := <-myMailbox
    switch msg.MessageType {
    case "allocate":
        toAllocate := msg.Args.(int)
        if allocated[toAllocate] {
            msg.Caller <- Message{"fail", fmt.Errorf(...), myMailbox}
        } else {
            // Save this somewhere, then
            msg.Caller <- Message{"ok", nil, myMailbox}
        }
    }
With a bit of discipline, this emulates Haskell algebraic data types, which gives you a sort of runtime guarantee that bad code looks bad (imagine switching on an enum, `case TypeFoo: foo := arg.(Foo)`; if you put something wrong in there it is very easy to spot during code review because the format is so formulaic).

So the idea is that your type assertions don't crash the program, and they are usually correct because you send everything like a sum type.

crdrost commented on Long Covid destroys teenage lungs in ways doctors never saw   rollingout.com/2025/06/03... · Posted by u/lnyan
TimorousBestie · 2 months ago
My cardiovascular health has never recovered from COVID, it’s been very depressing to go from low blood pressure and decent VO2 max to. . the opposite. Interval training has helped a little, but I dunno, I think it’s just gone.

Wish everyone had taken it more seriously.

crdrost · 2 months ago
We still can take it more seriously!

Here's a PDF handout from 2023 for handing to hospital admins, https://www.nerode.org/clean-air/medical%20clean%20air%20rel... . The same statistics can probably be looked up for children acquiring covid at school, because like where else do they get it. $100 HEPAs in every classroom, $40 reusable p100 masks for every teacher, ask parents to voluntarily mask their kids.

Just share stories. You see [1], go to your social media and share it, “she got COVID like 3 or 4 times and it seems to have caused chronic kidney disease, even with vaccines reducing the rate of straight up death, we need to get this thing under control otherwise our generation's not going to live to see 80. Each time you get it there is another small chance of long covid, another roll of the dice hoping for no snake eyes. Masking is just saying you want to roll the dice fewer times.”

Organize or join a covid slack channel at work, there are other groups like https://publichealthactionnetwork.org/ , if BTS Army can connect over K-pop we can connect over wanting to protect others.

1. https://x.com/HollyMars2/status/1936235382816784443?t=A4RaLq...

crdrost commented on Homomorphically Encrypting CRDTs   jakelazaroff.com/words/ho... · Posted by u/jakelazaroff
Joker_vD · 2 months ago
I... still can't make heads or tails out of this description. Let me restate how I understand the scheme in TFA: there are two people, editing on the same document using CRDTs. When one person makes an edit, they push an encrypted CRDT to the sync server. Periodically, each of them pulls edits made by the other from the sync server, apply them to their own copy, and push the (encrypted) result back. Because of CRDT's properties, they both end up with the same document.

This scheme doesn't require them two people to be on-line simultaneously — all updates are mediated via the sync server, after all. So, where am I wrong?

crdrost · 2 months ago
So, there is a reason that CRDT researchers would not like the response you have given; it's not why the author jakelazaroff objected down-thread from you, but it's worth giving this answer too.

The reason CRDT researchers don't like the sync server is, that's the very thing that CRDTs are meant to solve. CRDTs are a building-block for theoretically-correct eventual consistency: that's the goal. Which means our one source-of-truth now exists in N replicas, those replicas are getting updated separately, and now: why choose eventual consistency rather than strong consistency? You always want strong consistency if you can get it, but eventually, the cost of syncing the replicas is too high.

So now we have a sync server like you planned? Well, if we're at the scale where CRDTs make sense then presumably we have data races. Let's assume Alice and Bob both read from the sync server and it's a (synchronous, unencrypted!) last-write-wins register, both Alice and Bob pull down "v1" and Alice writes "v1a" to the register and Bob in parallel writes "v1b" as Alice disconnects and Bob wins because he happens to have the higher user-ID. Sync server acknowledged Alice's write but it got lost until she next comes online. OK so new solution, we need a compare-and-swap register, we need Bob to try to write to the server and get rejected. Well, except in the contention regime that we're anticipating, this means that we're running your sync server as a single-point-of-failure strong consistency node, and we're accepting the occasional loss of availability (CAP theorem) when we can't reach the server.

Even worse, such a sync server _forces_ you into strong consistency even if you're like "well the replicas can lose connection to the sync server and I'll still let them do stuff, I'll just put up a warning sign that says they're not synced yet." Why? Because they use the sync server as if it is one monolithic thing, but under contention we have to internally scale the sync server to contain multiple replicas so that we can survive crashes etc. ... if the stuff happening inside the sync server is not linearizable (aka strongly consistent) then external systems cannot pretend it is one monolithic thing!

So it's like, the sync server is basically a sort of GitHub, right? It's operating at a massive scale and so internally it presumably needs to have many Git-clones of the data so that if the primary replica goes down then we can still serve your repo to you and merge a pull request and whatever else. But then it absolutely sucks to merge a PR and find out that afterwards, it's not merged, so you go into panic mode and try to fix things, only to discover 5 minutes later that the PR is now merged. And a really active eventually-consistent CRDT system has a lot of exactly that buggy potential.

For the CRDT researcher the idea of "we'll solve this all with a sync server" is a misunderstanding that takes you out of eventual-consistency-land. The CRDT equivalent that lacks this misunderstanding is, "a quorum of nodes will always remain online (or at least will eventually sync up) to make sure that everything eventually gets shared," and your "sync server" is actually just another replica that happens to remain online, but isn't doing anything fundamentally different from any of the other peers in the swarm.

crdrost commented on OpenAI slams court order to save all ChatGPT logs, including deleted chats   arstechnica.com/tech-poli... · Posted by u/ColinWright
sahila · 3 months ago
How do you manage deleting data from backups? Or do you just not take backups?
crdrost · 3 months ago
"When data subjects exercise one of their rights, the controller must respond within one month. If the request is too complex and more time is needed to answer, then your organisation may extend the time limit by two further months, provided that the data subject is informed within one month after receiving the request."

Backup retention policy 60 days, respond within a week or two telling someone that you have purged their data from the main database but that these backups exist and cannot be changed, but that they will be automatically deleted in 60 days.

The only real difficulty is if those backups are actually restored, then the user deletion needs to be replayed, which is something that would be easy to forget.

crdrost commented on Geometrically understanding calculus of inverse functions (2023)   tobylam.xyz/2023/11/27/in... · Posted by u/tobytylam
crdrost · 4 months ago
Upvoted for the cute proof-without-words geometrical diagram of the Legendre transform, but the fact that you defined the inverse map as (x, y) to (\hat y, \hat x) made it impossible for me to keep my head straight. Probably it's easy if I slow down and stop skimming the article.

IMO the easier derivation — may just be personal tastes as someone more on the engineering side — is just another integration by parts.

So with f(g(x)) = g(f(x)) = x, define y = g(x) for U-substitution with x = f(y):

    ∫ g(x) dx = ∫ g(f(y)) f'(y) dy
              = ∫ y f'(y) dy
              = y f(y) – ∫ f(y) dy
              = g(x) x – F(g(x)) + C
The more interesting thing is that this is a really basic integration by parts which means that this diagram of yours that I like, is more universal than it appears at first? I'd have to think about that a bit more, how you can maybe graphically teach integration by parts that way, is there always a u substitution so that you can get u f(u) or so and get this nice pretty rectangle in a rectangle... hmm.

crdrost commented on California is nearly out of license plate numbers   sfchronicle.com/californi... · Posted by u/rntn
ryan-duve · 4 months ago
> The current 9-series configuration, which will end with 9ZZZ999, is projected to end sometime in 2026... The next generation of license plates will flip that structure on its head, moving to a “Numeral Numeral Numeral Alpha Alpha Alpha Numeral” format — such as 000AAA0.

Does anyone know why they care about this structure? Naively, there are 36^7 (minus edge cases) combinations available, which will always be sufficient.

crdrost · 4 months ago
So for example the capital O on license plates in California is only distinguished from the zero by being slightly more squarish, the capital G is mostly distinguished from six by six being slightly more smooth and diagonal in its top arc. I and one are a bit further visually, as are B and 8, but it would probably fool a traffic camera that was taking down plates automatically.

In addition, all-numbers plates, I believe, are reserved for California exempt plates (emergency vehicles, police), and vanity plates are absolutely a thing, much more likely to start and/or end on a letter, so that's why you see numbers at the beginning and end. Like you can kinda see “6EIC023” and say “oh yeah my car looks like an ad for Geico” but because the start and end are numbers it doesn't occur to most people.
