- In addition to transaction data, each validator stores three sets: CURRENT, the current validator set; IN_PENDING, the set of clients who are to join the validator set; OUT_PENDING, the set of validators who are to leave the validator set.
- Validators support four additional requests: v_nominate, v_initialize, v_remove, v_eject.
- When a client wants to join the validator set, it sends the v_nominate request to all validators. Validators who agree add the client to IN_PENDING, sign the tuple (CURRENT, IN_PENDING) and reply.
- If the candidate client receives 2 * f + 1 signatures, where f = ⌊(max |CURRENT| - 1) / 3⌋ (the maximum is taken over the |CURRENT| reported in each response), it sends the v_initialize request to all validators, along with the signatures. Validators receiving this request remove the candidate from IN_PENDING and add it to CURRENT.
- When a validator wants to remove a validator (can ask to remove itself) from the set, it sends the v_remove request to all validators. Validators who agree add the outgoing validator to OUT_PENDING, sign the tuple (CURRENT, OUT_PENDING) and reply.
- If the validator who originated the removal request receives 2 * f + 1 signatures, where f = ⌊(max |CURRENT| - 1) / 3⌋ (the maximum is taken over the |CURRENT| reported in each response), it sends the v_eject request to all validators, along with the signatures. Validators receiving this request remove the outgoing validator from both OUT_PENDING and CURRENT.
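The quorum check used in both v_initialize and v_eject can be sketched as follows. This is an illustrative sketch only; the `Signature` type and function names are hypothetical, not from any real implementation.

```python
# Sketch of the 2f+1 quorum check from the protocol above.
# Signature and the helper names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Signature:
    signer: str        # validator that produced the signature
    current_size: int  # |CURRENT| as seen by that validator when it signed

def max_faulty(signatures):
    """f = floor((max |CURRENT| - 1) / 3), maximized over all responses."""
    return (max(s.current_size for s in signatures) - 1) // 3

def has_quorum(signatures):
    """The originator may proceed (v_initialize / v_eject) once it holds
    2f + 1 signatures from distinct validators."""
    distinct_signers = {s.signer for s in signatures}
    f = max_faulty(signatures)
    return len(distinct_signers) >= 2 * f + 1

# Example: with 4 validators, f = (4 - 1) // 3 = 1, so 3 signatures suffice.
sigs = [Signature(v, 4) for v in ("v1", "v2", "v3")]
print(has_quorum(sigs))  # True: 3 >= 2*1 + 1
```

Taking the maximum of |CURRENT| over all responses makes the threshold conservative when validators momentarily disagree on the set size.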
Wouldn't arguments similar to the ones in the article also work for showing consensus on these sets?
(edited to fix formatting and typos)
https://github.com/FastFilter/fastfilter_java/blob/master/fa...
https://github.com/FastFilter/fastfilter_java/blob/master/fa...
It should be relatively easy to port it to other programming languages.
Compared to regular counting Bloom filters, there are some advantages (e.g. it uses half the space, lookup is much faster, and there is no risk of overflows). It has one disadvantage: add and remove are currently slower. Cuckoo filters need less space, but otherwise have the same advantages and disadvantages.
For more questions, just open an issue on that project (I'm one of the authors).
For static sets (where you construct the filter once and then use it for lookup), blocked Bloom filters are the fastest, for lookup. They do need a bit more space (maybe 10% more than Bloom filters). Also very fast are binary fuse filters (which are new), and xor filters. They also save a lot of space compared to others. Cuckoo filters, ribbon filters, and Bloom filters are a bit slower. It's a trade-off between space and lookup speed really.
For dynamic sets (where you can add and remove entries later), the fastest (again for lookup) are "Succinct counting blocked Bloom filter" (no paper yet for this): they are a combination of blocked Bloom filters and counting Bloom filters, so lookup is identical to the blocked Bloom filter. Then cuckoo filters, and counting Bloom filters.
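The reason blocked Bloom filters win on lookup speed is that each key maps to a single small block, so a query touches only one cache line. A minimal sketch, assuming a 64-bit block per key (this is illustrative Python, not the fastfilter_java code, and the class and method names are made up):

```python
# Minimal blocked Bloom filter sketch: each key hashes to one 64-bit block,
# and k bits are set within that block, so lookup reads a single word.
import hashlib

class BlockedBloom:
    def __init__(self, num_blocks=1024, k=4):
        self.blocks = [0] * num_blocks  # each block is one 64-bit word
        self.num_blocks = num_blocks
        self.k = k

    def _block_and_bits(self, key):
        # Derive a block index and k bit positions from one hash.
        h = int.from_bytes(
            hashlib.blake2b(key.encode(), digest_size=16).digest(), "big")
        block = h % self.num_blocks
        h //= self.num_blocks
        bits = 0
        for _ in range(self.k):
            bits |= 1 << (h % 64)
            h //= 64
        return block, bits

    def add(self, key):
        block, bits = self._block_and_bits(key)
        self.blocks[block] |= bits

    def may_contain(self, key):
        # No false negatives; false positives at some small rate.
        block, bits = self._block_and_bits(key)
        return self.blocks[block] & bits == bits

bf = BlockedBloom()
bf.add("hello")
print(bf.may_contain("hello"))  # True (never a false negative)
```

The space overhead mentioned above comes from this blocking: confining all k bits to one block distributes bits less evenly than a classic Bloom filter, so you pay roughly 10% more space for the single-cache-line lookup.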
The paper's calculation reminds me of Kaluza-Klein theory, which uses a similar construction as part of extending the metric from four dimensions to five.
I can somewhat see how to interpret the mathematics in free space. But what about when there are massive bodies in the picture? They will result in a non-flat metric... does that imply they create their own electromagnetism?
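For reference, the construction being alluded to is the standard Kaluza-Klein ansatz, which packages the 4D metric $g_{\mu\nu}$, the electromagnetic potential $A_\mu$, and a scalar $\phi$ into a single 5D metric (sign and normalization conventions vary between treatments):

```latex
\hat{g}_{MN} =
\begin{pmatrix}
g_{\mu\nu} + \phi^2 A_\mu A_\nu & \phi^2 A_\mu \\
\phi^2 A_\nu & \phi^2
\end{pmatrix}
```

With $\phi$ held constant, the 5D vacuum Einstein equations reduce to the 4D Einstein equations sourced by the Maxwell stress tensor, plus Maxwell's equations for $A_\mu$.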
I have no time to go look at all those sources now, but having dabbled in nets since the late '80s myself, I remember vanishing gradients were sort-of a surprise, because everyone was under the impression that simple backpropagation should just work, and it didn't.
A lot of that early work you refer to was also mostly about linear transfer functions, and though the exact type of non-linearity doesn't matter, some of its properties do - and as I mentioned, sigmoids - which were all the rage in the '80s - are a dead end with the wrong kind of nonlinearity.
Nothing about the *structure* of multilayer models is new. But successfully training them - which didn't happen until Schmidhuber and Hinton (depends on who you ask ...) - is relatively new; and that advance is responsible for the term "deep learning".

We do not disagree about the details; but we do seem to disagree about the historical context and narrative.
Hi Rippling Customer,
Yesterday afternoon, Rippling learned that Silicon Valley Bank (SVB) had solvency challenges. We have been working with SVB to ensure timely payments to our customers’ employees. However, this morning we learned that the FDIC had stepped in and taken control of SVB.
We are reaching out to you because you have a payroll that has already been processed for 3/15/2023. Currently, these funds may be sitting with SVB. We are closely monitoring the FDIC takeover and what it means for this pay run. We ask that you please reach out to your bank and request that your bank return any ACH transactions debited from your account by Rippling into SVB under the premise that the transaction(s) are unauthorized, since the bank has ceased operations and is unable to honor the payments.
If the bank agrees to issue a return, you will then need to send a wire to Rippling for the full amount of the 3/15/2023 payroll run by Tuesday 3/14/2023 at 12 PM PST. This help center article has updated wire instructions. Your 3/15/2023 payroll will be marked as Non-sufficient Funds (NSF) on Rippling’s end, but as long as we have received the wire, Rippling will issue employee payments for 3/15/2023 via our new banking partner, JP Morgan Chase & Co.
If the bank does not agree to issue a return, we will follow up with additional instructions. If you have questions, please reach out to our support team.
Thanks, The Rippling Team
Rippling should be working with the FDIC to sort out these in-flight payments, not asking their customers to do this. When I reached out to their support team to ask what the additional instructions were, I couldn't get an answer. This situation sure doesn't look great.