ozankabak commented on SF payroll firm Rippling has to delay payouts after Silicon Valley Bank collapse   sfgate.com/tech/article/r... · Posted by u/Akharin
jmvoodoo · 3 years ago
We just received this from Rippling:

Hi Rippling Customer,

Yesterday afternoon, Rippling learned that Silicon Valley Bank (SVB) had solvency challenges. We have been working with SVB to ensure timely payments to our customers’ employees. However, this morning we learned that the FDIC had stepped in and taken control of SVB.

We are reaching out to you because you have a payroll that has already been processed for 3/15/2023. Currently, these funds may be sitting with SVB. We are closely monitoring the FDIC takeover and what it means for this pay run. We ask that you please reach out to your bank and request that your bank return any ACH transactions debited from your account by Rippling into SVB under the premise that the transaction(s) are unauthorized, since the bank has ceased operations and is unable to honor the payments.

If the bank agrees to issue a return, you will then need to send a wire to Rippling for the full amount of the 3/15/2023 payroll run by Tuesday 3/14/2023 at 12 PM PST. This help center article has updated wire instructions. Your 3/15/2023 payroll will be marked as Non-sufficient Funds (NSF) on Rippling’s end, but as long as we have received the wire, Rippling will issue employee payments for 3/15/2023 via our new banking partner, JP Morgan Chase & Co.

If the bank does not agree to issue a return, we will follow up with additional instructions. If you have questions, please reach out to our support team.

Thanks, The Rippling Team

ozankabak · 3 years ago
We bank with Mercury and, unsurprisingly, I couldn't get hold of them during business hours after receiving this email at 5:30 PM CET on a Friday. This email seems odd to me: Mercury shows a transaction posted as of March 9 for employee payments, and another posted as of March 10 for tax payments. Had I been successful in contacting Mercury, would they even have been able to reverse them?

Rippling should be working with the FDIC to sort out these in-flight payments, not asking their customers to do it. When I asked their support team what the additional instructions would be, I couldn't get an answer. This situation sure doesn't look great.


ozankabak commented on A Limitlessly Scalable Transaction System   arxiv.org/abs/2108.05236... · Posted by u/belter
ozankabak · 5 years ago
Does the set of validators really need to be static (or fixed-size)? I may be missing something obvious, but it seems like we can also support a dynamic set. Consider the following scheme:

- In addition to transaction data, each validator stores three sets: CURRENT, the current validator set; IN_PENDING, the set of clients who are to join the validator set; OUT_PENDING, the set of validators who are to leave the validator set.

- Validators support four additional requests: v_nominate, v_initialize, v_remove, v_eject.

- When a client wants to join the validator set, it sends the v_nominate request to all validators. Validators who agree add the client to IN_PENDING, sign the tuple (CURRENT, IN_PENDING) and reply.

- If the candidate client receives 2 * f + 1 signatures where f = (max |CURRENT| - 1) / 3 (maximization is over all responses), it sends the v_initialize request to all validators (along with the signatures). Validators receiving this request remove this candidate from IN_PENDING and add it to CURRENT.

- When a validator wants to remove a validator (can ask to remove itself) from the set, it sends the v_remove request to all validators. Validators who agree add the outgoing validator to OUT_PENDING, sign the tuple (CURRENT, OUT_PENDING) and reply.

- If the validator who originates the removal request receives 2 * f + 1 signatures where f = (max |CURRENT| - 1) / 3 (maximization is over all responses), it sends the v_eject request to all validators (along with the signatures). Validators receiving this request remove the outgoing validator from OUT_PENDING and CURRENT.

Wouldn't arguments similar to the ones in the article also work for showing consensus on these sets?
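For concreteness, the membership-change scheme above can be rendered as a toy Python sketch. The request names and the CURRENT / IN_PENDING / OUT_PENDING sets come from the description above; everything else (the Validator class, the stubbed-out "signatures", the in-process "broadcast") is my own assumption — real messages would be cryptographically signed and sent over a network:

```python
# Toy sketch of the dynamic-validator-set scheme described above.
# Signing and networking are stubbed: a "signature" is just the tuple
# the validator would have signed.

class Validator:
    def __init__(self, name, current):
        self.name = name
        self.current = set(current)   # CURRENT: active validator set
        self.in_pending = set()       # IN_PENDING: clients waiting to join
        self.out_pending = set()      # OUT_PENDING: validators waiting to leave

    def quorum(self):
        # 2f + 1 with f = (|CURRENT| - 1) // 3, as in the comment above
        f = (len(self.current) - 1) // 3
        return 2 * f + 1

    def v_nominate(self, client):
        # A validator that agrees adds the client to IN_PENDING and
        # "signs" (here: returns) the tuple (CURRENT, IN_PENDING).
        self.in_pending.add(client)
        return (self.name, frozenset(self.current), frozenset(self.in_pending))

    def v_initialize(self, client, signatures):
        # The candidate presents at least 2f + 1 signatures;
        # the client then moves from IN_PENDING into CURRENT.
        assert len(signatures) >= self.quorum()
        self.in_pending.discard(client)
        self.current.add(client)

    def v_remove(self, validator):
        self.out_pending.add(validator)
        return (self.name, frozenset(self.current), frozenset(self.out_pending))

    def v_eject(self, validator, signatures):
        assert len(signatures) >= self.quorum()
        self.out_pending.discard(validator)
        self.current.discard(validator)


# Join flow: the candidate collects signatures, then broadcasts v_initialize.
validators = [Validator(n, {"v0", "v1", "v2", "v3"})
              for n in ("v0", "v1", "v2", "v3")]
sigs = [v.v_nominate("v4") for v in validators]   # all four agree: 4 >= 2f+1 = 3
for v in validators:
    v.v_initialize("v4", sigs)
assert all(v.current == {"v0", "v1", "v2", "v3", "v4"} for v in validators)
```

The removal flow is symmetric: collect 2f + 1 signatures over (CURRENT, OUT_PENDING) via v_remove, then broadcast v_eject.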

(edited to fix formatting and typos)

ozankabak commented on Bloom Filters: More than a space-efficient hashmap   boyter.org/posts/bloom-fi... · Posted by u/boyter
thomasmg · 5 years ago
Sure. So far, there are only two Java implementations. One is using "Rank" and the other is using "Select":

https://github.com/FastFilter/fastfilter_java/blob/master/fa...

https://github.com/FastFilter/fastfilter_java/blob/master/fa...

It should be relatively easy to port it to other programming languages.

Compared to regular counting Bloom filters, there are some advantages (e.g. uses half the space, lookup is much faster, no risk of overflows). It has a disadvantage: add+remove are slower (currently). Cuckoo filters need less space, but otherwise advantages+disadvantages are the same.
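For reference, here is a minimal sketch of the regular counting Bloom filter being compared against, illustrating the counter-overflow risk mentioned above. The class and its saturation policy are my own illustration, not code from the linked project:

```python
import hashlib

class CountingBloomFilter:
    """Plain counting Bloom filter: each cell is a small counter rather
    than a single bit, which is what makes removal possible (and what
    doubles the space and introduces overflow risk)."""

    def __init__(self, m=1024, k=4, counter_bits=4):
        self.m, self.k = m, k
        self.max_count = (1 << counter_bits) - 1   # e.g. 15 for 4-bit counters
        self.counters = [0] * m

    def _positions(self, item):
        # Derive k cell indices from one hash of the item.
        h = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(h[4*i:4*i+4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            if self.counters[p] < self.max_count:  # saturate instead of overflowing
                self.counters[p] += 1

    def remove(self, item):
        for p in self._positions(item):
            # A saturated counter's true count is unknown, so it must
            # never be decremented -- it stays stuck at the maximum.
            if 0 < self.counters[p] < self.max_count:
                self.counters[p] -= 1

    def might_contain(self, item):
        return all(self.counters[p] > 0 for p in self._positions(item))

f = CountingBloomFilter()
f.add("apple")
assert f.might_contain("apple")
f.remove("apple")
assert not f.might_contain("apple")
```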

For more questions, just open an issue on that project (I'm one of the authors).

ozankabak · 5 years ago
Thank you!
ozankabak commented on Bloom Filters: More than a space-efficient hashmap   boyter.org/posts/bloom-fi... · Posted by u/boyter
thomasmg · 5 years ago
There are many alternatives to Bloom filters, but some variants of Bloom filters are still competitive. I'm one of the authors of some benchmarks for filters: https://github.com/FastFilter/fastfilter_cpp (this is based on the cuckoo filter benchmark) and https://github.com/FastFilter/fastfilter_java

For static sets (where you construct the filter once and then use it for lookup), blocked Bloom filters are the fastest, for lookup. They do need a bit more space (maybe 10% more than Bloom filters). Also very fast are binary fuse filters (which are new), and xor filters. They also save a lot of space compared to others. Cuckoo filters, ribbon filters, and Bloom filters are a bit slower. It's a trade-off between space and lookup speed really.

For dynamic sets (where you can add and remove entries later), the fastest (again for lookup) are "Succinct counting blocked Bloom filter" (no paper yet for this): they are a combination of blocked Bloom filters and counting Bloom filters, so lookup is identical to the blocked Bloom filter. Then cuckoo filters, and counting Bloom filters.
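A minimal sketch may help show why blocked Bloom filters are fast: every key hashes to a single block, and all k probe bits fall inside that block, so a lookup touches one word (one cache line in a real implementation). This toy Python version is my own illustration, not code from the benchmark repositories:

```python
import hashlib

class BlockedBloomFilter:
    """Blocked Bloom filter sketch: each key maps to one 64-bit block and
    all k probe bits live inside that block, so a lookup reads a single
    word. Real implementations use a 256- or 512-bit cache-line block."""

    def __init__(self, num_blocks=256, k=4):
        self.blocks = [0] * num_blocks
        self.num_blocks, self.k = num_blocks, k

    def _block_and_bits(self, item):
        h = hashlib.sha256(item.encode()).digest()
        block = int.from_bytes(h[:4], "big") % self.num_blocks
        bits = [h[4 + i] % 64 for i in range(self.k)]  # k positions in [0, 64)
        return block, bits

    def add(self, item):
        block, bits = self._block_and_bits(item)
        for b in bits:
            self.blocks[block] |= 1 << b

    def might_contain(self, item):
        block, bits = self._block_and_bits(item)
        word = self.blocks[block]          # a single word read
        return all(word & (1 << b) for b in bits)

bf = BlockedBloomFilter()
for word in ("foo", "bar", "baz"):
    bf.add(word)
assert all(bf.might_contain(w) for w in ("foo", "bar", "baz"))  # no false negatives
```

The succinct counting variant described above pairs each such block with compressed counters, keeping this one-word lookup while supporting removal.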

ozankabak · 5 years ago
Where can I find more information about the "Succinct counting blocked Bloom filter"? Can you point to an implementation or a document? Thanks.
ozankabak commented on Electromagnetism is a property of spacetime itself, study finds   sciencex.com/news/2021-07... · Posted by u/egfx
al2o3cr · 5 years ago
For one, it's going to produce a metric where all the diagonal elements are positive (or zero) - different from the -+++ or +--- signature of "normal" spacetime.

The paper's calculation reminds me of Kaluza-Klein theory, which uses a similar construction as part of extending the metric from four dimensions to five:

https://en.wikipedia.org/wiki/Kaluza–Klein_theory

ozankabak · 5 years ago
I was thinking about the signature issue as well. In flat space (i.e., the Minkowski metric), this would imply a constant four-potential with an imaginary 0th component, which I cannot make sense of.
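Spelling out the arithmetic behind this objection (my reading of it): if the metric is built as an outer product of a real four-potential, every diagonal entry is a square and hence non-negative, so matching the flat metric in the -+++ convention forces an imaginary time component:

```latex
% Assume g_{\mu\nu} \propto A_\mu A_\nu with A_\mu real.
% Then g_{00} = A_0 A_0 \ge 0, but in flat space
\eta_{00} = -1 \;\Rightarrow\; A_0 A_0 = -1 \;\Rightarrow\; A_0 = \pm i,
% i.e. the 0th component of the four-potential must be imaginary.
```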
ozankabak commented on Electromagnetism is a property of spacetime itself, study finds   sciencex.com/news/2021-07... · Posted by u/egfx
ozankabak · 5 years ago
IIUC the authors are saying that if we associate the metric with the four-potential via an outer product, they get a picture coherent with the current understanding of how electromagnetism "works" in GR under certain circumstances.

I can somewhat see how to interpret the mathematics in free space. But what about when there are massive bodies in the picture? They will result in a non-flat metric... does that imply they create their own electromagnetism?

ozankabak commented on What Gödel Discovered   stopa.io/post/269... · Posted by u/stopachka
ozankabak · 5 years ago
So if we have a theory expressive enough to make statements about ordinary (Peano) arithmetic, we can always form a self-referential statement within the framework of this theory which we can neither prove nor disprove. So far, so good. Here is my question: What happens if we restrict/weaken the theory to preclude self-referential statements? Obviously, we will lose our ability to express certain arithmetic statements which correspond to self-referential statements in the original theory. But what else? Is that the only class of statements we lose? Also, are there any other kinds of statements that still make the theory incomplete?
ozankabak commented on DeepMath Conference 2020 – Conference on the Mathematical Theory of DNN's   deepmath-conference.com/... · Posted by u/wavelander
beagle3 · 5 years ago
Indeed, this is all true. But do remember that the Kolmogorov-Arnold theorem says a 3-layer (n:2n+1:m) network is a universal continuous approximator (using an unknown neuron transfer function) -- people in the '80s were looking at 3-layer networks as sufficient partly because of that.

I have no time to go look at all those sources now, but having dabbled in nets since the late '80s myself, I remember vanishing gradients were sort-of a surprise, because everyone was under the impression that simple backpropagation should just work, and it didn't.

A lot of that early work you refer to was also mostly about linear transfer functions, and though the exact type of non-linearity doesn't matter, some of its properties do - and as I mentioned, sigmoids - which were all the rage in the '80s - are a dead end with the wrong kind of nonlinearity.

Nothing about the *structure* of multilayer models is new. But successfully training them - which didn't happen until Schmidhuber and Hinton (depends on who you ask ...) - is relatively new; and that advance is responsible for the term "deep learning".

We do not disagree about the details; but we do seem to disagree about the historical context and narrative.

ozankabak · 5 years ago
You are talking about Sprecher's refinement of the original Kolmogorov-Arnold theorem, right? This version, and its implications, have been a lingering question for me for quite a while. Are you aware of any research on 3-layer networks where the unknown transfer function is also learnable? I suspect such an approach does not result in good models (otherwise we would have heard about them!), but I cannot articulate why. Where exactly does the K-A reasoning fail when we try to apply it in practice?
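As a concrete (toy) version of the question: an n:2n+1:m network whose transfer function is itself parameterized, e.g. as a trainable linear combination of shifted tanh basis functions. Everything below is a hypothetical sketch of what "learnable transfer function" could mean, not a method from the literature:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_activation(coeffs, shifts):
    # "Learnable" transfer function: a linear combination of shifted
    # tanh basis functions. The coefficients would be trained alongside
    # the weights; the knot locations are kept fixed here.
    def phi(x):
        return sum(c * np.tanh(x - t) for c, t in zip(coeffs, shifts))
    return phi

n, m = 3, 1                      # input / output dimensions
hidden = 2 * n + 1               # the K-A-style n : 2n+1 : m shape
W1 = rng.normal(size=(hidden, n))
W2 = rng.normal(size=(m, hidden))
coeffs = rng.normal(size=5)      # trainable activation parameters
shifts = np.linspace(-2, 2, 5)   # fixed knot locations

phi = make_activation(coeffs, shifts)

def forward(x):
    # Single hidden layer with the shared parameterized activation.
    return W2 @ phi(W1 @ x)

y = forward(rng.normal(size=n))
assert y.shape == (m,)
```

The K-A theorem guarantees exact representation only for specific, generally highly non-smooth inner functions, which is one commonly cited reason constructions like this don't obviously inherit its power in practice.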

u/ozankabak

Karma: 28 · Cake day: October 21, 2012