You could think of it as "SICP for Prolog".
Has anyone come up with a version of ed that takes advantage of a modern screen, so it shows the full file in a pane on the side but you can still run ed commands on it?
Edit: I should have expected "vi" as an answer, although I can't fault the commenters below :P I was thinking of something a little less of a departure: literally just a command prompt at the bottom and a pane above it centered on the line you just operated on.
s/replace #/with $/
s#replace /#with _#
So now consider: most of the underlying loans have to default, and the recovery rate has to fall below battle-tested assumptions, before the top tiers get risky. That is very, very unlikely given how much the Fed and the US Govt. have done, and are committed to do, to stave off a severe depression / recession. It's more like people were killing it buying the safer parts at distressed prices as over-leveraged funds shed them on the back of margin calls in March.
> We already know that a significant majority of the loans in CLOs have weak covenants that offer investors only minimal legal protection; in industry parlance, they are “cov lite.” The holders of leveraged loans will thus be fortunate to get pennies on the dollar as companies default—nothing close to the 70 cents that has been standard in the past.
I'd be tempted to down-vote myself for snarky trolling, except that I work in the field of psychological research. Perhaps it is my bias, but many of the cognitive biases that came out of social-psychology research do not stand up to scrutiny, too frequently the result of bad statistical practice... at least as of two decades ago.
Does anyone have any recommendations?
For the classical problem there is the Remez exchange algorithm, with an extension to complex domains due to P.T.P. Tang in his 1987 PhD thesis at Berkeley. In theory both Remez and its complex extension are stable, but unfortunately the implementations my advisor and I are aware of seem to struggle with large-degree polynomials, where "large" means bigger than, say, n=45 -- errors begin to explode.
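For concreteness, here's a minimal sketch of the linear solve inside one classical Remez step (the real-valued case on [-1, 1], not the complex extension I'm actually after); exp as the target and a Chebyshev starting reference are just stand-ins. It also prints the condition number of the monomial-basis system, which degrades badly as the degree grows and is at least one way a naive implementation falls apart in that degree range.

    import numpy as np

    def remez_solve(f, ref):
        # One solve from the classical Remez exchange step on [-1, 1]:
        # given n+2 reference points, find degree-n coefficients c_0..c_n
        # and the levelled error E with  p(x_i) + (-1)^i E = f(x_i).
        n = len(ref) - 2
        V = np.vander(ref, n + 1, increasing=True)   # monomial basis 1, x, ..., x^n
        signs = (-1.0) ** np.arange(n + 2)
        M = np.column_stack([V, signs])              # unknowns: c_0..c_n and E
        sol = np.linalg.solve(M, f(ref))
        return sol[:-1], sol[-1], np.linalg.cond(M)

    # Chebyshev points as the starting reference; exp is just a placeholder target.
    for n in (10, 30, 60):
        ref = np.sort(np.cos(np.pi * np.arange(n + 2) / (n + 1)))
        c, E, cond = remez_solve(np.exp, ref)
        print(f"degree {n:2d}: levelled error {abs(E):.1e}, cond(M) {cond:.1e}")

In the real case the usual fix is a better basis (Chebyshev instead of monomials); how far that carries over to the complex setting is exactly the kind of question I'm chewing on.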
In any case, independently of this I've been learning more of the nitty-gritty details of deep learning for a project at work (I'm a SWE in my day job; the math is more moonlighting). To ground my efforts there, I've been exploring deep learning approaches to this problem of complex uniform approximation, implementing results from various papers, tweaking things for my use case, and coming up with questions. That's much of what I'm thinking about this week!
Also, I'll be having a half-day-long ADHD evaluation session on Friday -- so I'm a bit apprehensive about that.