- Enter the number of respirators in a country.
- Enter two fatality rates, with and without respirators (a rough sketch of how these could combine is below).
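For what it's worth, here is a minimal sketch of how such a feature could work; the function name, parameters, and numbers are all hypothetical, not taken from the actual model:

```python
def expected_deaths(severe_cases, respirators, cfr_with, cfr_without):
    """Hypothetical calculation: severe cases that get a respirator die at
    one rate, the overflow dies at a higher rate. Not the model's code."""
    ventilated = min(severe_cases, respirators)
    unventilated = severe_cases - ventilated
    return ventilated * cfr_with + unventilated * cfr_without

# Illustrative numbers only: 50,000 severe cases, 30,000 respirators,
# 2% fatality with a respirator, 10% without.
print(expected_deaths(50_000, 30_000, 0.02, 0.10))  # 2600.0
```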
Also, I don't understand why the peak number of hospitalizations would be so sensitive to the initial number of infections. That doesn't look right to me.
I've had a hard time finding hard figures for these numbers, and am trying to steer as far from speculation as possible.
Your second observation is a very good one. This is true, e.g., for the default intervention: adding initial infections has a similar effect to waiting, and delaying an intervention can have a tremendous effect (at least according to the model) on the course of the epidemic.
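To make that concrete, here is a toy SIR loop (not the actual simulator; all parameter values are made up for illustration) showing that seeding more initial infections and delaying the intervention push the peak up in much the same way:

```python
def peak_infected(i0, intervention_day, r0_before=2.5, r0_after=0.9,
                  population=330e6, infectious_days=10, days=365):
    """Toy discrete-time SIR model: R drops from r0_before to r0_after
    on intervention_day; returns the peak number of concurrent infections."""
    s, i = population - i0, float(i0)
    peak = i
    for day in range(days):
        r_t = r0_before if day < intervention_day else r0_after
        new_infections = (r_t / infectious_days) * i * s / population
        recoveries = i / infectious_days
        s, i = s - new_infections, i + new_infections - recoveries
        peak = max(peak, i)
    return peak

# 100x more seeds, or roughly a month's extra delay, raise the peak similarly.
print(f"{peak_infected(1_000, intervention_day=30):.2e}")
print(f"{peak_infected(100_000, intervention_day=30):.2e}")
print(f"{peak_infected(1_000, intervention_day=60):.2e}")
```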
Also, is there a reason why most optimization texts (like this one) only discuss point optimization and not path optimization (i.e., calculus of variations)?
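For anyone unfamiliar with the distinction being drawn, the two stationarity conditions look like this (standard textbook forms, nothing specific to the book in question):

```latex
% Point optimization: minimize f(x) over points x \in \mathbb{R}^n;
% a minimizer satisfies
\nabla f(x^\ast) = 0
% Path optimization (calculus of variations): minimize a functional
% J[y] = \int_a^b L(t, y, y')\,dt over functions y(t); a minimizer
% satisfies the Euler-Lagrange equation
\frac{\partial L}{\partial y} - \frac{d}{dt}\frac{\partial L}{\partial y'} = 0
```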
I mean, I know that there's a bias-variance tradeoff in stats and ML, but what does it mean in the context of an introduction to ML for physicists?
My guess is they mean they aren't going into as much detail on the ML, which means the reader may lack some knowledge (high bias) but won't miss the forest for the trees (low variance).
Anyone else care to speculate?
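For readers who haven't seen the literal version of the tradeoff, here's a quick numerical illustration (requires numpy; this is the standard statistical meaning, not whatever the book's authors intended figuratively):

```python
import numpy as np

# Fit polynomials of increasing degree to noisy samples of sin(2*pi*x) and
# estimate bias^2 and variance of the predictions over many training sets.
rng = np.random.default_rng(0)
x_test = np.linspace(0, 1, 50)
truth = np.sin(2 * np.pi * x_test)

for degree in (1, 4, 9):
    preds = []
    for _ in range(200):                                  # 200 noisy training sets
        x = rng.uniform(0, 1, 20)
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 20)
        preds.append(np.polyval(np.polyfit(x, y, degree), x_test))
    preds = np.array(preds)
    bias2 = np.mean((preds.mean(axis=0) - truth) ** 2)    # underfitting shows up here
    variance = np.mean(preds.var(axis=0))                 # overfitting shows up here
    print(f"degree={degree}  bias^2={bias2:.3f}  variance={variance:.3f}")
```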
The researchers fit a regression to predict word recall from high-frequency EEG activity when memorizing the word. We've known for several years that high-frequency activity predicts memory success, so this part isn't new.
In addition, several papers have tried to improve memory through high-frequency stimulation from brain implants, with various results. This paper proposes "closed-loop" stimulation, delivering stimulation only when the classifier predicts failure. They find that closed-loop is effective.
What the authors really want to claim is that closed-loop is more effective than open-loop, because otherwise their fancy "AI" classifier is useless. Surprisingly, this study does not compare closed-loop vs. open-loop.
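To make the missing comparison concrete, here is a hypothetical sketch of the two conditions (not the authors' code; the classifier and stimulate calls stand in for the real recording/stimulation pipeline):

```python
def closed_loop_trial(eeg_features, classifier, stimulate, threshold=0.5):
    """Stimulate only when the classifier predicts the word will be forgotten."""
    p_recall = classifier(eeg_features)   # predicted probability of later recall
    if p_recall < threshold:
        stimulate()

def open_loop_trial(trial_index, stimulate, every_n=2):
    """Stimulate on a fixed schedule, ignoring the EEG entirely."""
    if trial_index % every_n == 0:
        stimulate()
```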
> Put a bit more dramatically, [this monograph] will seek to show how problems that were once avoided, having been shown to be NP-hard to solve, now have solvers that operate in near-linear time, by carefully analyzing and exploiting additional task structure!
This is something I've noticed in my own research on inverse problems (signal recovery over the action of compact groups), and it's really quite mind-blowing. If you generate problem instances at random, they are, in general, intractable (the problem class is NP-hard). However, when the data is not random (i.e., there is some regularity in the generative process that produced it), there often turns out to be inherent structure that can be exploited to solve the problem quickly, even to its global optimum.
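A classic (and different) instance of the same phenomenon, offered only as an analogy to the inverse problems above: finding the sparsest solution of an underdetermined linear system is NP-hard in general, yet on structured instances a simple greedy method (orthogonal matching pursuit) recovers the exact global optimum almost instantly:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 60, 120, 5                       # 60 measurements, 120 unknowns, 5 nonzeros
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.normal(size=k)
y = A @ x_true

# Orthogonal matching pursuit: greedily pick the column most correlated with
# the residual, then re-fit by least squares. Exploits the sparsity structure.
residual, selected = y.copy(), []
for _ in range(k):
    selected.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, selected], y, rcond=None)
    residual = y - A[:, selected] @ coef

x_hat = np.zeros(n)
x_hat[selected] = coef
print("exact support recovered:", sorted(selected) == sorted(support.tolist()))
print("max coefficient error:", float(np.max(np.abs(x_hat - x_true))))
```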
I feel like future research will focus on finding the line that divides the "tractable" problems from the "intractable" ones.
Out of curiosity, what do the default values represent?
It seems the biggest factors here are the delay before intervention and how big your R0 is after the intervention. Anything over 1 for the US population and it seems to get ugly fast.
The default values are the best guesses for the parameters of the novel coronavirus, based on my reading of the literature.
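On the point above about the post-intervention R0: the reason anything over 1 "gets ugly fast" is that each generation of infections is then larger than the last, so the growth compounds. A back-of-the-envelope illustration (purely illustrative numbers, ~5-day generation interval assumed):

```python
# Starting from 10,000 active cases, project 20 infection generations
# (~100 days) ahead for a few values of the effective reproduction number.
for r_eff in (0.9, 1.0, 1.1, 1.5):
    cases_after = 10_000 * r_eff ** 20
    print(f"R={r_eff}: ~{cases_after:,.0f} new cases in generation 20")
```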