Readit News
gabrielgoh commented on Epidemic Calculator   gabgoh.github.io/COVID/in... · Posted by u/mxfh
nicpottier · 5 years ago
Super well done. Have you shared this with epidemiologists who have validated it? Hard to avoid some bugs no matter how careful you are!

What are the default values representing out of curiosity?

Seems the biggest factor here is the time to intervention and how big your R0 is after intervention. Anything over 1 for the US population and it seems to get ugly fast.

gabrielgoh · 5 years ago
(author here)

default values are the best guesses for the parameters of the novel coronavirus based on my reading of the literature

gabrielgoh commented on Epidemic Calculator   gabgoh.github.io/COVID/in... · Posted by u/mxfh
XaspR8d · 5 years ago
Not sure the UI is working the same for me -- dragging anywhere along the time axis, including the first death and hospitalization peak waypoints, scales the amount of time shown on the graph. It doesn't appear to affect the model.
gabrielgoh · 5 years ago
(author here) - that is correct, those aren't sliders, just waypoints
gabrielgoh commented on Epidemic Calculator   gabgoh.github.io/COVID/in... · Posted by u/mxfh
kangnkodos · 5 years ago
I would also like to see the following features:

- Enter the number of respirators in a country.

- Enter two fatality rates, with and without respirators.

Also, I don't understand why the peak number of hospitalizations would be so sensitive to the initial number of infections. That doesn't look right to me.

gabrielgoh · 5 years ago
good questions! (author here)

I've had a hard time finding hard figures for these numbers, and am trying to steer clear of speculation as much as possible.

Your second observation is a very good one. This is true, e.g. for the default intervention. Adding initial infections has a similar effect to waiting, and delaying an intervention can have a tremendous effect (at least according to the model) on the course of the epidemic
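
For anyone who wants to poke at that sensitivity outside the UI, here is a minimal sketch (not the calculator's actual code; the SEIR structure, the Euler stepping, and the parameter values below are assumptions chosen purely for illustration) of how much the peak moves when the intervention is delayed, which by the argument above is roughly equivalent to seeding more initial infections:

    def peak_infected(intervention_day, r0_before=2.2, r0_after=0.73,
                      t_incubation=5.2, t_infectious=2.9,
                      population=7_000_000, initially_exposed=1, days=300, dt=0.25):
        """Euler-stepped SEIR model with a step change in R0 on the intervention day."""
        sigma, gamma = 1.0 / t_incubation, 1.0 / t_infectious
        s, e, i, r = population - initially_exposed, float(initially_exposed), 0.0, 0.0
        peak = 0.0
        for step in range(int(days / dt)):
            r0 = r0_before if step * dt < intervention_day else r0_after
            beta = r0 * gamma
            new_exposed    = beta * s * i / population * dt
            new_infectious = sigma * e * dt
            new_recovered  = gamma * i * dt
            s -= new_exposed
            e += new_exposed - new_infectious
            i += new_infectious - new_recovered
            r += new_recovered
            peak = max(peak, i)   # track peak simultaneous infections
        return peak

    for day in (60, 75, 90):
        print(f"intervene on day {day:3d}: peak simultaneous infections ≈ {peak_infected(day):12,.0f}")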

gabrielgoh commented on Optimization: An Introduction (2006) [pdf]   www3.imperial.ac.uk/pls/p... · Posted by u/p0llard
mathnmusic · 7 years ago
Can't every constrained optimization problem be converted into an unconstrained one? For example, instead of saying "minimize f(x) with the constraint g(x) = 0", can't we just say "minimize f(x) + abs(g(x)) * 10^30"?

Also, is there a reason why most optimization texts (like this one) only discuss point optimization and not path optimization (i.e. calculus of variations) ?

gabrielgoh · 7 years ago
See the literature on exact penalty methods. I believe the short of it is yes, for a large class of problems this works, but the new problem will not necessarily be easier to solve. In the case of the abs(.) function, the nonsmoothness at 0 makes subgradient methods slow, and the large constant in front of the abs(.) might prove numerically unstable.
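
To make that concrete, here is a toy sketch (the problem instance, the penalty weight, and the step-size rule are all illustrative choices, not anything from the linked notes): a single equality constraint handled by an exact absolute-value penalty and minimized with plain subgradient descent. With a moderate weight the iterates approach the constrained minimizer; with an enormous weight the steps that make progress on the objective become vanishingly small.

    import numpy as np

    # Toy instance: minimize ||x - c||^2 subject to g(x) = sum(x) - 1 = 0,
    # recast as the unconstrained exact-penalty problem ||x - c||^2 + mu * |sum(x) - 1|.
    c = np.array([3.0, -1.0, 2.0])
    x_star = c - (c.sum() - 1.0) / len(c)   # constrained minimizer (projection onto the hyperplane)

    def solve_penalized(mu, steps=100_000):
        """Plain subgradient descent with a diminishing step size."""
        x = np.zeros_like(c)
        for t in range(1, steps + 1):
            sub = 2.0 * (x - c) + mu * np.sign(x.sum() - 1.0)  # subgradient of the penalized objective
            x = x - sub / (mu * np.sqrt(t))                     # step must shrink with mu to avoid blow-up
        return x

    for mu in (10.0, 1e6):
        x = solve_penalized(mu)
        print(f"mu = {mu:<10g} distance to constrained optimum = {np.linalg.norm(x - x_star):.4f}")
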
gabrielgoh commented on This is the most complete guide to finding anyone’s email   blurbiz.io/blog/the-most-... · Posted by u/smtd90
thinkloop · 7 years ago
How?
gabrielgoh · 7 years ago
email them
gabrielgoh commented on A high bias low-variance introduction to Machine Learning for physicists   physics.bu.edu/~pankajm/M... · Posted by u/yoquan
pure-awesome · 7 years ago
I get the idea that the title "High Bias, Low Variance" is some sort of reference or pun, but I'm not quite getting it...

I mean, I know that there's a bias-variance tradeoff in stats and ML, but what does it mean in the context of introduction to ML for physicists?

My guess is they mean they aren't going into as heavy detail in ML, which means the reader may lack some knowledge (high bias) but won't miss the forest/wood for the trees (low variance).

Anyone else care to speculate?

gabrielgoh · 7 years ago
I guess the author is not claiming a well-rounded (low bias) introduction to ML and statistics, but rather a highly biased and specialized (low variance) course that is tailored to the author's own interests and tastes.
gabrielgoh commented on Mathematical Illustrations: A manual of geometry and postscript   math.ubc.ca/~cass/graphic... · Posted by u/noch
amelius · 7 years ago
One thing that bothers me (Chapter 6, page 11) is how they approximate a circle using quadratic Bezier curves, and say that "an approximation by eight quadratic curves is just about indistinguishable from a true circle." However, if you look at the picture on the next page, you can clearly see the difference.
gabrielgoh · 7 years ago
the illustration shows an approximation with 4 quadratic curves. There's no point drawing the one with 8, as it would be indistinguishable, as the article points out.
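
For what it's worth, the gap is easy to quantify with a few lines (a quick sketch, not the book's code, using the standard construction where each segment's control point sits at the intersection of the endpoint tangents): the worst radial error comes out around 6% with 4 quadratic segments and about 0.3% with 8, which is why the 8-segment version reads as a true circle.

    import numpy as np

    def max_radial_error(n_segments, samples=1000):
        """Worst |distance from origin - 1| along one quadratic Bezier segment of a unit-circle approximation."""
        half = np.pi / n_segments                      # each segment spans 2*half radians
        p0 = np.array([np.cos(half), -np.sin(half)])   # segment endpoints on the unit circle
        p2 = np.array([np.cos(half),  np.sin(half)])
        p1 = np.array([1.0 / np.cos(half), 0.0])       # control point where the endpoint tangents meet
        t = np.linspace(0.0, 1.0, samples)[:, None]
        curve = (1 - t)**2 * p0 + 2 * (1 - t) * t * p1 + t**2 * p2
        return np.abs(np.linalg.norm(curve, axis=1) - 1.0).max()

    for n in (4, 8):
        print(f"{n} quadratic segments: max radial error ≈ {max_radial_error(n):.4f}")
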
gabrielgoh commented on A UI Experiment with the iPhone X’s Front-Facing Camera   fastcodesign.com/90162217... · Posted by u/dirtyaura
mabedan · 8 years ago
I'm pretty sure the identical effect can be (and has been) produced by the combination of normal front facing camera, accelerometer and gyroscope.
gabrielgoh · 8 years ago
I don't think so. This effect, at least as described, can change even as the phone remains static, e.g. if your head moves while the phone sits still on the table.
gabrielgoh commented on Machine learning algorithms used to decode and enhance human memory   wired.com/story/ml-brain-... · Posted by u/prostoalex
allenz · 8 years ago
This study is interesting, but it's not really AI and it's not really novel.

The researchers fit a regression to predict word recall from high-frequency EEG activity when memorizing the word. We've known for several years that high-frequency activity predicts memory success, so this part isn't new.

In addition, several papers have tried to improve memory through high-frequency stimulation from brain implants, with various results. This paper proposes "closed-loop" stimulation, delivering stimulation only when the classifier predicts failure. They find that closed-loop is effective.

What the authors really want to claim is that closed-loop is more effective than open-loop, because otherwise their fancy "AI" classifier is useless. Surprisingly, this study does not compare closed-loop vs. open-loop.

gabrielgoh · 8 years ago
could you clarify what you mean by an open-loop system, and why it must be compared with a closed-loop one?
gabrielgoh commented on Non-Convex Optimization for Machine Learning   arxiv.org/abs/1712.07897... · Posted by u/jonbaer
Xcelerate · 8 years ago
From the preface:

> Put a bit more dramatically, [this monograph] will seek to show how problems that were once avoided, having been shown to be NP-hard to solve, now have solvers that operate in near-linear time, by carefully analyzing and exploiting additional task structure!

This is something I've noticed in my own research on inverse problems (signal recovery over the action of compact groups). And it's really quite mind-blowing. What this means is that you can randomly generate problems, and these will be NP-hard to solve. However, assuming the problem is not randomly generated (i.e., there is some regularity in the generative process that produced the data), there often appears to be some inherent structure that can be exploited to solve the problem quickly to its global optimum.

I feel like future research will focus on finding the line that divides the "tractable" problems from the "intractable" ones.

gabrielgoh · 8 years ago
A simple example of this that has been shown rigorously is compressed sensing. Finding the sparsest vector subject to linear constraints Ax = b is NP-hard for general matrices, but it is solvable in polynomial time (via the l1 relaxation, i.e. basis pursuit) if A satisfies the restricted isometry property (RIP), which holds w.h.p. if, e.g., each entry of A is sampled from a Gaussian. Quite surprising!
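
A minimal sketch of that phenomenon, assuming numpy and scipy are available (the problem sizes and the LP encoding of the l1 relaxation below are just illustrative): generate a sparse vector, take random Gaussian measurements, and recover it by solving the convex basis pursuit program instead of the NP-hard l0 problem.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5                           # ambient dimension, measurements, sparsity
    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x_true[support] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix (satisfies RIP w.h.p.)
    b = A @ x_true

    # Basis pursuit: min ||x||_1 s.t. Ax = b, written as an LP over (x, t)
    # with -t <= x <= t and objective sum(t).
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[ np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    x_hat = res.x[:n]
    print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))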

u/gabrielgoh

Karma: 429 · Cake day: September 5, 2016