Readit News


twillmas commented on Inverting facial recognition models   blog.floydhub.com/inverti... · Posted by u/whatrocks
twillmas · 7 years ago
How does Apple improve its Bionic processor / Face ID if the embeddings never leave the Secure Enclave? Do they simply have to test this themselves at their offices?
twillmas commented on Spinning Up a Pong AI with Keras and OpenAI   blog.floydhub.com/spinnin... · Posted by u/whatrocks
twillmas · 7 years ago
If the in-game hard-coded AI just follows the ball, how do you ever beat it? I’d like to see the logic behind that 1970s opponent paddle AI, especially as it ramps up difficulty.
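For reference, the usual way a ball-following paddle stays beatable is a cap on how far it can move per frame; a minimal sketch (illustrative only, not the actual 1970s circuit logic, and `max_speed` is a made-up parameter for the idea):

```python
def update_ai_paddle(paddle_y, ball_y, max_speed):
    """Move the paddle toward the ball, limited to max_speed pixels per frame.

    Difficulty ramps by raising max_speed; the AI stays beatable because a
    ball moving vertically faster than max_speed (e.g. after a steep
    deflection off the paddle's edge) can outrun it.
    """
    delta = ball_y - paddle_y
    # clamp the step so the paddle can't teleport straight to the ball
    step = max(-max_speed, min(max_speed, delta))
    return paddle_y + step

# ball 60 px below the paddle, speed cap 5: the paddle closes only 5 px
print(update_ai_paddle(100, 160, 5))  # → 105
```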
twillmas commented on Transfer learning in low-data environments with FloydHub, fast.ai, and PyTorch   blog.frame.ai/learning-mo... · Posted by u/robbiemitchell
twillmas · 7 years ago
Looks like this task focused on binary sentiment analysis (positive or negative movie reviews) - have you tried this on something with a broader potential output space? This seems relevant for what you’re calling “neural tags” on your client’s customer conversations, which look more open-ended than simply “positive” or “negative”.
twillmas commented on Spinning Up a Pong AI with Deep Reinforcement Learning   blog.floydhub.com/spinnin... · Posted by u/whatrocks
twillmas · 7 years ago
I’d like to see this AI hooked up to an actual Atari machine somehow. Has anyone tried something like that? Could the model process the frames quickly enough to move a robotic arm up or down on a joystick?
twillmas commented on Reading minds with deep learning   blog.floydhub.com/reading... · Posted by u/whatrocks
twillmas · 7 years ago
Curious about differences in brain waves for desire versus intent. I might desire to say something awful to that person who just honked at me, but I probably shouldn’t - and right now wouldn’t. Would a BCI system be able to tell the difference?
twillmas commented on Coding the History of Deep Learning   blog.floydhub.com/coding-... · Posted by u/saip
dpcx · 8 years ago
This seems like a great introduction to the history. I have a problem with it, though.

In the first example, the method compute_error_for_line_given_points is called with values 1, 2, [[3,6],[6,9],[12,18]]. Where did those values come from?

Later in that same example, there is an "Error = 4^2 + (-1)^2 + 6^2". Where did those values come from?

Later, there's another form: "Error = x^5 - 2x^3 - 2". What about these?

There seem to be magic formulae everywhere, with no real explanation in the article about where they came from. Without that, I have no way of actually understanding this.

Am I missing something fundamental here?
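For what it's worth, functions with this name in gradient-descent tutorials conventionally compute the mean squared error of a candidate line y = m*x + b over the points. A minimal sketch, assuming the first two arguments are an intercept b and a slope m (the article would need to confirm the actual argument order and whether its "4^2 + (-1)^2 + 6^2" uses the same convention):

```python
# Hypothetical reconstruction: the article doesn't show the body, but
# this is the standard mean-squared-error form for fitting a line.
def compute_error_for_line_given_points(b, m, points):
    total_error = 0.0
    for x, y in points:
        # squared vertical distance between the point and the line y = m*x + b
        total_error += (y - (m * x + b)) ** 2
    return total_error / len(points)

print(compute_error_for_line_given_points(1, 2, [[3, 6], [6, 9], [12, 18]]))
# → 22.0
```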

twillmas · 8 years ago
I'd also like to see more of a "teaching" post that can walk through the math incrementally.

Many of the deep learning courses assume "high school math", but my school must have skipped matrices, so I've been watching Khan Academy videos.

Are there any good posts / books on walking through the math of deep learning from a true beginner's perspective?
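For anyone in the same spot: the matrix piece those courses assume mostly boils down to the matrix-vector product inside each layer. A minimal NumPy sketch (the numbers and names here are illustrative, not from any particular course):

```python
import numpy as np

# One dense layer: y = W @ x + b.
# Each row of W holds the weights of one output neuron.
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])   # two input features
b = np.array([0.5, -0.5])  # one bias per output neuron

# Matrix-vector product: row i of W dotted with x, plus bias i.
y = W @ x + b
print(y)  # → [3.5 6.5]
```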

u/twillmas

Karma: 65 · Cake day: September 21, 2017