If you already know a bit about Machine Learning, don't read this; you will gain nothing.
If you don't already know about Machine Learning, don't read this either; you will not learn anything here.
I have no idea how this can be trending. Over-simplified, generalized bs.
> If you already know a bit about Machine Learning, don't read this; you will gain nothing. If you don't already know about Machine Learning, don't read this either; you will not learn anything here. I have no idea how this can be trending. Over-simplified, generalized bs.
>> Figure-2 shows that performance of deep learning is much better than non-deep learning algorithm.
This is a nonsensical over-generalization.
>> In addition, it is automatically do the feature extraction.
At the cost of interpretability. Let's not even mention the dreadful nights of tweaking parameters (such as dropout probability, activation function, network architecture, learning rate, optimization function, various pre-processing tricks, pre-training to warm-start, convolution parameters, maxpool parameters, and so much more).
Whenever I read about deep learning, I immediately think about Microsoft's Tay AI nazi chatbot disaster.
Anyway, as a non-expert I've always thought that inferential statistical methods only work adequately when correct assumptions are made about the underlying distribution, and hence about the underlying analytical model and/or causal relationships, and I wonder how deep learning approaches deal with that issue.
How do humans deal with the same question? Context. Understanding. Background knowledge. The go-to rebuke of the smug liberal cognoscenti: 'educate yourself'.
To be able to interact appropriately with humans, AI needs to understand not only its own subject matter but the preconceptions and prejudices of the humans with whom it interacts.
What BinRoo called "parameters" are usually known as "hyperparameters" (parameters are the settings that are learned through gradient descent). There are methods to learn hyperparameters automatically [1], but they're extremely computationally intensive. Even with a single set of hyperparameters, it can take hours to days for training to converge. Now imagine having to run hundreds of experiments to find the best setting.
Not only can it be done, it even works by random trial. Sophisticated optimization techniques are only about 2x faster, so if you have a cheap GPU you can just run 2x more random trials and still get your hyperparameters fine-tuned.
If you want to apply optimization, you can use one of the popular libraries like hyperopt and MOE.
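Random search over hyperparameters, as described above, can be sketched in a few lines of plain Python. Note that the objective function here is made up so the sketch is self-contained; in a real setting each evaluation would be a full training run returning a validation loss:

```python
import random

# Hypothetical stand-in for "train a network and return validation loss".
# A smooth fake function keeps the sketch runnable; the best settings are
# (by construction) learning_rate = 0.01, dropout = 0.5.
def validation_loss(learning_rate, dropout):
    return (learning_rate - 0.01) ** 2 + (dropout - 0.5) ** 2

random.seed(0)
best = None
for _ in range(100):  # each trial = one full training run in real life
    trial = {
        "learning_rate": 10 ** random.uniform(-4, -1),  # log-uniform draw
        "dropout": random.uniform(0.0, 0.9),
    }
    loss = validation_loss(**trial)
    if best is None or loss < best[0]:
        best = (loss, trial)

print(best)  # (best loss found, hyperparameters that produced it)
```

Libraries like hyperopt wrap exactly this loop (plus smarter-than-random proposal strategies) behind an optimizer interface.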
Good question! Brute-force searching over a parameter's values is a start. But you see the issue here, right? It'll take a longass™ time. Lucky for us, the weight parameters of a neural network can be learned using back-propagation (or other methods), which speeds up the search so we don't need to brute-force all possibilities. That same technique is not always applicable to hyper-parameters.
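The contrast can be sketched with a toy one-weight problem (everything here is made up for illustration; a real network has millions of weights, which is exactly why the grid approach breaks down):

```python
# Toy loss over a single weight w: loss(w) = (w - 3)^2, minimized at w = 3.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):            # analytic gradient: d/dw (w - 3)^2 = 2(w - 3)
    return 2.0 * (w - 3.0)

# Brute force: evaluate the loss on a fine grid. Fine for one weight,
# hopeless when the search space has millions of dimensions.
grid = [i * 0.001 for i in range(10000)]   # 10,000 evaluations
w_brute = min(grid, key=loss)

# Gradient descent: follow the slope downhill instead.
w = 0.0
for _ in range(100):                       # 100 gradient steps
    w -= 0.1 * grad(w)

print(w_brute, w)   # both land close to 3.0
```

For hyperparameters there is usually no usable gradient of the validation loss with respect to, say, the network architecture, which is why the cheap downhill trick above does not transfer.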
> I know nothing about deep learning but I'm curious. Why can't tweaking parameters be automated? Why is human intuition necessary here?
ok, well, imagine (for simplicity) that you are standing on a 3d surface with mountains and valleys and such. you can only see so far. you need to walk/run etc. to get to a pot of gold which sits at the global minimum of this landscape.
what do you do? one option is gradient descent :) start walking somewhere, and see if your height, relative to your previous position, is decreasing. if it is, then maybe you are on the right track...
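The walker analogy can be sketched numerically (this particular bowl-shaped surface, the step size, and the step count are all made up for illustration). "You can only see so far" becomes probing the height right around your feet with finite differences:

```python
# The "landscape": a 2-D surface whose lowest point (the pot of gold)
# is at (1, -2) by construction.
def height(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def local_slope(x, y, eps=1e-5):
    # "Look around where you stand": estimate the slope numerically
    # from heights a tiny step away in each direction.
    dx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
    dy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
    return dx, dy

x, y = 5.0, 5.0          # start somewhere on the landscape
for _ in range(200):
    dx, dy = local_slope(x, y)
    x -= 0.1 * dx        # step downhill; 0.1 plays the role of the learning rate
    y -= 0.1 * dy

print(round(x, 3), round(y, 3))   # ends up near (1, -2)
```

On this convex bowl the walker always finds the gold; the catch in real deep learning is that the landscape has many valleys, so where you start and how big your steps are matter a great deal.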
indeed. much better off reading this instead: http://karpathy.github.io/neuralnets/
First they come with "Well, sounds easy," and suddenly: "wait, what?!"
Deep learning is not "better" than other forms of algorithmic prediction. There's a best tool for every job.
[1] https://papers.nips.cc/paper/4443-algorithms-for-hyper-param...