** Please bear with me if this has already been discussed or isn't related **
With the proliferation of DL and LLMs, along with near-unlimited compute, energy, and bandwidth, do we still need classical ML approaches for solving problems? Is DL/NN going to take over everything?
If Domain Expertise + Feature Engineering + an ML model can get you 90% of the way there, and it runs on a tiny cloud instance that takes 30 minutes to train, is a DL-based approach that pushes you to 91% worth it from an ROI standpoint if it takes a 4xGPU cluster 2 days to train, not to mention inference costs? Especially if you need to explain what the model is doing?
The above is exactly the situation I'm in now at my job. I'm on the "get useful stuff to production so we can save money" side of things, and we have R&D teams who try to approach the same problems using DL and all the latest methods. At least for the use cases our team focuses on, they haven't been able to do more than set $$$ on fire via GPUs. For us, Domain Knowledge + Good Data Engineering is the secret.
I think ML is going to be around for a long time because it works, even though DL is dominating the news right now. Just because a neurologist can also diagnose and treat common medical conditions (e.g., pneumonia), that doesn't mean we need every doctor to be a neurologist.
To me, ML isn't about deep learning vs. not deep learning. It's fundamentally about the statistical formulation of a business problem.
It's about how you evaluate a model, how you formulate business tasks into an objective function, how you understand and develop training data, and whether the features actually measure what's important in the domain.
There are problems where classical ML works fine and if it works, why change it?
In text classification it depends on the problem, but often the old methods work very well and there is not a lot of room for neural methods to do better.
For images or audio, however, I think a deep network would almost always be in the picture.
Often people use a pretrained neural network to make an embedding and then use classical ML methods to make a classifier that works on that embedding.
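That embed-then-classify pattern can be sketched as below. The texts, labels, and `embed()` function here are invented for illustration: in practice `embed()` would call a pretrained encoder (e.g. a sentence-embedding model), but this stand-in returns stable pseudo-random vectors so the example runs with no model download.

```python
# Sketch: pretrained-embedding features + a classical ML head.
import zlib

import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(texts, dim=384):
    """Stand-in for a pretrained encoder: deterministic fake vectors
    keyed on the text via crc32, so results are reproducible."""
    vecs = []
    for t in texts:
        rng = np.random.default_rng(zlib.crc32(t.encode()))
        vecs.append(rng.normal(size=dim))
    return np.array(vecs)

train_texts = ["refund my order", "item arrived broken",
               "love the product", "great service"]
train_labels = ["complaint", "complaint", "praise", "praise"]

# Classical classifier trained on the (frozen) embeddings.
clf = LogisticRegression().fit(embed(train_texts), train_labels)
preds = clf.predict(embed(["please refund my order"]))
```

The neural network only supplies the representation; everything downstream (training, tuning, evaluation) is ordinary classical ML.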
The data prep and evaluation process is very much the same no matter what kind of model you are using.
This couldn't be further from the truth. NLP/text classification has seen bigger model improvements from NNs than any other field.
You might have some domain where the new models work for you and I'd love to hear you talk about it.
I see a lot of papers go by on arXiv, and also blog posts by data science people, and I would say that the behavior of a classifier can be limited by many things. For the most part, bag-of-words classifiers do very well at classifying topics, because topics involve very different vocabulary. They do not do so well at sentiment analysis, where you have to know that "not good" = "bad".
I worked at one place that had a CNN classifier that could classify random snippets as "address", "full name", etc., but it wasn't able to learn how to compute credit card checksums.
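That checksum is presumably the Luhn algorithm, which illustrates the point nicely: it's an exact arithmetic rule that's a few lines of code by hand, but hard for a network to induce from labeled examples.

```python
def luhn_valid(number: str) -> bool:
    """Luhn check: from the right, double every second digit,
    subtract 9 from any double over 9, sum everything,
    and require the total to be divisible by 10."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True  (a standard test number)
print(luhn_valid("4111111111111112"))  # False (last digit changed)
```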
For some problems 90% accuracy is very bad (e.g. predicting an event that happens 1 time in 10, like a headline getting a few comments on HN -- a fancy classifier could probably do better than my simple classifier, but it is not going to put up a dramatically better AUC because of the fuzziness of the problem). Even crisper concepts get controversial around 1 time in 20.
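Concretely, for a 1-in-10 event, a degenerate model that always predicts "no" already scores 90% accuracy while its AUC is chance level -- which is why accuracy alone is the wrong yardstick here. A minimal demonstration on made-up labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([1] * 10 + [0] * 90)  # event fires 1 time in 10
y_score = np.zeros(100)                 # degenerate model: always says "no"

acc = accuracy_score(y_true, (y_score > 0.5).astype(int))
auc = roc_auc_score(y_true, y_score)
print(acc, auc)  # 0.9 0.5 -- high accuracy, chance-level AUC
```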
My simple classifier quickly learns that I like articles about classifiers and don't like articles about theoretical CS, but it struggles to tell that I like the NFL and hate the Premier League. A fancy classifier could do better, and I will give one a chance because I have the data to do it with.
With the simple classifier it is easy to do cross-validation, parameter tuning, and such, but people publishing results on deep models usually do not publish error bars, do not understand how the quality of the model varies from run to run, etc. Even if you ask ChatGPT to do it, you will need to supply a large number of test cases to prove it gets the right result.
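Reporting that run-to-run variation is cheap with classical models. A sketch on a toy dataset (the data and numbers here are illustrative, not from any real experiment):

```python
# Cross-validated score with a spread -- the kind of error bar the
# comment wishes deep-model papers reported.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=5, scoring="roc_auc")
print(f"AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```

The whole loop runs in seconds on a laptop, which is exactly why the "simple classifier" side of this thread can afford proper error bars.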
I think one thing to keep in mind is that there are specific use cases where the cost of using DL isn't worth the improvement in accuracy (if there is one) from a business ROI perspective.
I know somebody who works in the insurance industry on a text classification use case. The business impact of this use case is important as it's used as part of the claims process. The team he's on has tried a lot of different things, but feature engineering + domain expertise + a particular tree ML model has provided the best performance for the lowest overall cost. They are very open to trying new things, but a DL approach simply hasn't been worth it.
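The shape of that pipeline -- hand-engineered features feeding a tree ensemble -- can be sketched as below. Every feature, label, and number here is invented for illustration; the original comment doesn't say which tree model or features the team actually uses.

```python
# Hypothetical sketch of "feature engineering + a tree model" on claims data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.lognormal(8, 1, n),     # claim_amount (engineered feature)
    rng.integers(1, 3650, n),   # days_since_policy_start
    rng.poisson(0.5, n),        # count of a domain-specific keyword
])
y = (X[:, 2] > 0).astype(int)   # toy label tied to the keyword feature

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))          # trains to ~1.0 on this easy toy data
```

Models like this train in seconds on a CPU and expose feature importances, which helps with the explainability requirement mentioned upthread.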
That said, classical ML is pretty saturated as a field of study. People work on uncertainty quantification, etc., but it's unclear which numbers people would want to improve.
It will only take over the cases where you have vast swaths of data, don't have reasonable preprocessing approaches that simplify the task, and don't need statistical guarantees.