Readit News
Posted by u/Sanej 3 years ago
Classical ML Still Relevant?
** Please bear with me if this has already been discussed or isn't relevant here **

With the proliferation of DL and LLMs, along with near-unlimited compute, energy, and bandwidth, do we still need classical ML approaches for solving problems? Is DL / NN going to take over everything?

apohn · 3 years ago
One of the ways I think about this type of problem is by asking "You want to use computation to extract a signal from this data. What's that signal worth to you in business ROI dollars?"

If Domain Expertise + Feature Engineering + ML model can get you 90% of the way there, and it runs on a tiny cloud instance that takes 30 minutes to train, is a DL-based approach that pushes you to 91% worth it from an ROI standpoint if it takes a 4xGPU cluster 2 days to train, not to mention inference costs? Especially if you need to explain what the model is doing?

The above is exactly the situation I'm in right now at my job. I'm on the "Get useful stuff to production so we can save money" side of things, and we have R&D teams who try to approach the same problems using DL and all the latest methods. At least for the use cases our team focuses on, they haven't been able to do more than set $$$ on fire via GPUs. For us, Domain Knowledge + Good Data Engineering is the secret.

I think classical ML is going to be around for a long time because it works, even though DL is dominating the news right now. Just because a neurologist can also diagnose and treat common medical conditions (e.g., pneumonia), that doesn't mean we need every doctor to be a neurologist.

Sanej · 3 years ago
this is awesome! thanks
Salgat · 3 years ago
Classical ML is still the dominant form of ML, and is preferred for most forms of tabular data (think spreadsheets). It's just far faster and often more effective than deep learning. Deep learning's greatest strength is that it can do the feature generation for you, which is great for more abstract data inputs such as pixel arrays and word sequences. Deep learning receives a lot more attention because it's doing things that normally would require a human to do.
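A minimal sketch of what "classical ML on tabular data" typically looks like, using scikit-learn. The dataset here is synthetic and purely illustrative; the point is that a tree ensemble trains in seconds on a laptop with no feature learning required:

```python
# Sketch: a classical-ML baseline for tabular data (synthetic, illustrative dataset).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for a spreadsheet-like table: 1000 rows, 20 numeric columns.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0)  # trains in seconds on a CPU
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```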
Sanej · 3 years ago
+1
softwaredoug · 3 years ago
The "model" is the boring part of ML.

ML isn't deep learning or not deep learning. To me, it's fundamentally about the statistical formulation of a business problem.

It's about how you evaluate a model, formulate business tasks into an objective function, understand and develop training data, and whether the features actually measure what's important in the domain.

Sanej · 3 years ago
awesome!
PaulHoule · 3 years ago
I don't see a discontinuity.

There are problems where classical ML works fine, and if it works, why change it?

In text classification it depends on the problem but often the old methods work very well and there is not a lot of room for neural methods to do better.

For images or audio however I think a deep network would almost always be in the picture.

Often people use a pretrained neural network to make an embedding and then use classical ML methods to make a classifier that works on that embedding.

The data prep and evaluation process is very much the same no matter what kind of model you are using.

fdgsdfogijq · 3 years ago
"text classification it depends on the problem but often the old methods work very well and there is not a lot of room for neural methods to do better."

This couldn't be further from the truth. NLP/text has seen bigger model improvements from NNs than any other field.

PaulHoule · 3 years ago
In the sense of GPT-3. I guess you can ask GPT-3 to classify things and it gets the right answer... Sometimes.

You might have some domain where the new models work for you and I'd love to hear you talk about it.

I see a lot of papers go by on arXiv, and also blog postings by data science people, and would say that the behavior of a classifier can be limited by many things. For the most part bag-of-words classifiers do very well at classifying topics, because topics involve very different vocabulary. They do not do so well at sentiment analysis, where you have to know "not good" = "bad".
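A toy sketch of that kind of bag-of-words topic classifier with scikit-learn. The corpus and labels are made up for illustration; the point is that distinct topic vocabularies make this easy for a linear model:

```python
# Sketch: bag-of-words topic classification on a toy, illustrative corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the team won the championship game",
    "the quarterback threw a touchdown pass",
    "the senate passed the new spending bill",
    "the election results were announced today",
]
labels = ["sports", "sports", "politics", "politics"]

# TF-IDF bag of words + linear classifier: topics have very different vocabulary.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["quarterback threw a touchdown"]))
# A plain bag of words still can't see that "not good" means "bad".
```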

I worked at one place that had a CNN classifier that could classify random snippets as "address", "full name", etc. but it wasn't able to learn how to calculate credit card checksums.
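The checksum in question is presumably the Luhn check (an assumption; the comment doesn't name it). It's a good example of why that CNN struggled: a deterministic arithmetic rule that's a few lines of code but very hard for a network to induce from labeled snippets alone:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, subtract 9
    when the result exceeds 9, and check the total is divisible by 10."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # the canonical valid Luhn test number
```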

For some problems 90% accuracy is very bad (e.g. predicting an event that happens 1 in 10 times, like a headline getting a few comments on HN -- a fancy classifier could probably do better than my simple classifier, but it is not going to put up a dramatically better AUC because of the fuzziness of the problem). Even crisper concepts get controversial around 1 time in 20.

My simple classifier is fast to learn that I like articles about classifiers and don't like articles about theoretical CS, but it struggles to tell that I like the NFL and hate the Premier League. A fancy classifier could do better, and I will give one a chance because I have the data to do it with.

With the simple classifier it is easy to do cross-validation, parameter tuning, and so on, but people publishing results on deep models usually do not publish error bars and do not understand how the quality of the model varies from run to run. Even if you ask ChatGPT to do it, you will need to supply a large number of test cases to show it gets the right answer.
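As a sketch of how cheap that rigor is with a simple classifier: cross-validation with a mean and spread is a couple of lines in scikit-learn (synthetic data here, purely illustrative):

```python
# Sketch: with a cheap classifier, cross-validated error bars are nearly free.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
# The mean +/- spread is exactly what deep-model writeups often omit.
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```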

apohn · 3 years ago
>This couldn't be further from the truth.

I think one thing to keep in mind is that there are specific use cases where the cost of using DL isn't worth the improvement in accuracy (if there is one) from a business ROI perspective.

I know somebody who works in the insurance industry on a text classification use case. The business impact of this use case is important, as it's used as part of the claims process. The team he's on has tried a lot of different things, but feature engineering + domain expertise + a particular tree-based ML model has provided the best performance for the lowest overall cost. They are very open to trying new things, but a DL approach simply hasn't been worth it.

alpineidyll3 · 3 years ago
Basically any case with small data: anything with fewer than 8,000 points or so. It's a struggle to avoid baking bias into a deep model with such tiny data.

That said, it's pretty saturated as a field of study. People work on uncertainty quantification, etc., but it's unclear which numbers people would want to improve.

jononor · 3 years ago
The combination can be very useful, for example for transfer learning when working with low-resource datasets/problems. Use a deep neural network to go from high-dimensionality data to a compact fixed-length vector, basically doing feature extraction. Such networks are increasingly trained on large amounts of unlabeled data using self-supervision. Then use a simple classical model like a linear model, Random Forest, or k-nearest-neighbours to build a model for the specialized task of interest, using a much smaller labeled dataset. This is relevant for many tasks around sound, images, and multivariate timeseries. Probably also NLP (not my field).
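A minimal sketch of that pattern: frozen pretrained encoder -> compact embedding -> simple classical classifier on a small labeled set. Here `embed` is a stand-in for a real pretrained network (e.g. a self-supervised audio or image encoder); it's just a fixed random projection so the sketch runs without downloading any weights:

```python
# Sketch: pretrained-encoder embeddings + a classical classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
projection = rng.normal(size=(256, 32))  # 256-dim "raw" input -> 32-dim embedding

def embed(raw):
    """Stand-in for a frozen pretrained feature extractor."""
    return raw @ projection

# Small labeled dataset: two classes with different raw-signal statistics.
raw_a = rng.normal(loc=0.0, size=(20, 256))
raw_b = rng.normal(loc=3.0, size=(20, 256))
X = embed(np.vstack([raw_a, raw_b]))
y = [0] * 20 + [1] * 20

# A simple k-NN on the embeddings stands in for the "classical model" step.
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(embed(rng.normal(loc=3.0, size=(1, 256)))))
```

In practice the projection would be replaced by a real encoder's forward pass, with the classical model fit on its outputs.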
Sanej · 3 years ago
cool!
PartiallyTyped · 3 years ago
> Is DL / NN going to take over everything?

It will only take over the cases where you have vast swaths of data, don't have reasonable preprocessing approaches that simplify the task, and don't need statistical guarantees.