malkia · 3 years ago
Some years ago, I worked on a team called "Ads Human Eval" - we had raters hired to do A/B testing for ads. They evaluated questionnaires carefully crafted by our linguists, which were then analyzed by our statisticians, who provided feedback to the (internal) group that wanted to learn more.

So the best experience was this internal event that we had, where the raters would say that a certain ad would not fare well (long term) while the initial automated metrics were showing the opposite (short term). We'd gather at this event, and people would "debug" these cases and try to find where the differences were coming from.

Then we had to help another group, where ML failed miserably at detecting ads that shouldn't have been shown on specific media, and raters came to help by giving the correct answers.

The one thing that I've learned is that humans are not going to be replaced by AI any time soon, and I've been telling my folks, friends, and anyone else (new-born luddites) that automation is not going to fully replace us. We'll still be needed as teachers, evaluators, fixers, tweakers/hackers - e.g. someone saying to the machine/AI: this is right, this is not, this needs adjustment, etc.

Maybe machines are going to take over us one day, but until then, I'm not worried...

(I've also learned that I knew nothing about statistics, and how valuable linguists are when it comes to forming clear, concise, and non-confusing (no double meaning) questions.)

Melatonic · 3 years ago
I don't think most people are arguing that machines will replace everyone anytime soon - it's that they will replace a huge portion of people. If one person can do the job of 10,000 by being the tweaker/approver of an advanced AI, that is still 9,999 jobs eliminated. That might be hyperbole (you will still probably need people to support that system).
malkia · 3 years ago
I agree, but it's true that some jobs simply should not exist.

To this day, if I go to our airport in Sofia (Bulgaria) and my baggage is over the limit of 20 (or was it 25?) kg, I have to go to another counter, pay for it, and come back. Why? Bureaucracy - not only do I have to do it, I'm also slowing down everyone waiting behind me (it's like 25-50 meters from one place to the other).

Unlike Frankfurt, Munich, or Heathrow, where I can pay that fee right on the spot.

Some jobs simply should not exist.

stackbutterflow · 3 years ago
You don't even need to go as high as 10,000. Imagine someone suddenly doing the job of 10 people - that's entire teams being decimated. Go to a job board and imagine 9 out of 10 job postings not existing. How much harder it would be to find another job.
sacrosancty · 3 years ago
Yeah, machines did that to laborers and computers did that to clerks, yet people still have jobs.
threeseed · 3 years ago
> it is that they will replace a huge portion of people

And not just any people. Lower to middle class, blue collar people.

The type who are the least able to travel for work, most likely to have families, least able to transition between careers etc. And crucially in many countries the people who often decide elections.

rusticpenn · 3 years ago
and new jobs and industries will be created.
jrumbut · 3 years ago
> automation is not going to fully replace us

You say that the AI gave a bad answer, but it did give an answer right? Really fast? And it was cheaper than convening the panel of experts?

That's the fear of AI replacing humans. It's not that it works so well, it's that it works poorly (but fast and cheap).

malkia · 3 years ago
Good point, and there were several remarkable cases in the past years. It also really depends on what data that AI was trained on.
bergenty · 3 years ago
It doesn't even work that poorly. DALL-E 2 came up with better designs for my mobile app than anything mid-tier designers on Upwork have.
malkia · 3 years ago
If you wonder why we still need humans, check this out - https://www.google.com/search?q=beverly+hills+properties - spot the issue? Maybe an AI could've caught it... or maybe lots of Mechanical Turk-like folks who give feedback on things...
cj · 3 years ago
While not exactly aligned with the research, I've been surprised by how poor the Nest Thermostat's learning feature is.

The main selling point for Nest is having a "learning thermostat". Perhaps my schedule is just not predictable enough, but the auto-generated temperature schedules it produces after its "learning" period are not even close to what I would manually set up on a normal thermostat.

Maybe I'm just an "edge case" or part of the "long tail"

Slackwise · 3 years ago
"Why am I sweating right now? Oh, the Nest set the temperature too high again!"

And then after a few instances, I just turn off all the automation and set up a schedule like normal.

Same with the "away from home" feature, which seems to randomly think I'm away, and I have no idea why.

Oh, and the app doesn't show me filter reminders, only the actual device, which I never touch all the way downstairs. There's not even any status to let me know if it's accepted a new dialed-in temperature, as I've had it fail to capture a request, and then I go back, and see it never updated/saved the new temp. Just zero feedback to confirm that the thermostat has responded to any input, and zero notification from the app if this happens.

Just thoroughly unimpressed.

Thankfully I didn't buy this junk, as it was pre-installed by the owner of my rental. Can't imagine actually paying for something whose only real feature is being able to remotely control my temperature once in a while.

mbesto · 3 years ago
I've always heard this, so when I went for my first smart thermostat I went straight to Ecobee (which I'm very happy with, btw).

So I gotta ask HN... what the heck was so popular about Nests?! It's one thing to go after shiny lures like new iPhone apps or luxury items... but a thermostat?!

Mind boggling...

Melatonic · 3 years ago
The problem, I think, with all of these ML functions is that there is never enough of an in-between between full manual and full auto-magic. The Nest could simply ask you a few times a day how you feel about how it's doing (too hot? too cold? too expensive?), or you could opt in somewhere to give feedback. And then it could keep doing its behind-the-scenes mumbo jumbo.
TaupeRanger · 3 years ago
Same story. We moved into a house that had a Nest preinstalled. Got everything set up, and noticed after a couple of days we would always wake up freezing in the early morning. Nest was all over the place and I just turned off the automation.
bryanrasmussen · 3 years ago
>And then after a few instances, I just turn off all the automation and set up a schedule like normal.

If you have a fairly regular life I would think a schedule would outdo ML pretty much all the time, because you know exactly what that schedule should be. ML might be useful for a secret agent whose life is so erratic that a schedule would be useless.

That is to say ML is maybe better than falling back to nothing.

dreamcompiler · 3 years ago
I bought one back before Google bought the company, because it seemed like a well-designed product with a good UI. In addition to the problems you mentioned, it was constantly updating its firmware. That sometimes bricked the device temporarily and sometimes it changed the UI so I had to relearn how to use the device. One update removed the ability to manually set "away" mode. I finally wised up and reset the thing so it couldn't attach to my wifi any more. Which made the app useless of course.

Then it became clear the thermostat wasn't getting enough power from my 2-wire thermostat transformer and that made it even flakier. I finally threw it away and replaced it with a $20 dumb thermostat, which will still be working fine after the zombie apocalypse. No more Nest products for me.

itissid · 3 years ago
> Same with the "away from home" which seems to randomly think I'm away and I have no idea why.

Away from home should be an easy problem to solve, assuming Nest can talk to your phone (which is almost 100% true in real life). In my experience there are several easy heuristics that can achieve ~90% precision and recall in home detection, like whether you're connected to your home wifi, or even combining that with some IMU data to be more confident.
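The "cheap heuristics" idea above can be sketched as a simple majority vote over a few presence signals. This is purely illustrative - the signal names and the two-vote threshold are assumptions, not Nest's actual logic:

```python
def likely_home(on_home_wifi: bool, inside_geofence: bool, recent_motion: bool) -> bool:
    """Majority vote among cheap presence signals: any two agreeing 'home' wins.

    In Python, True counts as 1 and False as 0, so sum() tallies the votes.
    """
    votes = sum([on_home_wifi, inside_geofence, recent_motion])
    return votes >= 2
```

Requiring two of three signals makes a single flaky sensor (e.g. the phone briefly dropping off wifi) less likely to flip the home/away state.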

HWR_14 · 3 years ago
The ability to remotely activate it is useful in the case of erratic short term rentals. Other than that, I'm not sure of the point
dominotw · 3 years ago
Maybe it considers environmental impact of air conditioning in its models and tries to nudge users into tolerating higher temps.
kayodelycaon · 3 years ago
Things like this are exactly why I went with less "intelligent" smart thermostat. (Honeywell T9)

The only learning feature it has is figuring out how long it takes to heat or cool the house given the current weather. Before a schedule change, it can heat or cool the house so it hits the next target temperature on time. This seems to work extremely well.

Everything else like schedule and away settings are configured by the user.

One nice feature is that it is fully programmable from the thermostat, without internet. You only need the app for setting a geofence for automatic home/away.
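The "figure out how long heating takes" feature described above can be sketched as a running estimate of heating rate that is used to compute when to start. This is a toy model under assumed numbers (a linear minutes-per-degree rate, smoothed with an exponential moving average), not Honeywell's actual algorithm:

```python
class PreconditionModel:
    """Toy sketch: learn minutes-per-degree, then compute how early to start."""

    def __init__(self, minutes_per_degree: float = 5.0, alpha: float = 0.2):
        self.minutes_per_degree = minutes_per_degree  # current estimate
        self.alpha = alpha  # EMA smoothing factor

    def observe(self, degrees_changed: float, minutes_elapsed: float) -> None:
        # Update the estimate toward the rate observed in the last run.
        rate = minutes_elapsed / degrees_changed
        self.minutes_per_degree += self.alpha * (rate - self.minutes_per_degree)

    def start_lead_time(self, current: float, target: float) -> float:
        # Minutes before the schedule change to start heating/cooling.
        return abs(target - current) * self.minutes_per_degree
```

For example, after observing a run that took 20 minutes to move 2 degrees, the estimate shifts from 5 toward 10 minutes per degree, and the thermostat starts correspondingly earlier.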

connicpu · 3 years ago
Building my own thermostat so I have total control was a fun project; I learned a lot about electrical engineering and built a circuit with some TRIACs to control the HVAC lines. Though I still need to give it an interface so I can program it some way other than uploading the program as a JSON blob to my Raspberry Pi!
Melatonic · 3 years ago
Sounds nice. Let the machine figure out something other than people - it will probably be much better at that.
PaulHoule · 3 years ago
When people hear that FAANG is involved in something, an "Emperor's Clothes" effect kicks in and people stop making the usual assumption that "if it doesn't work for me, it probably doesn't work for other people".

fshbbdssbbgdd · 3 years ago
Not only does the Nest ignore my preferences, I think it actually lies about the current temperature.

Example:

Setting is 72, reading is 73. AC is not on, I guess the thermostat is trying to save energy. I lower setting to 71, reading instantly drops to 72! I don’t think it’s a coincidence, this has happened several times.

dreamcompiler · 3 years ago
> Setting is 72, reading is 73. AC is not on, I guess the thermostat is trying to save energy.

This can be explained by hysteresis [0], which all thermostats use to avoid cycling the A/C too fast.

But the second part where the reading drops instantly is strange. Sounds like some kind of software heuristic where they're trying to make the user feel more comfortable about the hysteresis interval. Or something.

[0] https://en.m.wikipedia.org/wiki/Hysteresis
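The hysteresis behavior described here can be sketched as a generic two-threshold controller. The symmetric dead band and the specific numbers are assumptions for illustration; real thermostats tune these thresholds differently:

```python
def ac_should_run(reading: float, setpoint: float, band: float, ac_on: bool) -> bool:
    """Hysteresis control: switch on above setpoint + band, off below
    setpoint - band, and keep the current state inside the dead band."""
    if reading >= setpoint + band:
        return True
    if reading <= setpoint - band:
        return False
    return ac_on  # inside the dead band: no change, avoids rapid cycling
```

With a setpoint of 72 and a 1.5-degree band, a reading of 73 keeps the A/C off, which matches the "setting is 72, reading is 73, AC is not on" observation above.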

runnerup · 3 years ago
I also hate how Nest only lets me download at most 7 days of "historical" data. They have the rest of my historical data, but I can't get a copy of my own data.
amelius · 3 years ago
Presumably they don't want the average consumer to be aware of that fact.
switchbak · 3 years ago
The Ecobee is not really any better. It has various "features" which all end up in setting the temperature at a very uncomfortable setting even when you're home.

If I wanted that kind of savings, I could have just turned down my thermostat myself. Jeesh.

Matt Risinger (youtube expert builder guy) mentioned these are not anywhere near as valuable as they seem, and I'm inclined to agree. It's nice to be able to flick it on vacation mode when you're away I suppose.

I'd still buy it again, nice geeky metrics, and it's a quality company, but it doesn't save me anywhere close to 30% (or whatever the claim was).

nahname · 3 years ago
It is bad. I dislike most "smart" things though, so take my agreement with a grain of salt.
baxtr · 3 years ago
Google destroys any great product they acquire (except Google Maps and YT, I guess).
actusual · 3 years ago
Nah, you're not. I just gave up on mine and use a schedule. I also turned off "pre-cooling" because it would just kick on at like 6pm to "cool" the house for bedtime. I also bought several temperature sensors to use, which are fun. At night I have the thermostat use the sensor in my bedroom, then it goes back to the main thermostat during the day.
foobarian · 3 years ago
See the next logical step is to outfit the output vents with servo-controlled actuators so you can fine-tune where the air is going!
ryangittins · 3 years ago
This has been my experience as well. I was very excited to turn on the learning mode when I got it, only to turn it off a few weeks later in favor of scheduling. I've tried turning it on a couple times since then, and I always revert to a simple, manually-programmed schedule.
pid_0 · 3 years ago
Nest is a terrible product. The learning aspect did not work at all and their app always takes forever to refresh. I switched to ecobee and couldn’t be happier. HomeKit and home assistant integration without hassle. Google products are generally quite terrible.
sdoering · 3 years ago
The same goes for me when I am looking for very specific terms and search engines think they know better and autocorrect me.

Having to make an additional click because I receive something I have never searched for is unnerving.

kumarvvr · 3 years ago
Haha. For current state of AI and ML, 95% of the target users would be categorized as "long tail"
foobarian · 3 years ago
Well, the main selling point when it came out was that it was the iPhone of thermostats. It was the only thermostat at the time that did not have a terrible UI cobbled together by communist residential block designers or people who think that setting your own IRQ pins with jumpers is fun. But yeah I never understood the point of the learning feature; maybe a checkbox that needed to be ticked or a founder's pet feature.
avereveard · 3 years ago
It's mostly to trick people into thinking they don't need as many sensors for a smart home; once the Nest is bought, the sunk cost fallacy kicks in.
bell-cot · 3 years ago
Or, maybe they invested far more cash and care in marketing that feature than in programming that feature...
ape4 · 3 years ago
You just bought a washing machine... could I interest you in a washing machine?
mdp2021 · 3 years ago
While the chief absurdity is very clear (also mocked by Spitting Image - J.B. on a date: "You loved that steak? Good, I'll order another one!"), I am afraid the intended idea may be that your memory of the ads for what you just bought will last as long as said goods.

An utter nightmare (unnatural obsolescence, systemic perversity, pollution...), but: I have met R&D people who admitted the goal was just to have something new, to make people want to replace the old on insubstantial grounds.

BonoboIO · 3 years ago
Sometimes you want a company like Facebook or Google to deliver ads to your potential customers, but you don't want the ad company to know how much product you sold.

You know your ROI, but your ad company doesn't.

I've heard multiple stories from Amazon that they still let the ad campaign or targeting run for random periods even after you bought the product, so external companies could not get insight into your business.

If you got the washing machine, you will not click to buy another. If you pay per click, it makes no difference how long you keep a targeted campaign running.

I work in the ad industry.

saurik · 3 years ago
College Humor--may it rest in peace :( :( :(--did a great bit about this a few years ago. https://www.youtube.com/watch?v=KbKdKcGJ4tM It is absolutely ridiculous how useless targeted ads actually are... and seeing a real person try to sell you this stuff really underscores the insanity.
maxbond · 3 years ago
College Humor is alive and well! At least if you enjoy Dungeons and Dragons and silly game shows.
jsjohnst · 3 years ago
While this annoys the hell out of me too, there is a semi-logical explanation that makes sense in many cases:

The ad targeting knows you looked at washing machines, but doesn’t know you purchased.

kristopolous · 3 years ago
Which is an extremely trivial check to add - if you got assigned that ticket, you'd probably point it at like 2 or so hours.

However, they've been like this for over a decade, so it's likely there intentionally. Here's one way that could be possible:

There could be some popular third party service that's integrated on many e-commerce sites that sells this information and doesn't actually give a damn if you bought the refrigerator or not. They're selling you, not the refrigerator.

That's the problem with data brokers, it's mostly low quality data.

bolasanibk · 3 years ago
I cannot remember the reference now, but the reasoning I read was that a person who just bought an item X might: 1. return the item if they are not satisfied with it and get a replacement, or 2. buy another one as a gift if they really like it.

Both of these result in a higher fraction of conversions in this kind of targeting vs other targeting criteria.

astrea · 3 years ago
I’m also more inclined to believe that the ad networks simply don’t have a way of knowing a sale was made.
armchairhacker · 3 years ago
I think the reason this happens is that when you start looking for washing machines, you start getting ads for them. Then when you buy nobody tells the ad companies that you just bought a washing machine so they still send you ads because they think you’re still looking. Even if you just went straight to the model site and clicked “buy”.
thaumasiotes · 3 years ago
We know that's not the reason; Amazon is infamous for advertising washing machines to people who have just bought a washing machine from Amazon.
Asooka · 3 years ago
A slightly more ridiculous one I experienced recently was when I searched for where to buy tool X. I started getting ads for why I need tool X and why it's the best tool ever. I already want one; I'm looking for where to buy it, not trying to learn what it's for!
wrycoder · 3 years ago
I buy a package of underwear. All I see for next three weeks on my browser is close ups of men’s briefs.

It’s embarrassing, when associates glance at my screen.

speedgoose · 3 years ago
I agree it can be embarrassing to not use an ad blocker.
jstx1 · 3 years ago
Kind of your fault - you're buying your underwear on a work device and you don't use an adblocker.

Gibbon1 · 3 years ago
I merely looked at pressure cookers and got ads for full-figured bras and support garments for six months.
im3w1l · 3 years ago
GPT can solve this! I prompted it with "Sarah bought a washing machine and a " and it completed "dryer.".

Another: "If you buy a hammer you might also want to buy " -> "a nail". I'll forgive the singular.

Just to be clear, those are not cherry-picked - they were my first two attempts.

thaumasiotes · 3 years ago
> GPT can solve this! I prompted it with "Sarah bought a washing machine and a ". It completed "dryer.".

The most natural interpretation there is that Sarah bought a washing machine and a dryer simultaneously, not that, after buying a washing machine the month prior, she was finally ready to buy a dryer.

Mordisquitos · 3 years ago
Fair enough regarding the washer/dryer, but I'm unimpressed by the hammer/nail example. If anything, I might want to buy "some nails" rather than "a nail", but I would argue that's not a brilliant guess either—if I'm buying a hammer I most likely already own the item that I want to hit with it. Some more interesting suggestions could have been for example "some pliers", "a set of screwdrivers", "a pair of work gloves", "safety goggles", etc.
ape4 · 3 years ago
Putting those together... I actually bought a pair of anti hammer arrestors for the washing machine ;)
tomcam · 3 years ago
I can personally vouch that Amazon, Twitter, and YouTube all do horrible horrible jobs predicting my taste. And they have got worse over the years, not better
jltsiren · 3 years ago
My favorite experience with Amazon:

I had just preordered novel 9 of The Expanse, and I got an email recommending something else from the same authors: novel 8 of The Expanse. A more sensible recommendation engine might have assumed that someone who preorders part n+1 of a series may already have part n. Not to mention that Amazon should have known that I already had novel 8 on my Kindle.

I guess generating personalized recommendations at scale is still too expensive. We just get recommendations based on what other customers with vaguely similar tastes were interested in.

alpaca128 · 3 years ago
> Not to mention that Amazon should have known that I already had novel 8 on my Kindle.

Amazon doesn't seem to understand many things surrounding the Kindle. For example, it calculates my reading progress through a book from the last page I looked at. That means if I finished a book and jumped back to the introduction, it'll now be convinced I only read 1% of the book. This is so dumb, and I don't know why they even do it that way - the Kindle hardware should easily be capable of precisely keeping track of which pages I've looked at.
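The fix the commenter is asking for amounts to tracking the furthest page reached rather than the last page viewed. This is purely illustrative - not how the Kindle actually stores progress:

```python
def progress_last_page(last_page_viewed: int, total_pages: int) -> float:
    """The behavior described above: jumping back to the intro resets progress."""
    return last_page_viewed / total_pages


def progress_furthest_page(pages_viewed: list[int], total_pages: int) -> float:
    """Track the furthest page ever reached, so revisiting early pages is harmless."""
    return max(pages_viewed) / total_pages
```

A finished book revisited at page 3 reports 1% under the first scheme but stays at 100% under the second.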

kmnc · 3 years ago
Funny thing is, your very comment is indirect praise of the very thing they were advertising to you, and here it is being read by thousands of people. Are we so sure absurdly terrible ads don't actually beat good, well-tailored ones? Looking through the history of radio and television ads, it seems like the best ads are always the stupidest. "HeadOn, apply directly to the forehead!" isn't so far off from "You bought a washing machine? Buy another!". The reality is, advertising optimizes to target stupid people, because stupid people spend money. It is easier to trick a moron than to sell a smart man something he actually wants.
jillesvangurp · 3 years ago
Yep, Spotify, Amazon, Youtube, Google etc. they all use the same three algorithms:

- the more of the same thing algorithm. You clicked this thing, would you like to click it again. And again. And some more.

- the ever popular "we've shown you this thing a hundred times now and you never clicked it; we'll just assume we are right and you are wrong" algorithm.

- the "ooooh, we've detected that your IP address is in Germany, so you must be fluent in German" algorithm. Would you like some Schnitzel with your Schlager music? This one in particular drives me nuts. I have had user profiles with these companies for many years, browser settings that specify a preferred language, etc. I consistently never do anything in German with them. And they'll go... here's some German content for you. Completely useless, and obviously the only criterion they use for these recommendations is location. Worse, if I travel they'll unhelpfully suggest things for those locations as well. Basically, most of their top recommendations are the same generic stuff that they serve to everybody else in the same location.

Recommendation engines are hard, and these companies gave up years ago; instead they routinely bypass their own AI with some simple if/else logic. I know how this stuff works. There's a little corner in the UI for the cute AI kids to do their thing, but essentially all the prime real estate in their UIs is reserved for good old if/else logic, basic profiling, and whatever their marketing department wants to promote to everybody.

If the user is in Germany, recommend generic German stuff. There's no logical explanation other than that for the absolute garbage recommended by default. If you clicked a thing, here are some more things from the same source, in random order. Amazon has a notion of books being in a certain order... so why recommend I start with part 21 of a 50-book series by an author I've never bothered to read? Maybe book 1 would be a better start... Book series are great value for them, because if I get hooked, I consume the whole thing.

Most recommendations are just variations of simple profiling (age, sex, location) that consistently trump actual recommendations combined with very rudimentary similarity algorithms. You don't need AI for any of that. I work with search engine technology, it's not that hard.

semi-extrinsic · 3 years ago
The one thing I've been consistently impressed with is TikTok. If I compare recommendations on YouTube to what I get on my TikTok FYP, it's like comparing a 5-year-old to a college graduate on a math test.

Literally to the point where YouTube never pulls me down into the rabbit hole anymore, I watch one video because it was linked from somewhere else, then I bounce.

numpad0 · 3 years ago
TikTok FYP didn’t seem to work all that well for me, FWIW…
Aerroon · 3 years ago
Part of the reason they're horrible is because people don't have consistent interests. I might be interested in raunchy content right now, but I won't be a few hours later. What determines whether I'm interested in the former is outside of the control of these algorithms - they don't know all of the external events that can change my current mood and preferences. As a result of this it makes sense for people to have many profiles that they switch between, but AI seems incapable of replicating this manual control (so far).

Sometimes I want to watch videos about people doing programming, but usually I don't. When I do though, I would like to easily get into a mode to do just that. Right now that essentially involves switching accounts or hoping random search recommendations are good enough.

thaumasiotes · 3 years ago
> Part of the reason they're horrible is because people don't have consistent interests. I might be interested in raunchy content right now, but I won't be a few hours later. What determines whether I'm interested in the former is outside of the control of these algorithms

I don't think that matters at all. People don't complain that they're getting recommendations that would have been great if they had come in an hour/day earlier or later. When you get a recommendation like that, you consider it a good recommendation.

Instead, they complain that they're getting recommendations for awful content that they wouldn't choose to watch under any circumstances.

Night_Thastus · 3 years ago
Youtube's is actually pretty surprisingly good, in my experience. After years of use, it consistently filters out all the absolute trash I don't want to see, and recommends me things I do actually want. It's not perfect, but it often directs me to channels I wouldn't have heard of otherwise that have solid content.

It often finds a video I would like and throws it on my front page. I avoid it for a while, thinking it wouldn't be a good fit (I don't recognize the creator, bad thumbnail/title, unclear why the content would be of interest to me, etc.), then find it was great and I should have watched it days ago.

If I log out of my account, the front page of the site is just awful, makes me want to throw up.

wrycoder · 3 years ago
I think YouTube has given up on figuring me out.

They mostly offer stuff I’ve already watched or stuff on my watch list.

zach_garwood · 3 years ago
Same. For a long time they wanted me to watch angry white guys complaining about pop culture. But all I want to watch are PBS science shows and stuff about ancient history! Eventually, they gave up, and now half the videos they recommend are ones I already watched years ago.
Melatonic · 3 years ago
If you go deep into all the settings you can turn all of Amazon's predictive jibber-jabber off, along with a ton of tracking. It has been a while, but I swear there were settings hidden everywhere. I just went back to Amazon and it still seems to suggest some products based on what I just viewed, so now I am wondering what the hell I even turned off... it is actually fairly accurate now, though (I looked at a Timex watch and it is suggesting a very similar Timex watch).

edit: My bad - it was just suggesting products I had recently viewed.

hourago · 3 years ago
That may make sense if you are not the average consumer. Optimizing for the most common case makes sense. I see that with Google search prediction: it's good, but many times it predicts words that are very sensible for general use but not for the topic I'm interested in.
aaronax · 3 years ago
ML is there to maximize business income--nothing else.

If ML was benefiting me, it would know that 90% of the time I fire up Hulu I plan to watch the next episode of what I was watching last time. And it would make that a one click action. Instead I have to scroll past promotional garbage...every single time. Assholes.

HWR_14 · 3 years ago
I don't know why you assume the goal is "help aaronax watch what he wants quickly" vs "make sure when aaronax switches to his next series/movie it's on Hulu"
majormajor · 3 years ago
Yeah, in these cases people try to balance for both "engagement" and "retention" (probably called something like "discovery" - the more they can show you about what other options they have, and the more shows you start watching, the less you quit after the first show you watch).

The problem is that they don't have a way of collecting "annoying" metrics as easily as they do those two. So it's a big blind spot in terms of "should we tweak more in favor of one or the other."

mirrorlake · 3 years ago
Customer satisfaction often translates into more dollars, though, because it means they won't cancel their service. I've had the same thought: if only this multi-billion dollar company could figure out that I want to continue watching the show I watched yesterday.
buscoquadnary · 3 years ago
Honestly, a lot of this ML seems eerily similar to how in older times people would use sheep entrails or crow droppings to try to predict the future. I mean, that is basically what ML is - trying to predict the future. The difference is they called it magic and we call it math, but both seem to have about the same outcome, and about the same understandability.
treesprite82 · 3 years ago
> I mean basically that is what ML is, trying to predict the future

If you're being that reductive, that's also the scientific method: form a model on some existing data, with the goal of it being predictive on new, unseen data. The key is in favoring the more predictive models.

> they called it magic, we call it math, but both seem to have about the same outcome

Find me some sheep entrails that can do this: https://imagen.research.google/

gwbas1c · 3 years ago
> for most of the more interesting consumer decisions, those that are “new” and non-habitual, prediction remains hard

Translation: Computers can't read minds.

A bigger generalization is that whenever a software feature becomes essentially mind reading, someone's either feeding a hype engine or letting their imagination run away.

The best thing to do in that case is to pop the bubble if you can, or walk away. I will often clearly state, "Computers can't read minds. You're making a lot of assumptions that will most likely prove false."

mgraczyk · 3 years ago
Always interesting to see outsiders writing papers about this, using anecdote and unrelated data (mostly political and real world purchase data in this case) to argue that ML doesn't make useful predictions. Meanwhile I look at randomized controlled trial data showing millions of dollars in revenue uplift directly attributable to ML vs non-ML backed conversion pipelines, offsetting the cost of doing the ML by >10x.

It reminds me a lot of other populist folk-science belief, like vaccine hesitancy. Despite overwhelming data to the contrary, a huge portion of the US population believes that they are somehow better off contracting COVID-19 naturally versus getting the vaccine. I think when effect sizes per individual are small and only build up across large populations, people tend to believe whatever aligns best with their identity.

gwbas1c · 3 years ago
> Always interesting to see outsiders writing papers about this, using anecdote and unrelated data (mostly political and real world purchase data in this case) to argue that ML doesn't make useful predictions. Meanwhile I look at randomized controlled trial data showing millions of dollars in revenue uplift directly attributable to ML vs non-ML backed conversion pipelines, offsetting the cost of doing the ML by >10x.

I regularly buy the same brand of toilet paper, socks, and sneakers. Machine learning can predict that.

But, machine learning can't predict that I spent the night at my parents house, really liked the fancy pillow they put on the guest bed, and then had to buy one for myself. (This is essentially the conclusion in the abstract.)

Such a prediction requires mind reading, which is impossible.

mgraczyk · 3 years ago
The key insight missed by this paper (and people from the marketing field in general) is that cases like that are extremely rare compared to easy-to-predict cases. They don't matter right now at all for most products, from the perspective of marketing ROI.

Also ML can predict that, BTW. Facebook knows you are connected to your parents. If the pillow seller tells Facebook that your parents bought the pillow, then Facebook knows and may choose to show you an ad for that pillow.

mushufasa · 3 years ago
I think you may be conflating the topics and goals of adjacent exercises; predicting consumer behavior is not the same thing as optimizing a conversion pipeline.
mgraczyk · 3 years ago
The examples they give in section two are directly relevant to optimizing conversion pipelines. They pretty clearly intend to be describing something relevant to the e-commerce user experience.
semi-extrinsic · 3 years ago
Are you really sure you're not just fooling yourselves with your randomized controlled trials? As Feynman famously said, the easiest person to fool is yourself. And in business even more than science, you might even like the results.

Have you ever put this data up against something similar to the peer review system in academia, where several experts from a competing department (or ideally a competing company) try to pick your results apart and disprove your hypothesis?

johnthewise · 3 years ago
Well, it's certainly possible to fool yourself with A/B testing, but that doesn't mean you must be fooling yourself. I've also seen similar results in recommendation settings in mobile gaming, not once but over and over again, across a portfolio of dozens of games and hundreds of millions of players. You don't need to predict 20% better on whatever you are predicting to get a 20% increase in LTV, and it's even better if you are doing RL, since you are optimizing directly for your KPIs.
Kiro · 3 years ago
Why are you doubting them? Nothing in the comments indicates what you are suggesting.
nojito · 3 years ago
>Always interesting to see outsiders writing papers about this

I don't think you know who Andrew Gelman is. Additionally, that's not the conclusion derived from this study.

mgraczyk · 3 years ago
The actual conclusion of the study is so absurd that it's not worth engaging with seriously.

    That is, to maximally understand, and therefore predict, consumer preferences is likely to require information outside of data on choices and behavior, but also on what it is like to be human.
I was responding to the interpretation from the blog post, which is more reasonable.

conformist · 3 years ago
Yes, the review paper appears to be roughly conditioned on "using data that academics can readily access or generate".

Clearly, this doesn't generalise to cases where you have highly specific data (e.g. if you're Google).

However, cases with large societal impact are more likely to be the latter? They may perhaps be better viewed as "conditioned on data that is so valuable that nobody is going to publish or explain it", which is roughly the complement of what the review covers?

mrxd · 3 years ago
If your ML model is able to predict what consumers are going to buy, the revenue lift would be zero.

Let's say I go to the store to buy milk. The store has a perfect ML model, so they're able to predict that I'm about to do that. I walk into the store and buy the milk as planned. So how does the ML help drive revenue? The store could make my life easier by having it ready for me at the door, but I was going to buy it anyway, so the extra work just makes the store less profitable.

Maybe they know I'm driving to a different store, so they could send me an ad telling me to come to their store instead. But I'm already on my way, so I'll probably just keep going.

Revenue comes from changing consumer behavior, not predicting it. The ideal ML model would identify people who need milk, and predict that they won't buy it.

soared · 3 years ago
This is incorrect. You can predict many things that drive incremental revenue lift.

The simplest: predict which features a user is most interested in and drive them to that page (increasing their predicted conversion rate) -> purchases that occur now that would not have occurred before.

Similarly: predict products a user is likely to purchase given that they made a different purchase. The user may not have seen these incremental products. For example, a user buys an orange couch, so show them brown pillows.

Like above, the same actually works for entirely unrelated product views. If a user views products x, y, and z, we can predict they will be interested in product w and advertise it.

Or we predict a user was very likely to have made a purchase, but hasn’t yet. Then we can take action to advertise to them (or not advertise to them).
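The "viewed x, y, z -> predict w" strategies above amount to co-occurrence-based recommendation. A toy sketch (the product names and view histories are made up purely for illustration):

```python
from collections import Counter
from itertools import combinations

# Toy per-user view histories; real systems use far richer signals.
histories = [
    ["x", "y", "z", "w"],
    ["x", "y", "w"],
    ["y", "z", "w"],
    ["x", "z"],
    ["a", "b"],
]

# Count how often each pair of products appears in the same history.
pair_counts = Counter()
for h in histories:
    for a, b in combinations(set(h), 2):
        pair_counts[frozenset((a, b))] += 1

def recommend(viewed, k=1):
    """Score unseen products by total co-occurrence with the viewed set."""
    scores = Counter()
    for seen in viewed:
        for pair, n in pair_counts.items():
            if seen in pair:
                (other,) = pair - {seen}
                if other not in viewed:
                    scores[other] += n
    return [p for p, _ in scores.most_common(k)]

print(recommend(["x", "y", "z"]))  # "w" co-occurs most with x, y, z
```

Note that this predicts interest, not need: it works precisely because most purchase behavior is correlated across users, which is the "easy-to-predict" regime discussed upthread.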

johnthewise · 3 years ago
It wouldn't be zero. If you wanted milk but couldn't find it in the store, or it cost too much, you might just give up on buying it.
qvrjuec · 3 years ago
If the store knows you will want to buy milk, it will have milk in stock according to demand. If it doesn't have a perfect understanding of whether or not people want to buy milk, the store will over- or under-stock and lose money.
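The over- vs. under-stocking trade-off described here is the classical newsvendor problem. A sketch with made-up costs and demand; the point is that better demand prediction shrinks the demand uncertainty (sigma), which shrinks expected waste:

```python
from statistics import NormalDist

# Illustrative unit economics for milk (all numbers invented).
price, cost, salvage = 3.0, 2.0, 0.5
underage = price - cost     # margin lost per unit of unmet demand
overage = cost - salvage    # loss per unit left over (spoilage)

# Newsvendor critical ratio: stock up to this quantile of demand.
critical_ratio = underage / (underage + overage)

# Suppose daily demand is roughly normal: mean 100 units, sd 20.
demand = NormalDist(mu=100, sigma=20)
stock = demand.inv_cdf(critical_ratio)

print(f"critical ratio {critical_ratio:.2f} -> stock ~{stock:.0f} units")
```

With these numbers the ratio is 0.4, so the store deliberately stocks slightly below mean demand because spoilage costs more than a lost sale; a model that halved sigma would move the stock level closer to the mean and cut both kinds of loss.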
abirch · 3 years ago
Amazon does a remarkably good job of predicting what I'll buy and I frequently add to my purchases.
mrguyorama · 3 years ago
Are you the mythical person buying 15 vacuum cleaners at the same time?
bschne · 3 years ago
I know this is slightly off what the article is concerned with, but the important question in a business context is whether this prediction is worth anything, i.e. whether it can be turned into revenue that wouldn't be generated in the absence of the prediction.
RA_Fisher · 3 years ago
Exactly, RCTs take the mystery out. Nice work!
hourago · 3 years ago
> Sophisticated methods and “big data” can in certain contexts improve predictions, but usually only slightly, and prediction remains very imprecise

The worst part of big data is the data itself. It used to be common for quizzes like "what is your political compass" to be shared on Facebook. The results were used to create political profiles of users and target propaganda at them.

You don't need ML to predict data the user has already given you.