First: Prophet is not actually "one model"; it's closer to a non-parametric approach than a single fixed model type, which adds a lot of flexibility to the class of problems it can handle. That said, Prophet is "flexible", not "universal". A time series of entirely random integers drawn from range(0, 10) will be handled quite poorly, but fortunately nobody cares about modeling that case.
Second: for the same reason that only a small handful of possible stats/ML models get used on virtually all problems. Most problems people solve with stats/ML share a number of common features that make it appropriate to use the same model on them (the model's "assumptions"). Applications that lack these features get treated as edge cases and ignored, or you write a paper introducing a new type of model to handle them. Consider any ARIMA-type time series model. These are used all the time across many different problem spaces, and they do reasonably well on "most" "common" stochastic processes you encounter in "nature", because they're constructed to resemble many kinds of natural processes. It's possible (trivial, even) to conceive of a stochastic process that ARIMA can't really handle (anything whose non-stationarity survives differencing will do), but in practice most things ARIMA utterly fails on are either not very interesting to model or better served by other models.
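To make the ARIMA point concrete: a random walk is non-stationary (its variance grows with time), but first-differencing — the "I" in ARIMA — recovers a stationary white-noise series. A toy sketch with simulated data (all numbers invented for illustration):

```python
import random

random.seed(1)

# Simulate a random walk: cumulative sum of i.i.d. N(0, 1) steps.
steps = [random.gauss(0, 1) for _ in range(5_000)]
walk = []
total = 0.0
for s in steps:
    total += s
    walk.append(total)

# First differences undo the accumulation: the diffs are just the
# original white-noise steps, which ARE stationary (variance ~1).
diffs = [b - a for a, b in zip(walk, walk[1:])]
```

This is why ARIMA copes with a large family of "trending" processes: one or two rounds of differencing often reduce them to something stationary it can model.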
“You can imagine my disappointment when, out-of-the-box, Prophet was beaten soundly by a ‘take the last value’ forecast.”
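For reference, a "take the last value" (persistence) baseline takes only a few lines, which is what makes it such a good sanity check. A minimal sketch (function name and numbers are my own, just to show the shape of the comparison):

```python
def naive_forecast_mae(series):
    """Mean absolute error of predicting each point with the previous value."""
    errors = [abs(curr - prev) for prev, curr in zip(series, series[1:])]
    return sum(errors) / len(errors)

y = [10.0, 12.0, 11.0, 13.0, 12.5]
mae = naive_forecast_mae(y)  # the bar any "real" model should clear
```

If a fitted model can't beat this number on held-out data, the model is adding complexity without adding skill.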
So on average their predictions may have been pretty good, but since each transaction also depends on the other party accepting their offer (and on not being outbid), the deals that actually went through would be concentrated in the tail where they overestimated the price.
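You can see this selection effect in a toy simulation (all figures invented): the estimates are unbiased overall, but once you keep only the deals where the offer meets or exceeds the seller's true value, the surviving pool is the one you overpaid for:

```python
import random

random.seed(0)

# Hypothetical market: true home values, plus unbiased estimation noise.
true_prices = [300_000 + random.gauss(0, 20_000) for _ in range(10_000)]
estimates = [p + random.gauss(0, 15_000) for p in true_prices]

all_errors = [e - p for e, p in zip(estimates, true_prices)]
# A seller only accepts an offer at or above their home's true value,
# so the accepted set is selected for positive estimation error.
accepted_errors = [e - p for e, p in zip(estimates, true_prices) if e >= p]

avg_all = sum(all_errors) / len(all_errors)                  # near zero
avg_accepted = sum(accepted_errors) / len(accepted_errors)   # well above zero
```

Same model, same unbiased errors — the bias is created entirely by which transactions close. That's adverse selection (the "winner's curse") in miniature.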
This tweet from the article summed it up nicely:
> Zillow made the same mistake that every new quant trader makes early on: Mistaking an adversarial environment for a random one. https://twitter.com/0xdoug/status/1456032851477028870
I was lucky enough to make and learn from that mistake quickly, doing some algorithmic trading with much smaller amounts. With housing transactions being much larger and slower, you wouldn't learn the lesson until it was too late. Models never perform as well in practice as they do in theory, and you need to account for both known unknowns and unknown unknowns.
The problem with time series forecasting tools in general is that they make a lot of assumptions about the shape of your data, and you'll find you're spending a lot of time figuring out how to massage your data to fit. For example, they expect that your data arrives at a regular interval. That's fine if it's, say, data from a weather station. It doesn't work well in clinical settings (imagine a patient admitted to the ER: there's a burst of data, followed by no data).
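Concretely, with pandas you usually end up regularizing the series yourself — resampling onto a fixed grid and then deciding what to do with the resulting gaps (timestamps and values below are made up, roughly an ER-style burst of vitals):

```python
import pandas as pd

# Irregular observations: a burst, then a long silence.
obs = pd.DataFrame(
    {"value": [98.6, 99.1, 101.2, 100.4]},
    index=pd.to_datetime([
        "2023-01-01 00:05", "2023-01-01 00:12",
        "2023-01-01 00:14", "2023-01-01 03:40",
    ]),
)

# Force onto an hourly grid: the burst collapses into one averaged bin,
# and the silent hours show up as NaN that you now have to handle
# (forward-fill? interpolate? model the missingness?).
hourly = obs.resample("1h").mean()
```

The resample itself is one line; the real work is that the NaN-handling decision is a modeling decision, not a formatting one.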
That said, there's some interesting stuff out there that I've been experimenting with that seems to be more tolerant of irregular time series and can be quite useful. If you're interested in exchanging ideas, drop me a line (email in my profile).