Like if I wanted one of these: https://github.com/hwalsuklee/tensorflow-generative-model-co...
Maybe I'm missing some things but:
- Are 1st gen TPUs even accessible? You have to fill out a form just to learn more about the second-generation TPUs: https://cloud.google.com/tpu/
- I can't find the source code
This does not look like a scientific paper, but a (very impressive) tech demo.
The problem is that the pretend votes need to be culled in order to be predictive. Otherwise they dominate in the arithmetic. They need to be more specific to the user looking at the ranking. Continuing with the Netflix example, if a user was looking for scary movies, the pretend votes need to come from the corpus of all scary movies, rather than all movies that exist.
Here's the problem: there doesn't seem to be a good way to narrow the pretend votes. Worse, there isn't a good way to combine them when they come from two sources — it's not clear what to do. For example, if the user is from California, the California pretend votes (priors?) need to be combined with the scary-movie pretend votes somehow.
How can we add pretend votes without justifying where they came from?
Just because we don't know what the true value of p will be doesn't mean we have no expectation about it. If I asked you what you expect the popularity of a given item to be, you wouldn't say 0 — you'd say something like the average. So why assume every item starts with 0 votes in our model?
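The pretend-vote idea above is just a Bayesian average: seed every item with C pseudo-votes at the prior mean, so new items start near the average and real votes take over as they accumulate. A minimal sketch (the prior mean of 3.5 and the pseudo-vote count of 10 are made-up numbers for illustration, not from anyone's actual system):

```python
def bayesian_average(ratings, prior_mean, prior_votes):
    """Shrink an item's mean rating toward the prior mean.

    An item with few real ratings stays near prior_mean; as real
    votes accumulate, they dominate the pretend votes.
    """
    n = len(ratings)
    return (prior_votes * prior_mean + sum(ratings)) / (prior_votes + n)

# Assumed corpus-wide prior: mean rating 3.5, weighted as 10 pretend votes.
prior_mean, prior_votes = 3.5, 10

# A brand-new item with one 5-star vote is pulled close to the prior...
new_item = bayesian_average([5], prior_mean, prior_votes)

# ...while an established item with 50 real votes averaging 4.5
# is barely affected by the pretend votes.
popular = bayesian_average([5, 4] * 25, prior_mean, prior_votes)

print(new_item)  # near 3.5, not 5.0
print(popular)   # near 4.5
```

This is exactly where the narrowing problem bites: prior_mean here is computed over "all movies," but for a user browsing scary movies it arguably should come from the scary-movie corpus only — and nothing in this formula says how to pick or combine those corpora.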