https://2035.future-hackernews.workers.dev/news
The page looks much more consistent with the original. It only produced the HTML as output, with its thinking in an HTML comment.
I would be curious how the financial wires got crossed.
I would have assumed residuals were proportional to views, and that views were valued in proportion to how much they contribute to subscription demand. And it would be a rare viewer who watches one show like that over and over. I.e., only upside. Something went sideways.
After Year 1, WGA/SAG residual formulas decrease (toy sketch below):
- Year 2: ~80% of Year 1
- Year 3: ~55%
- Year 4+: sometimes stabilizes at a "floor" rate
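As a toy illustration of that decay (my numbers for the structure only, not actual WGA/SAG contract terms, and the Year 4+ floor rate is hypothetical):

    # Toy sketch of the residual decay schedule above -- illustrative only;
    # real WGA/SAG formulas depend on contract terms, budget tiers, etc.
    YEAR_FACTORS = {1: 1.00, 2: 0.80, 3: 0.55}
    FLOOR = 0.35  # hypothetical Year 4+ "floor" rate

    def residual(year: int, year1_payment: float) -> float:
        """Residual owed in a given year after release."""
        return year1_payment * YEAR_FACTORS.get(year, FLOOR)

    for y in range(1, 7):
        # e.g. a $1000 Year 1 residual pays ~$800 in Year 2, ~$550 in Year 3
        print(y, residual(y, 1000.0))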
So what did they do? They ran it for a few years, ran the numbers, and realized that Westworld was no longer profitable on the platform. (Profitable here would have to mean it draws enough new subscribers to the platform.) AND THEN - Warner Bros. Discovery made new deals with ad-supported platforms. I think you can still find Westworld on Tubi and other ad-supported platforms that actually pay Warner licensing fees.
Doesn't match my experience. My colleagues and I are able to turn on or off the gas supply to our houses at will.
That dishwasher was great and lasted over 20 years. The previous owners had definitely abused it and never cleaned it. I repaired it and had about the best dishwasher for a few more years. Eventually the main logic board went out (can't blame it too much; the house had electrical issues that killed a few things), and a replacement board was going to cost a few hundred dollars in parts, even from questionable third-party sellers. That seemed like a lot to sink into what was, by then, a highly abused 20+ year old dishwasher.
It's pretty interesting how today's cars come with features like remote braking and monitoring cameras, all designed to make driving less demanding for us. So even as researchers work to make vehicles less distracting, these features somehow end up making us more distracted. It's an ironic cycle that leaves you less attentive, and maybe less safe.
The absolute value of the cosine similarity isn't critical (just the order when comparing multiple candidates), but if you finetune an embeddings model for a specific domain, the model will give a wider range of cosine similarities, since it can learn which attributes specifically count as similar/dissimilar.
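To make the "order matters, not the absolute value" point concrete, here's a minimal numpy sketch with made-up vectors: squeezing the scores into a narrow band (like the compressed range described below) leaves the ranking untouched.

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    query = rng.normal(size=8)
    candidates = [rng.normal(size=8) for _ in range(4)]

    scores = [cosine(query, c) for c in candidates]
    ranking = np.argsort(scores)[::-1]  # best candidate first

    # Any monotonic squeeze of the scores (e.g. into a 0.70-0.77 band)
    # preserves the ranking -- only relative order matters for retrieval.
    squeezed = [0.70 + 0.07 * (s + 1) / 2 for s in scores]  # map [-1, 1] -> [0.70, 0.77]
    assert list(np.argsort(squeezed)[::-1]) == list(ranking)
    print(ranking, np.round(scores, 3), np.round(squeezed, 3))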
A few months ago I happened to play with OpenAI's embeddings models (can't remember which ones) and I was shocked to see that the cosine similarities of most texts were super close, even if the texts had nothing in common. It's like the wide 0-1 range that USE (and later BERT) were giving me was compressed to a band maybe 0.2 wide. Why is that? Does it mean those embeddings are not great for semantic similarity?
But now, like the OpenAI embeddings you're talking about, the embeddings are constrained, trained with retrieval in mind. Related pairs are pulled closer together, which makes them easier to search.
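If you want to see the narrow-band effect yourself, a quick check might look like this (a sketch assuming the OpenAI Python SDK v1 interface and the text-embedding-3-small model, which are my picks, plus an API key in the environment):

    # Quick empirical check of the compressed similarity range -- assumes
    # the OpenAI Python SDK v1 and OPENAI_API_KEY set in the environment.
    from openai import OpenAI
    import numpy as np

    client = OpenAI()
    texts = [
        "How do I reset my router?",
        "Recipe for sourdough bread",
        "The history of the Ottoman Empire",
    ]
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize rows

    sims = vecs @ vecs.T  # pairwise cosine similarities
    # Even for unrelated texts, the off-diagonal values tend to land well
    # above 0, clustered in a narrow band.
    print(np.round(sims, 3))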