No idea what that means but I no longer want to see another cube for at least a few weeks :P
The idea grew out of helping a friend who leaned on a chatbot during a rough patch. We want to see whether the same empathy can scale. Curious to hear what you think once you've had a look.
Because you haven't been trained on thousands of such story plots the way the model has in its training data.
It's the most stereotypical plot you can imagine; how is the AI supposed to not fall into the stereotype when you've prompted it with exactly that?
It's not as if it analyzed the situation from a large context and decided, from the collected details, that this is a valid strategy. No, you're putting it in an artificial situation that carries a massive bias in the training data.
It's as if you wrote “Hitler did nothing” to GPT-2 and were shocked that “wrong” is among the most likely next tokens. It wouldn't mean GPT-2 is a Nazi; it would just mean the input matches the training data too well.
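If you want to see that concretely, here's a minimal sketch (assuming the Hugging Face transformers library and an arbitrary harmless placeholder prompt, not the example above) that prints the next-token distribution GPT-2 actually produces; the top candidates are simply whatever most often followed similar text in the training data:

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  model.eval()

  prompt = "The weather today is"  # placeholder prompt
  inputs = tokenizer(prompt, return_tensors="pt")
  with torch.no_grad():
      logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

  # Distribution over the token that would come right after the prompt
  probs = torch.softmax(logits[0, -1], dim=-1)
  top = torch.topk(probs, k=5)
  for p, idx in zip(top.values, top.indices):
      print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")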
because they’re not legal entities
My header ended up looking like a permuted version of this:
I never manually configured any of those extra languages in the browser settings. All I had done was tell Chrome not to translate a few pages on some foreign news sites, and Chrome turned those one-off choices into persistent signals attached to every request. I'd be surprised if anyone in my vicinity shares my exact combination of languages in that exact order, so this seems like a pretty strong fingerprinting vector.
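To make the fingerprinting point concrete, here's a rough sketch; the header value below is made up, and the hashing is just illustrative of how a tracker could fold the string into a stable identifier:

  import hashlib

  # Hypothetical Accept-Language value with several "don't translate" languages
  # accumulated by the browser; the exact list and its ordering is what makes it rare.
  accept_language = "en-US,en;q=0.9,fr;q=0.8,de;q=0.7,sv;q=0.6,ja;q=0.5"

  # The string is identical on every request, so it hashes to the same value each
  # time and can be combined with other signals into a fingerprint.
  component = hashlib.sha256(accept_language.encode("utf-8")).hexdigest()[:16]
  print(component)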
There was even a proposal to reduce this surface area, but it wasn't adopted:
https://github.com/explainers-by-googlers/reduce-accept-lang...