I still have no idea what the "TPP" thing is...when I look at the other sources I just get things like "you will be banned from being able to modify the save file on your game"...huh really? What if I only support indie developers and publishers who don't enforce arbitrary restrictions? Vote with your purse etc.
And then there are a lot of "it could", "it might", "possibly"...it reeks of the same politics that "death panels" and other bullcrap come from. Following the sources and reading the actual proposals, it is much less sinister.
http://whytheheckshouldicareaboutthetpp.com/?f=10&q=6 - The word "unauthorized" should be in there...which makes for a much less powerful point.
Present a reasonable, scientifically backed argument...then maybe I will listen.
http://www.wikileaksparty.org.au/why-australians-should-be-w...
I just came back, in case it was a coincidence, and yep, it was not a coincidence.
The way I look at it is: yes, the software should keep a history of user behaviour and base its actions on that, but there must be feedback involved, either explicit or implicit. This way, if I gave some input to the system once but then never did so again, the likelihood that that one event should affect the future would diminish over time.
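The diminishing influence described above could be sketched as simple exponential decay. This is just one way to do it; the half-life value here is an arbitrary tuning parameter, not something from the comment:

```python
def event_weight(age_days: float, half_life_days: float = 30.0) -> float:
    """Weight of a past user action; halves every `half_life_days`.

    `half_life_days` is an assumed tuning knob -- pick it per application.
    """
    return 0.5 ** (age_days / half_life_days)

def score(events: list[tuple[float, float]], half_life_days: float = 30.0) -> float:
    """Combine (age_in_days, signal) pairs; older events count for less."""
    return sum(signal * event_weight(age, half_life_days)
               for age, signal in events)
```

A one-off action from months ago then contributes almost nothing, while a repeated recent action keeps its full weight, which is the feedback effect the comment asks for.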
There could be trickiness around "Bubbles" (like a Search bubble, where it only recommends to you things it thinks you'd like, and never shows you other things). I think those are problematic and should be dealt with. But I don't think that means it's impossible to fix. It's just something that needs to be thought through. I don't have an answer for it right now but that doesn't mean there isn't an answer.
Your statement is what I mean. "Thinking things through" should be done during design. Once you have built the system, it's much harder to compensate for design flaws.
Programming is not designing. Designing is not programming. Fixing bugs is not designing.
You have to design into the UI system a means for it to compensate for changes in user behavior. You don't want a system that takes many uses to train. At the same time you don't want a system that is trained by a single use. For me this is the crux of the problem.
The happy medium that automatically detects deviations from a user's 'normal' behavior _and_ takes the correct action is very hard to design, as it involves AI fuzzy logic.
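One minimal way to sketch that "detect deviations from normal" piece, without any fuzzy logic, is an online mean/variance model (Welford's algorithm) that flags observations far from the user's running baseline. The 3-sigma threshold and the numeric-signal framing are my assumptions, not the commenter's:

```python
class BehaviorModel:
    """Online mean/variance over a numeric behavior signal.

    Flags observations more than `k` standard deviations from the
    running 'normal'. k=3 is an assumed default threshold.
    """
    def __init__(self, k: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford)
        self.k = k

    def update(self, x: float) -> bool:
        """Return True if x deviates from learned behavior, then learn from it."""
        deviant = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            deviant = std > 0 and abs(x - self.mean) > self.k * std
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return deviant
```

Note that the model refuses to judge until it has seen at least two samples, and it keeps learning even from deviant inputs, so a single odd use neither trains the system by itself nor poisons it forever, which is exactly the tension the comment describes.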
It's analogous to security flaws. If there is a flaw in the design, no amount of bug fixing will make the system secure, unless that 'bug fixing' changes the design.
This work attempts to draw out the presence of a simulation by looking for tiny violations of known/expected symmetries, which is always worth doing. We should note, however, that an excellent simulation could cover its tracks exactly, masking any unintended artifacts of its underlying architecture.
From a CS perspective, the simulation's developer might create unit tests that guarantee the preservation of symmetries.
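To make that quip concrete, such a "symmetry unit test" might look like the following toy example, which checks that a distance function is invariant under rotation. The functions here are illustrative inventions, not anything from the paper:

```python
import math

def distance(p: tuple[float, float], q: tuple[float, float]) -> float:
    """Euclidean distance -- rotationally invariant by construction."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rotate(p: tuple[float, float], theta: float) -> tuple[float, float]:
    """Rotate a 2D point about the origin by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def test_rotational_symmetry() -> None:
    """Rotating both points must leave their distance unchanged."""
    p, q = (1.0, 2.0), (-3.0, 0.5)
    for theta in (0.1, 1.0, 2.5):
        assert math.isclose(distance(p, q),
                            distance(rotate(p, theta), rotate(q, theta)),
                            rel_tol=1e-12)
```

The joke, of course, is that a developer whose test suite enforces symmetries this way would leave exactly zero violations for the experimentalists to find.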
A theorist who develops an experimental test that could prove we do not live in a simulation should be lauded. There are experimentalists (like me) just waiting for such guidance. John Bell's theorem [1] was/is a shining guidestar for physics.
That sounds like they would be proving a negative, which is impossible to do.