As it’s picking up steam, I’ve been hearing stories recently about how our local “school district decided to ban phones from classrooms” and just yesterday it was “the school will no longer allow food delivery services to drop off food”. Like, educators, WTF, why was that ever an option? In my days long ago, 80s-90s primary school, there was a zero tolerance policy for this stuff. Why was it ever deemed allowable? I can see letting kids keep their phone in their locker or create some storage solution for it. For emergency purposes. But in emergencies, the parent should be able to call the office and they can fetch the kid. It worked just fine in the days of landlines.
It’s hard for me to understand the parenting styles that demanded and allowed this stuff to take place, because I’m sure it was parent driven. But there’s so much else to the parenting styles that are contributing to all this stuff. Banning outdoor play and independence is why they’re online so much and why the arcades and third places all disappeared.
I say all this as a parent of an almost 6 year old boy, doing everything I can to shield him from the wacky parenting style that seems to be the norm and provide him places of community and activities away from screens. He won’t have a phone until he drives, or maybe just a basic flip phone if we think we need a communication line to reach him when he’s a bit older.
This is possibly a bit extreme, imo. In a world that is ever increasingly digital, responsible exposure is without a doubt necessary; however, it seems that one could also inadvertently foster naiveté and ignorance of our digital reality, which has its own potential pitfalls. The "right" answer is probably somewhere in the middle. As usual.
If the delay is long enough, the output does not just feel delayed, but entirely unrelated to the input.
A latency perception test involving a switch can easily be thrown off by a disconnect between the actual point of actuation and the end-user's perceived point of actuation. For example, under high system latency, the user might feel that the switch actuates only after the button has physically bottomed out and been squeezed with extra force, as if they were trying to mechanically induce the action. Then, once the artificial latency is removed, they may be surprised to find that the actuation point is actually at less than half the key travel.
Without knowing the details of the experiment, I think this is a more likely explanation for a perception of negative latency: Not intuitively understanding the input trigger.
It was for this reason that I, and many others, for a short period, got objectively "worse" at the game when we switched to ISDN/Cable and suddenly found ourselves with 20-30ms pings; our brains were still adding the compensating lead time when firing.
The only time I normally use a debugger is for post-mortem debugging - looking at a production core file for a multi-threaded process to see where it was in the code (and maybe inspect a few variables) when it crashed. If the core isn't sufficient to isolate the problem, then I'll add more logging to isolate it next time it happens.
During development, printf is just so much more convenient. I'll always put print/log statements in my code both to trace execution flow and to validate assertions / check that variables hold the kinds of values I'm expecting. Often this sort of pre-emptive debugging is all you need, and if not, it points to exactly where you need to sprinkle a few more print statements to isolate the issue.
The convenience of print statements is that once you've put them there, at critical points in your code, printing critical values, they are always there (and can be compiled out once the code is working, if wanted). Compare that to a debugger, where you have to navigate to points in the code, set up breakpoints, monitor variables, etc., every session.
This actually strikes me as a good thing. The more we can get big dumb ads out of meatspace and confine everything to devices, the better, in my opinion (though once they figure out targeted ads in public that could suck).
I know this is an unpopular opinion here, but I get a lot more value out of targeted social media ads than I ever did billboards or TV commercials. They actually...show me niche things that are relevant to my interests, that I didn't know about. It's much closer to the underlying real value of advertising than the Coca-Cola billboard model is.
> A lot of younger folks I know don't even bother with an ad-blocker, not because they like them, but simply because they've been scrolling past ads since they were shitting in diapers. It's just the background wallpaper of the Internet to them, and that sounds (and is) dystopian...
Also this. It's not dystopian. It's genuinely a better experience than sitting through a single commercial break of a TV show in the 90s (of which I'm sure we all sat through thousands). They blend in, they are easily skippable, and they don't dominate nearly as much of your attention. It's no worse than most of the other stuff competing for your attention. It doesn't seem that difficult to me to navigate a world with background ad radiation. But maybe I'm just a sucker.
You are describing two different advertising strategies that have differing goals. The billboard/TV commercial is a blanket type that serves to foster a default in viewers' minds when they consider a particular want/need. Meanwhile, the targeted stuff tries to identify a need you might be likely to have and present something highly specific that could trigger or refine that interest.