The answer is not simple, as it has changed over time.
Early anti-lock systems were so limited that they would, indeed, fail to utilize the maximum possible braking force. This was known, and yet they were deployed anyway, because the research showed that maintaining directional control, the primary benefit of anti-lock, had greater safety value than what was given up in maximum braking performance.
Today, however, anti-lock is greatly improved. Modern systems are capable of applying extremely high braking force, even to the point of exceeding the thermal design limits of the brake components (overheating them). The sensors sample at higher frequency, the braking models are far more accurate, and the computers are faster, so current anti-lock can perform near the absolute limit.
Further, current systems can actually detect panic. Drivers often fail to even use the full braking force available. Current vehicles can detect when sudden, high braking force is applied, switch into "emergency mode" and boost braking force beyond what the driver is demanding.
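As a purely illustrative sketch of the "panic mode" heuristic described above (all names and threshold values here are hypothetical, not any manufacturer's actual logic): the system watches not just how hard the pedal is pressed, but how fast that force rises, and boosts to full braking when both cross a threshold.

```python
# Hypothetical brake-assist sketch. Pedal input is normalized to [0, 1];
# the rate is the pedal's rise per second. Thresholds are invented for
# illustration only.

PANIC_PEDAL = 0.6   # pedal must be pressed at least this hard...
PANIC_RATE = 5.0    # ...and this fast (full travel in < 200 ms)

def brake_command(pedal: float, pedal_rate: float) -> float:
    """Return the braking effort to apply, in [0, 1].

    Normally the driver's pedal input passes straight through. If the
    pedal is applied both hard and suddenly, assume an emergency stop
    and command maximum braking regardless of the actual pedal position.
    """
    if pedal > PANIC_PEDAL and pedal_rate > PANIC_RATE:
        return 1.0  # emergency mode: full braking force
    return pedal

# Gentle stop: driver's input is passed through unchanged.
assert brake_command(0.3, 1.0) == 0.3
# Hard, sudden stab at the pedal: boosted to maximum.
assert brake_command(0.8, 10.0) == 1.0
```

Real implementations are of course far more involved (they blend this with wheel-slip control and release the boost when the driver backs off), but the detection idea is this simple at its core.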
These features started appearing in the early 2000s. Nissan, for instance, introduced "Brake Assist" in 2001, with the 2002 Altima redesign (L31 platform) and the Maxima. It has the "panic mode" behavior I've described.
I personally experienced this once about 15 years ago. Somehow I distracted myself, and when my attention returned I was closing with stopped vehicles at too high a speed and too little remaining distance. Collision was certain and my foot crushed the brake: pure panic, and I never let up. I came to a stop in a blessedly unoccupied left turn lane, almost aligned with the stopped vehicles. I recall looking over my hood at the driver of the car I'd nearly hit: between us was a wisp of brake smoke drifting up from the passenger fender. I could smell the brakes. The anti-lock did that. If I had had no anti-lock, all directional control would have been lost and my maneuver into the unoccupied lane would have been impossible. If the anti-lock had not applied as much force as it did, I would have ended up in the intersection, possibly getting t-boned.
So they're pretty good today, and I appreciate modern anti-lock designs.
I'm sure there are users with specialized needs who need something more complex, but I don't think Microsoft Office is quite the moat it used to be.
You know back when I built my computers, not once did I ever use any kind of static electricity discharge “system”. No wrist strap, no mat, no anything. And I don’t know anybody who did.
Has anybody ever actually destroyed a chip with static electricity?
(Of course it could be the climate I lived in as well)
Modern IC ESD protection is very effective against a few moderate-energy events distributed across different pins, and there are a few industry standards that help determine the required amount of caution for dealing with a particular IC (HBM or human-body model, and CDM or charged-device model, are common, targeted toward human assembly procedures and things like triboelectric or inductive charge buildup). In the right climate, a single high-energy event is sometimes enough to degrade functionality or (rarely) completely destroy the device, so board assembly and semiconductor manufacturing facilities still require workers to use wrist straps, shoe grounders, mats, treated floors, climate control, etc. Some high-voltage GaN work I did years ago required ionizing blowers (basically a spark gap with a fan) because GaN gates are easy to destroy with gate overstress, and there are risks involved with unintended high-voltage contact with typical ESD protective solutions. In another, embedded-focused lab, the only time I've ever seen someone put on a wrist strap was for handling customer hardware returns. It really depends what you're working with, and in what environment.
I've more frequently (once or twice a year) had devices that exhibit symptoms of something being wrong at the inputs or the outputs, but only on a specific pin or port. For outputs, some symptoms include inadequate slew rate, an output that appears stuck sometimes, or higher-than-expected voltage noise (though this is a non-exhaustive list). For inputs, the symptoms are more complex: sometimes there's a manifestation at the outputs for amplifiers or other linear circuits, but feedback systems or digital systems might behave as though an input is stuck, toggling slowly, etc., which is difficult to distinguish from other, more common errors. I've also directly been the cause of several ESD failures, but in those cases the test objective was to determine the failure thresholds for the system, so I'm not sure that counts.
I've had a customer hardware failure that was eventually traced back to electrical overstress damage on a single pin of an IC near the corner of a board, right where someone might put their thumb if they were holding the board in one hand. In the absence of a better explanation, I suggested this was an ESD failure due to handling error. I never heard about it again, which is weak evidence in favor of a one-off ESD event.
Yeah, don't use these guys. They have a tendency to swap translation direction in the presence of electrical noise, which means your input is now an output, cross-driving something. Sometimes everything survives just fine and switches back on the next edge. Sometimes the magic smoke comes out. And sometimes, if the stars align just wrong, you get an industrial accident.
This is one of those classes of parts that has hidden dangers and really should not be as prominently advertised as it is. They look simple, but they're for experts only. Don't use them unless you really know their failure modes and don't have another reasonable option.
F-strings are great, but trying to remember the minute differences between string interpolation, old-style formatting with %, and new-style formatting with .format() is sort of a headache, and there are cases where switching between them with some regularity is unavoidable (custom __format__ methods, templating strings, logging, etc.). It's great that there are ergonomic new ways of doing things, which makes it all the more frustrating to regularly have to revert to older, less polished solutions.
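To make the coexistence concrete, here's a small sketch of the three styles producing the same string, plus two of the cases mentioned where the older mechanisms still show up (the Celsius class is an invented example):

```python
import logging

name, value = "temperature", 21.5

old = "%s: %.1f" % (name, value)          # old-style % formatting
new = "{}: {:.1f}".format(name, value)    # new-style .format()
fstr = f"{name}: {value:.1f}"             # f-string (Python 3.6+)

assert old == new == fstr == "temperature: 21.5"

# Logging conventionally keeps %-style with deferred interpolation:
# the string is only built if the record is actually emitted.
logging.getLogger(__name__).debug("%s: %.1f", name, value)

# A custom __format__ hooks into the same format-spec mini-language
# that both f-strings and .format() use.
class Celsius:
    def __init__(self, deg):
        self.deg = deg

    def __format__(self, spec):
        # Delegate to float formatting, then append the unit.
        return format(self.deg, spec or ".1f") + " C"

assert f"{Celsius(21.5)}" == "21.5 C"
assert "{:.2f}".format(Celsius(21.5)) == "21.50 C"
```

The logging case is the one that bites most often: f-strings evaluate eagerly, so passing them to a logger does the formatting work even when the log level filters the message out.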
So why is this specific case the one that gets people out with pitchforks? There are thousands of other cases, and they all should have been laughed at until nobody proposes a personality test again. It's all bad, and the attention on this one case makes me doubt people genuinely care about profiling and broken hiring in general (rather than just joining the DEI-bad bandwagon).
edit: I might be giving some random episode too much weight. Seems like Elixir and Erlang are actually quite well liked around here
I've also always had an admiration for the Falstad circuit simulation tool[0], as the only SPICE-like simulator that visually depicts the magnitude of voltages and currents during simulation (and not just on graphs). I reach for it once in a while when I need to do something a bit bigger than I can trivially fit in my head, but not so complex that I feel compelled to fight a more powerful but significantly shittier-to-work-with IDE to extract an answer.
Schematics work really well for capturing information that's independent of time, like physical connections or common simple functions (summers, comparators, etc.). Diagrams that include time sacrifice a dimension to show sequential progress, which is fine for things that have very little changing state attached, or where query/response is highly predictable. Sometimes animation helps restore the lost dimension for systems with time-evolution. But beyond trivial things that fit on an A4 sheet, I'd rather represent the time-evolution of system state with timing diagrams. I don't think there are many analogous situations in typical programming applications that call for timing diagrams, but they are absolutely foundational for digital logic applications and low-level hardware drivers.