My screenplays are heavily influenced by Japanese anime (which I have researched to a great degree[0]). Some anime have _a lot_ of that kind of dialogue. Sometimes it's just bad writing, but other times it is actually extremely useful.
The times when it is useful are crucial for making a film or show, especially a live-action one, feel like anime. Thought processes like those presented in the article make it seem like all on-the-nose dialogue is bad and, in turn, make my job much harder.
The other problem with it: to me, as an adult, it feels like whoever wrote this assumed I'm stupid. This sort of writing is okay, to a certain degree, for kids. But for adults? A lot of anime is aimed at younger audiences; anime written for adults is done very differently.
The Matrix is heavily influenced by manga/anime, which you can see in how quite a few scenes are shot. But many of the explanations it gives are part of Neo's development, so they never really feel out of place.
Cyberpunk 2077 does have on-the-nose dialogue here and there, mostly from random NPCs spouting stuff. But by and large it tells its story not just through dialogue but also visually. And the visual aspect is so strong that some reviewers completely failed at reviewing the game; they were unable to grasp it. Which is a huge issue, because we are talking about adults here.
This affects brightness and contrast: For emissive displays, you can have emissive values that are several to many orders of magnitude brighter than the 'black point', and more importantly, the primaries are defined by the display, not by ambient illumination.
Part of the magic of HDR displays is exploiting local masking (a human perceptual quirk) to drive bright regions of a display much brighter than the darker regions, so you can achieve even higher contrast ratios than the base technology (LED back-illuminated LCD panels, for many consumer TVs) could on its own. Basically, a bright pixel will cause nearby pixels to be brighter, because you can't see the dark details near a bright region anyway; meanwhile, other regions can be driven darker, where you can perceive more detail in the blacks. This is achieved by illuminating sections of the display at significantly higher or lower levels, based on what your eyes/brain can actually perceive, which leads to significantly higher contrast ratios.
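For intuition, here is a toy sketch of zone-based local dimming (not any vendor's actual algorithm), assuming a simple max-luminance heuristic per backlight zone and ideal compensation of the LCD layer:

```python
import numpy as np

def local_dimming(luma, zone=32):
    """Toy local dimming: drive each backlight zone by its max luminance,
    then compensate the LCD layer so that luma ~= backlight * lcd."""
    h, w = luma.shape
    backlight = np.zeros_like(luma)
    for y in range(0, h, zone):
        for x in range(0, w, zone):
            block = luma[y:y + zone, x:x + zone]
            # A single bright pixel lifts its whole zone; masking hides the cost.
            backlight[y:y + zone, x:x + zone] = block.max()
    lcd = np.divide(luma, backlight, out=np.zeros_like(luma), where=backlight > 0)
    return backlight, lcd

# A mostly dark frame with one bright highlight: the highlight's zone is driven
# hard while distant zones stay near-black, raising the zone-level contrast.
frame = np.full((128, 128), 0.02)
frame[10:20, 10:20] = 1.0
backlight, _ = local_dimming(frame)
print(backlight.max() / backlight.min())   # 50.0 for this toy frame
```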
(As a heuristic: photographers generally say you can only get ~5 stops of contrast out of a print, i.e. the brightest areas are 2^5 times brighter than the darkest regions. Modern HDR displays can do 2^10 or better. YMMV.)
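For concreteness, each stop is a doubling, so those figures translate to ratios roughly like this:

```python
# Each photographic stop is a doubling, so contrast ratio = 2 ** stops.
print(2 ** 5)    # 32   -> about a 32:1 ratio for a reflective print
print(2 ** 10)   # 1024 -> about a 1024:1 ratio for a capable HDR display
```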
But this also affects color... much of the complexity in getting printers to match a display derives from the mismatch between gamuts built on differing primaries, as filtered through human perception (and/or perceptual models). And with a print you can't control the ambient illumination, so you're at the mercy of whatever spectrum your light source has, plus whatever adaptation state the viewer is in. This feels fundamentally impossible to do "correctly" under all circumstances.
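As a rough illustration of the adaptation side, a von Kries-style chromatic adaptation (sketched below with the Bradford matrix) rescales cone responses toward whatever white the viewer is adapted to; the catch is that for a print you would need to know the ambient white point to apply it, which is exactly what you can't control:

```python
import numpy as np

# Bradford cone-response matrix, as used in common von Kries-style adaptation.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adapt(xyz, src_white, dst_white):
    """Map an XYZ color so that a viewer adapted to dst_white perceives it
    roughly as a viewer adapted to src_white perceived the original."""
    src = BRADFORD @ src_white
    dst = BRADFORD @ dst_white
    m = np.linalg.inv(BRADFORD) @ np.diag(dst / src) @ BRADFORD
    return m @ xyz

# D65 (daylight) vs. illuminant A (tungsten): the same stimulus needs very
# different values once the viewer's adaptation state changes.
D65 = np.array([0.9505, 1.0000, 1.0890])
A   = np.array([1.0985, 1.0000, 0.3558])
print(adapt(np.array([0.3, 0.4, 0.5]), D65, A))
```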
Which is to say, the original sin of color theory is the dimensional collapse from a continuous spectrum to a 3-dimensional, discretized representation. It's a miracle we can see color at all...!
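To make that collapse concrete, here is a small sketch that reduces an entire spectrum to three tristimulus values by integrating it against three sensitivity curves. The Gaussians below are loose stand-ins for the CIE color matching functions, not the real tables; the point is only that very different spectra can land on nearly the same three numbers (metamerism):

```python
import numpy as np

wl = np.arange(380, 781, 5)  # visible wavelengths, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Loose stand-ins for the CIE x-bar, y-bar, z-bar color matching functions.
xbar = 1.06 * gauss(599, 38) + 0.36 * gauss(446, 19)
ybar = 1.01 * gauss(556, 46)
zbar = 1.78 * gauss(449, 22)

def to_xyz(spectrum):
    """Collapse a full spectral power distribution to just three numbers."""
    return np.array([np.trapz(spectrum * cmf, wl) for cmf in (xbar, ybar, zbar)])

# A flat spectrum and a spiky three-peak spectrum are physically very different,
# yet each ends up as a single XYZ triple.
flat  = np.ones_like(wl, dtype=float)
spiky = 2.0 * (gauss(450, 15) + gauss(550, 15) + gauss(610, 15))
print(to_xyz(flat))
print(to_xyz(spiky))
```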
In itself that is correct, but as you've noted, our own visual system doesn't operate like that. The same display brightness and colors will be perceived very differently depending on the ambient light's brightness and color, and strong ambient light can also severely cut into the dynamic range a display can actually make visible.
And this ambient light also clearly impacts how prints are seen.