That doesn’t mean vision didn’t improve. It’s just a bit odd that, if it did, they didn’t quantify it and include that in the paper.
>the majority of patients regained some sight, with some advancing from legally blind to low vision.
That's just aggressive garbage collection happening when you wake up, arguably so you don't risk confusing dreams and reality :-)
But you can partially work around it by taking notes right after waking up, before doing anything else.
Anyone can generate a big portfolio of projects these days (be it graphics, video, software, writing, etc.), but blog posts from 2023 and earlier are undeniable proof.
Second, tweaking your local setup is more about 'ergonomics' than 'productivity'. Working with a more ergonomic setup may yield the same output, but it's more enjoyable.
The efforts in this space by defensive organizations are laudable, but very, very immature. There's this meme that has crossed over into the software space: the planes that come back with a lot of holes in them, indicating the regions where extra armor plating is actually the least important. The commercial spyware industry is a lot like that.

Those stories you see of people finding exploits via crash logs and iOS databases? That's the lowest-hanging fruit. People who know what they are doing are not leaving traces there. And pretty soon those who don't will stop dropping things there too. It's really, really important to understand that the well of detections these people are sipping from will dry up very soon.

The proposed solutions from the talk are not nearly enough to help. Some of the things they're asking for (process lists, for example) are already exposed, but we're currently in the Stone Age of iPhone forensics on the defensive side. Those on offense, who are incentivized by money but also now by necessity, will far outstrip any attempts to catch them after-the-fact :(
To quantify it, you'd need measurable changes. For example, if you showed that after widespread LLM adoption, standardized test scores dropped, people's vocabulary shrank significantly, or critical thinking abilities (measured through controlled tests) degraded, you'd have concrete evidence of increased "dumbness."
But here's the thing: even the simplest work, like a college research paper, always has value depending on context. A student rewriting existing knowledge into clearer language is doing something useful: they improve comprehension or provide easier access. It's still useful work.
Yes, by default, many LLM outputs sound similar because they're trained to optimize for the broad consensus of human writing. But it's trivially easy to give an LLM a distinct personality or style. You can have it write like Hemingway or Hunter S. Thompson. You can make it sound academic, folksy, sarcastic, or anything else you like. These traits demonstrably alter output style, information handling, and even the kind of logic or emotional nuance applied.
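To make that concrete, here's a minimal sketch of how persona prompting usually works with chat-style LLM APIs: a system message fixes the voice before the user's request. `build_messages` is a hypothetical helper for illustration, not part of any specific library.

```python
def build_messages(persona: str, user_text: str) -> list[dict]:
    """Prepend a system message that fixes the model's voice.

    `persona` is any free-text style description; the same user
    question then gets answered in that voice.
    """
    return [
        {"role": "system", "content": f"You write in the style of {persona}."},
        {"role": "user", "content": user_text},
    ]

# Same question, two very different voices:
hemingway = build_messages(
    "Ernest Hemingway: short, declarative sentences", "Explain recursion."
)
gonzo = build_messages(
    "Hunter S. Thompson: frantic, first-person gonzo prose", "Explain recursion."
)
```

The point is that the style lever is a one-line change to the prompt, which is why "all LLM output sounds the same" says more about default prompts than about the models.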
Thus, the argument that all LLM writing is homogeneous doesn't hold up. Rather, what's happening is people tend to use default or generic prompts, and therefore receive default or generic results. That's user choice, not a technological constraint.
In short: people were never uniformly smart or hardworking, so blaming LLMs entirely for declining intellectual rigor is oversimplified. The style complaint? Also overstated: LLMs can easily provide rich diversity if prompted correctly. It's all about how they're used, just like any other powerful tool in history, and just like my comment here.
This is how I know this comment was written by an AI.