It’s a little ridiculous to admit that a significant part of my education was an exercise in copying information over by hand, but it’s simply true that this method reliably worked for me.
Also: my reading speed was ungodly slow. I think I considered it typical to spend 3 hours on 10 textbook pages. Sometimes it took longer. But the information stuck, and I knew it well.
I wonder if my contrary experience is linked to my being mostly aphantasic and also lacking an internal monologue. Verbal input and output are activities I have to engage in, which takes me out of my default mode of thinking. And they are somewhat mutually exclusive. Roughly speaking, it is like I have different mental postures for these. I think easily in a "resting" state. Figuratively, I have to "sit up" (for reading) or "stand up" (for listening). To write or speak, I go further into a variant "fighting" posture, e.g. getting myself centered and my reflexes cranked up more.
Also, I feel like anything I really learn is merged into my unified "world model" almost immediately or with a very short latency. But, I have very poor rote memory. I don't memorize what I hear or read. I extend my understanding and then can speak from that understanding later, in my own words. I do best when I can learn something abstractly and synthesize a bunch of related ideas from that understanding. I can infer my own abstractions, but I need to do so rapidly before I lose the examples being communicated.
I struggle when there is an expectation to memorize disconnected examples and defer the abstraction. If I don't generally understand new content in real-time as I listen or read, it is just noise. I cannot recall content I didn't understand in order to figure it out later. I only retain the meta-memory that I was exposed to and rejected some arbitrary noise...
I think you want a metaphor that doesn't also depend on its literal meaning.
Each such scalar operation is on a fixed-width primitive number, which is where we get into the question of what numeric types the hardware supports. E.g. we used to worry about 32 vs 64 bit support in GPUs, and now everyone is worrying about smaller widths. Some image processing tasks benefit from 8 or 16 bit values. Lately, people are dipping into heavily quantized models that can benefit from even narrower values. The narrower values mean a smaller memory footprint, but also generally mean that you can do more parallel operations with a "similar" amount of logic, since each ALU processes fewer bits.
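A back-of-the-envelope sketch of that width/parallelism trade-off, assuming a hypothetical 512-bit vector register (the exact register width varies by hardware, so treat the constant as illustrative):

```python
# A fixed-width vector register holds more lanes as the element width shrinks.
# REGISTER_BITS is an assumption for illustration, not any specific chip.
REGISTER_BITS = 512

def lanes(element_bits: int) -> int:
    """How many elements of a given width fit in one register."""
    return REGISTER_BITS // element_bits

print(lanes(64))  # 8 double-precision lanes
print(lanes(32))  # 16 single-precision lanes
print(lanes(8))   # 64 int8 lanes: 8x the parallel ops of the 64-bit case
```

The same budget of bits (and very roughly, gates) yields 8x the parallel operations when you drop from 64-bit to 8-bit elements, which is the appeal of heavy quantization.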
Where this lane==ALU analogy stumbles is when you get into all the details about how these ALUs are ganged together or in fact repartitioned on the fly. E.g. a SIMD group of lanes share some control signals and are not truly independent computation streams. Different memory architectures and superscalar designs also blur the ability to count computational throughput, as the number of operations that can retire per cycle becomes very task-dependent due to memory or port contention inside these beasts.
And if a system can reconfigure the lane width, it may effectively change a wide ALU into N logically smaller ALUs that reuse most of the same gates. Or, it might redirect some tasks to a completely different set of narrower hardware lanes that are otherwise idle. The dynamic ALU splitting was the conventional story around desktop SIMD, but I think is less true in modern designs. AFAICT, modern designs seem more likely to have some dedicated chip regions that go idle when they are not processing specific widths.
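A loose software analogy for that repartitioning (this only models reinterpreting the same bits at different widths, not actual ALU gate reuse):

```python
import struct

# The same 128 bits can be viewed as four 32-bit lanes or sixteen 8-bit lanes.
raw = struct.pack("<4I", 1, 2, 3, 4)  # 16 bytes = one 128-bit "register"

as_u32 = struct.unpack("<4I", raw)    # 4 wide lanes
as_u8 = struct.unpack("<16B", raw)    # 16 narrow lanes over the same bits

print(len(as_u32), len(as_u8))  # 4 16
```

In hardware, the analogous split also has to sever carry chains and duplicate some control logic, which is part of why modern designs may prefer dedicated narrow-width units instead.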
e: I thought I had opted out of everything that was opt-out-able in TMo's privacy settings <https://www.t-mobile.com/privacy-center/dashboard/controls> years ago when I first set up my line/account, but I just checked again and more than half of the settings were enabled. Hate that I have to be in the habit of looking for new settings that default to enabled.
I'm not sure whether to think the Mint MVNO on T-Mobile is better about privacy than T-Mobile. Or do you have some phone apps that are really the guilty party linking your phone number to your travel locations...?
For [2] you have no reference whatsoever. How does AI replace a nurse, a vet, a teacher, a construction worker?
I'm afraid it's really a matter of faith, in either direction, to predict whether an AI can take over the autonomous decision making and robotic systems can take over physical actions which are currently delegated to human professions. And, I think many robotic control problems are inherently solved if we have sufficient AI advancement.
After all, I can always pick up LLMs in the future. If a few weeks is long enough for all my priors to become stale, why should I have to start now? Everything I learn will be out of date in a few weeks. Things will only be easier to learn 6, 12, 18 months from now.
Also, nowhere in my post did I say that LLMs can’t be useful to anyone. In fact, I said the opposite. If you like LLMs or benefit from them, then you’re probably already using them, in which case I’m not advocating anyone stop. However, there are many segments of people for whom LLMs are not a fit. No tool is a panacea. I’m just trying to nip any FUD in the bud.
There are so many demands for our attention in the modern world to stay looped in and up to date on everything; I’m just here saying don’t fret. Do what you enjoy. LLMs will be here in 12 months. And again in 24. And 36. You don’t need to care now.
And yes I mentor several juniors (designers and engineers). I do not let them use LLMs for anything and actively discourage them from using LLMs. That is not what I’m trying to do in this post, but for those whose success I am invested in, who ask me for advice, I quite confidently advise against it. At least for now. But that is a separate matter.
EDIT: My exact words from another comment in this thread prior to your comment:
> I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy.
How does someone reconcile a faith that AI tooling is rapidly improving with the contradictory belief that there is some permanent early-adopter benefit?
> Starlink also states that "Standby Mode is not intended for constant, maritime, or high-bandwidth use," although the terms do not explicitly prohibit this, and we don't know if or how Starlink would enforce this intention.
>
> Additionally, Standby mode is only intended for use for 12 months or less. After that, Starlink can, in its discretion, require either a move to a standard plan or loss of all connectivity except for access to the user's Starlink account.
Nothing to get excited about here, then. It's not a plan, per se. It's an add-on. I would not resort to it for IoT, surveillance, etc.
It's also one of the many frequent changes they introduce to their plans, so I would especially not rely on this staying as is for long.
https://www.rvmobileinternet.com/starlink-drops-roam-10gb-pl...
Suspension is a total pause of service at zero recurring cost, for up to 12 months. The enabled rate is a new tier of service for about $8/mo that supports SOS and pay-as-you-go message pricing for any other use.
It is interesting to see that some competition in this area may actually start to redefine the offerings.
In this case, what they're doing is clearly going beyond their lawn and negatively impacting you.
It's weird to suggest that "spraying poison on your neighbors" is deemed acceptable, as long as you're standing on your own property when you do it. If they were standing on their lawn throwing rocks at your apple trees, or shooting a gun at your apples, we wouldn't say they're free to do whatever they like. Heck, we don't even let people play loud music if it disturbs their neighbors.
We really need to update our mental models of harm and violence to account for modern possibilities. We should treat harm from pollution exactly as seriously as we treat harm from projectiles. Dying of cancer from your neighbor's incidental pollution is just as bad as dying from a bullet from your neighbor's errant gunshot.
I'm actually back in the same California neighborhood I grew up in, which has adjoining open space. In the 50 years of cumulative time my parents or I have been there, we've never needed exterminators. At most, a can of ant spray from the supermarket was sufficient to treat around a door or window in a problematic season. I'm talking about such events once every 5-10 years. Meanwhile, exterminator vans are seen in the neighborhood quite frequently. I think some folks just see a bug, absolutely freak out, and want to nuke it all from orbit.
I think it's nearly a mental illness, how people want to detach from the natural world. As if their self-image is not that of a complex animal but of some sort of sterile abstraction.
And unlike the human who spent multiple hours writing that article, an LLM would have linked to the original study: https://services.google.com/fh/files/misc/measuring_the_envi...
[ETA] Extending these numbers a bit: the mean human uses 1.25 kW of power (Kardashev level 0.7 across 8 gigahumans), and the mean American uses ~8 kW of power according to https://en.wikipedia.org/wiki/List_of_countries_by_energy_co.... So if we align AIs to be eco-friendly, they will definitely murder all humans for the sake of the planet /s
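For anyone checking that arithmetic, a quick sketch using Sagan's interpolated Kardashev formula, K = (log10(P) - 6) / 10 with P in watts:

```python
import math

HUMANS = 8e9              # "8 gigahumans"
WATTS_PER_HUMAN = 1.25e3  # mean per-capita power from the figure above

total_watts = HUMANS * WATTS_PER_HUMAN          # 1e13 W for all of humanity
kardashev = (math.log10(total_watts) - 6) / 10  # Sagan's interpolation

print(f"{total_watts:.0e} W -> Kardashev {kardashev:.1f}")
```

So the 1.25 kW figure and the level-0.7 figure are consistent with each other: 10^13 W total sits at K = 0.7 on Sagan's scale.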
Unless your point is that we can kill a bunch of humans to save energy...?