Names are just names. It’s nice if they are kind of unique and have no collisions.
But to me it's still unclear what a good naming culture would look like for programmers.
[1] https://en.wikipedia.org/wiki/Astronomical_naming_convention...
This article would certainly disagree with you:
https://en.wikipedia.org/wiki/List_of_U.S._Department_of_Def...
> the Golden Gate Bridge tells you it spans the Golden Gate strait.
Is that even a meaningful distinction? Does anyone think, "Gee, I'd really like to cross the Golden Gate strait," or do they think, "I want to get to Napa"?
> The Hoover Dam is a dam, named after the president who commissioned it, not “Project Thunderfall” or “AquaHold.”
It was actually called the "Boulder Canyon Project" while being built, was referred to as "Hoover Dam" even though it was finished during the Roosevelt administration, was officially called "Boulder Dam", and only later was officially renamed "Hoover Dam".
The fact that Herbert Hoover initiated the project tells you nothing meaningful about it. Would "Reitzlib" be a better name than "Requests"?
> If you wrote 100 CLIs, you will never encounter a cobra.
But out in the real world, you could encounter a Shelby Cobra sports car, Bell AH-1 Cobra chopper, USS Cobra (SP-626) patrol boat, Colt Cobra handgun, etc.
> No chemist wakes up and decides to call it “Steve” because Steve is a funny name and they think it’ll make their paper more approachable.
When you open your medicine cabinet, do you look for a jar labeled "acetylsalicylic acid", "2-propylvaleric acid", or "N-acetyl-para-aminophenol"? Probably not.
It's a bad sign when all of the examples in an article don't even agree with the author's point.
We can argue about namespace pollution and overly long names, but I think there's a point there. When I look at other professions' jargon, I never have the impression they are catching Pokémon like programmers do.
Except for the ones with Latin and Greek names, but old mistakes die hard and they're not bragging about their intelligibility.
> Even on a low-quality image, GPT‑5.2 identifies the main regions and places boxes that roughly match the true locations of each component
I would not consider it to have "identified the main regions" or to have "roughly matched the true locations" when ~1/3 of the boxes have incorrect labels. The remark "even on a low-quality image" is not helping either.
Edit: credit where credit is due, the recently-added disclaimer is nice:
> Both models make clear mistakes, but GPT‑5.2 shows better comprehension of the image.
And that's assuming the technical solution is deployed everywhere. I'm in the EU with one of those IDs, and I still had to upload photos of my passport and scan my face to open a bank account. The identification process even had its own app that I had to install.
Turns out that a handful of FAQ answers have a chat widget (with a chatbot, of course) that can be coaxed into switching out to a human. But if your topic is not on the FAQ, the answer doesn't have a chat widget, or you don't randomly click around other topics, you'll never find a contact form.
Even the "complaints" email address found in their legally-mandated Impressum just auto-replies with instructions to use the app help. I've since closed my account, but I'm still amazed how a company holding people's money can shield itself so completely from customers.
> That's something I miss from Windows, at least PowerShell has built-in commands that give you structured output.
It sure is something to disparagingly point to the LoC of 'ss' in one sentence, then pine for both PowerShell and the Windows infrastructure that supports it in the next.
You mentioned processing the output with regexes. That's definitely a code smell, but here is one line of the data from the 'ss' command in question, fancily-aligned header line included, with vast tracts of whitespace removed. The regex you pointed out processes the column whose comma-separated data is enclosed in parens:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
tcp LISTEN 0 666 [::]:22 [::]:* users:(("sshd",pid=1337,fd=7)) ino:1338 sk:2024 cgroup:/openrc.sshd v6only:1 <->
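To make the discussion concrete, here's a sketch of what that kind of extraction looks like. The pattern below is my own illustration of one way to pull the process column apart, not the regex from the project linked elsewhere in the thread:

```python
import re

# The sample 'ss' line from above (whitespace-compacted).
line = ('tcp LISTEN 0 666 [::]:22 [::]:* '
        'users:(("sshd",pid=1337,fd=7)) ino:1338 sk:2024')

# Illustrative pattern: matches a single ("cmd",pid=N,fd=N) entry
# inside the users:((...)) column.
m = re.search(
    r'users:\(\("(?P<cmd>[^"]+)",pid=(?P<pid>\d+),fd=(?P<fd>\d+)\)\)',
    line,
)
if m:
    cmd, pid, fd = m["cmd"], int(m["pid"]), int(m["fd"])
```

It works, but every consumer of 'ss' output ends up hand-rolling something like this, which is exactly the fragility being complained about.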
They definitely didn't have to use a regex to process that, but chose to. You could argue that a system that lets you write client code that goes something like
socket[i].process.users[0].cmd, socket[i].process.users[0].pid, socket[i].process.users[0].fd
is superior to one that requires writing something that makes use of the moral equivalent of 'cut'. I'd argue two things, one of them informed by my professional experience with PowerShell.

1) What happens when the "structured" data you rely on changes shape? When the system that produces that "structured" data changes 'users' to 'user_list', 'cmd' to 'local_command', or deletes 'process' and moves 'users' up into its place, you're just as screwed as if 'ss' changed its output format in a way that wasn't backwards-compatible.
2) The core Microsoft tools might all produce "structured" data, but -in my professional experience- so, very, very little "community-provided" PowerShell code does. Why? I don't know for sure, but probably because it's notably more difficult to make a script or library produce that sort of data than it is to just emit regularly-formatted text.
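Point 1 is easy to demonstrate. Here's a toy example (the field names mirror the hypothetical socket[i].process.users[0].cmd access path above, not any real tool's schema):

```python
import json

# A consumer written against the original "structured" shape.
old = json.loads(
    '{"process": {"users": [{"cmd": "sshd", "pid": 1337, "fd": 7}]}}'
)
assert old["process"]["users"][0]["cmd"] == "sshd"

# A later release renames 'users' to 'user_list'...
new = json.loads(
    '{"process": {"user_list": [{"cmd": "sshd", "pid": 1337, "fd": 7}]}}'
)

# ...and the same access path now blows up, just like a text consumer
# would on an incompatible format change.
broke = False
try:
    new["process"]["users"][0]["cmd"]
except KeyError:
    broke = True
```

Structured output moves the breakage from "regex stops matching" to "KeyError at runtime", which is arguably easier to diagnose, but it doesn't make the contract any more stable.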
And you're right, PowerShell is far from perfect. I miss some of its design goals, not the whole thing.
And it's a bit sad that in the year of our lord 2025, the best way to get such fundamental information is by using regexes to parse a table[1], generated by a 6000-line C program[2], which is verified by (I hope I'm wrong!) a tiny test suite[3]. OSQuery[4] is also pretty cool, but it builds upon this fragile stack.
That's something I miss from Windows, at least PowerShell has built-in commands that give you structured output.
[1] https://github.com/grigio/network-monitor/blob/9dc470553bfdd...
[2] https://github.com/iproute2/iproute2/blob/main/misc/ss.c
[3] https://github.com/iproute2/iproute2/blob/main/testsuite/tes...
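For what it's worth, the raw kernel interface underneath 'ss' is readable without any of that tooling. A minimal sketch, assuming the standard /proc/net/tcp field layout (local address in field 1 as "hexaddr:hexport", socket state in field 3, "0A" meaning TCP_LISTEN), with the path parameterized so it can be exercised against a saved sample:

```python
def listening_ports(path="/proc/net/tcp"):
    """Return sorted local ports of listening TCP sockets."""
    ports = []
    with open(path) as f:
        next(f)  # skip the column-header line
        for line in f:
            fields = line.split()
            local, state = fields[1], fields[3]
            if state == "0A":  # TCP_LISTEN
                # port is the hex part after the colon
                ports.append(int(local.split(":", 1)[1], 16))
    return sorted(ports)
```

Of course, this is just reinventing a sliver of 'ss' with the same fragility: the /proc text format is the real contract, and everything above it is parsing.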
The sqlite, tkinter, and shelve modules are the ones I find most impressive.
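Agreed, and it's striking how little code two of those take. A quick taste (a contrived sketch, just to show the shape of the APIs):

```python
import os
import shelve
import sqlite3
import tempfile

# sqlite3: a full SQL database, in memory, no dependencies.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT, stars INTEGER)")
con.execute("INSERT INTO t VALUES (?, ?)", ("requests", 50000))
(row,) = con.execute("SELECT name FROM t WHERE stars > 100").fetchall()

# shelve: a persistent dict-like store backed by dbm.
with tempfile.TemporaryDirectory() as d:
    store = os.path.join(d, "store")
    with shelve.open(store) as db:
        db["answer"] = 42
    with shelve.open(store) as db:  # reopen: data survived
        saved = db["answer"]
```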