Yay for "progress" eh? I know the recirculate door servo can stop halfway, just let me control it...
Yay for "progress" eh? I know the recirculate door servo can stop halfway, just let me control it...
Oh, god. It just keeps getting worse. I don't want a fake knob that simulates a real one except it ignores me and decides the fan speed, blend door position, and probably the recirculation door on its own (allowing only a temporary override). I want a fan dial that is electrically connected to the fan, such that the position of the dial directly controls the voltage being sent into the fan motor, with no computer or robotics in between. In other words, a step switch. And I want a slider that is mechanically connected to the blend door for controlling temperature, again, with no computer or robotics in between. And the same for the recirculation door.
There's nothing about large, complex corporate projects that demands that languages impose arbitrary restrictions on code, except the fact that so many corporations insist on hiring incompetent fools as programmers, often to save money in the short run by expanding the potential labor pool. They call them "guardrails", but a better metaphor would be the playpen. If you hire only competent developers, then you don't need to put them in a playpen.
>And I would also add that if you search for "how do you do X in Y language", you'll probably find every combination of a lot of languages so I hardly think that is grounds to dismiss Clojure.
Well yeah, it's pretty much the norm in popular programming languages to make certain things impossible. And programming is driven by fads, so we're going to see more and more of this until it finally goes out of fashion one day and some other fad comes along.
In computing, we emphasize the communicational (i.e., interface) aspects of our code and, in this respect, tend to focus on the role of an "abstraction" in hiding information. But a good abstraction does more than simply hide detail: it generalizes particulars into a new kind of "object" that is easier to reason about.
If you keep this in mind, you'll realize that a prerequisite for a good abstraction is having a lot of particulars from which to identify the shared properties you can abstract away. The best abstractions I've seen have always come into being only after a significant amount of particularized code had already been written; only then can you identify the actual common properties and patterns of use. Conversely, abstractions built upfront, which do little more than hide details or account for potential rather than actual complexity, are typically far more confusing and poorly designed.
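There's no code in the original comment, but a tiny Python sketch (all names are mine, purely illustrative) shows the point: the particulars get written first, and the abstraction is extracted from them afterwards.

    # Hypothetical sketch: the "particulars" get written first...
    def save_users(users, path):
        with open(path, "w") as f:
            for u in users:
                f.write(f"{u['id']},{u['name']}\n")

    def save_orders(orders, path):
        with open(path, "w") as f:
            for o in orders:
                f.write(f"{o['id']},{o['total']}\n")

    # ...and only after seeing them side by side does the shared shape
    # become visible and worth naming:
    def save_rows(rows, fields, path):
        with open(path, "w") as f:
            for row in rows:
                f.write(",".join(str(row[k]) for k in fields) + "\n")

    save_rows([{"id": 1, "name": "Ada"}], ["id", "name"], "users.csv")

Written upfront, save_rows would have been a guess about which fields and formats mattered; extracted afterwards, it names a pattern that demonstrably exists.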
Supporting the third device was handed off to a junior dev. I pointed him at my subclass and said to just do that, we'd figure out the mixing and matching later. But he looked at my subclass like it was written in Greek. He ended up writing his own class that re-imagined the functionality of the superclass and supported the new device (but not the old ones). Integrating this new class into the rest of the codebase would've been nigh impossible, since he also re-implemented some message-handling code, but with only a subset of the original functionality, and what was there was incorrect.
His work came back to me, and I refactored that entire section of the code; this is when the generalization occurred. Instead of a superclass, I took the stuff that had to be inherited and made it its own thing, with the same interface as before. The device-communication part was modeled as drivers: a few simple functions performing the essential operations of each device, implemented once per device type. I kept the junior dev's communication code for the new device but deleted his attempt at re-imagining the superclass. Doing it this way also made it easy to mix and match the devices.
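A minimal Python sketch of that shape (the original code isn't shown in the comment, so these names and signatures are assumptions):

    from typing import Protocol

    # Hypothetical sketch of the structure described above.
    class Driver(Protocol):
        """The essential device operations, implemented once per device type."""
        def send(self, message: bytes) -> None: ...
        def read(self) -> bytes: ...

    class NewDevice:
        """The third device's communication code becomes just another driver."""
        def send(self, message: bytes) -> None:
            pass  # device-specific I/O would go here
        def read(self) -> bytes:
            return b""  # device-specific I/O would go here

    class Channel:
        """The former superclass logic, now a standalone thing owning a driver."""
        def __init__(self, driver: Driver) -> None:
            self.driver = driver

        def request(self, message: bytes) -> bytes:
            # Shared message-handling lives here, written exactly once;
            # mixing and matching devices means passing a different driver.
            self.driver.send(message)
            return self.driver.read()

    channel = Channel(NewDevice())

The win is that supporting a fourth device means writing one small driver, not re-implementing (a subset of) the message-handling logic.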
Who are you people who never have to debug TCP problems? I've had to do it on multiple occasions.
That may be, but since Lisp programmers are easily 10x as productive as ordinary mortals, you can pay them, say, 5x as much and still get a pretty good ROI.
> you can't just hire any idiot
Yeah, well, if you think hiring any idiot is a winning strategy, far be it from me to stand in your way.
They run into the problem that programming is inherently hard, and no amount of finagling with the language can change that, so you have to have someone on every team with actual talent. But the team can be made of mostly idiots, and some of them can be fired next year if LLMs keep improving.
If you use Lisp for everything, you can't just hire any idiot. You have to be selective, and that costs money. And you won't be able to fire them unless AGI is achieved.
Voila, I was verified as an adult because I could prove I had a credit card.
The whole point of mandating facial recognition or ID checks isn't to make sure you're an adult, but to keep records of who is consuming those services and tie their identities back to specific profiles. Providers can swear up and down that they don't retain that information, but they often use third parties who may or may not honor those same promises, especially if the Gov comes knocking with a secret warrant or subpoena.
Biometric validation is surveillance, plain and simple.