As with cab hailing, shopping, social media ads, food delivery, etc.: there will be a whole ecosystem of workflows and companies built around this. Then the prices will start going up, with nowhere to run. Their pricing models are simply not sustainable. I hope everyone realizes that the current LLMs are subsidized, just like Seamless and Uber were in the early days.
In the case of Linux DKMS updates: DKMS recompiles your installed kernel modules to match the new kernel. Sometimes a kernel update also updates the system compiler, and in that case it can be beneficial for performance or stability to have all your existing modules rebuilt with the new compiler version. Either way, the new kernel ships with a new build environment, which DKMS uses to recompile existing kernel modules so they stay consistent and stable with that new kernel and build system.
Also, kernel modules and drivers may have many code paths that should only be run on specific kernel versions. This is called 'conditional compilation', and it's a technique programmers use to develop cross-platform software. Think of it as one set of source files that generates wildly different binaries depending on the machine that compiles it. By recompiling the source after the new kernel is installed, you may end up with a binary drastically different from the one built against the previous kernel. Source code compiled against a 10-year-old kernel might contain different code paths and routines than the same source compiled against the latest kernel.
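To make that concrete, here's a minimal, hypothetical module sketch. The module itself is made up and the 6.5 cutoff is arbitrary, but LINUX_VERSION_CODE and KERNEL_VERSION are the real macros from <linux/version.h> that drivers use for exactly this kind of version gating. The same source file produces different binaries depending on which kernel headers DKMS builds it against:

    #include <linux/module.h>
    #include <linux/printk.h>
    #include <linux/version.h>

    static int __init demo_init(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 5, 0)
            /* Only compiled in when building against 6.5 or newer headers. */
            pr_info("demo: using the newer code path\n");
    #else
            /* Fallback path kept around for older kernels. */
            pr_info("demo: using the legacy code path\n");
    #endif
            return 0;
    }

    static void __exit demo_exit(void)
    {
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

DKMS performs this rebuild automatically for every registered module each time a new kernel lands, which is exactly the CPU spike people notice during updates.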
Compiling source code is incredibly taxing on the CPU, and it takes significantly longer when CPU usage is throttled. Compiling large modules on extremely slow systems can take hours. Managing hardware health and temperatures is mostly a hardware-level decision controlled by firmware on the hardware itself. That is usually abstracted away from software developers, who need to be certain that the machine running their code is functional and stable enough to run it. This is why we have "minimum hardware requirements."
Imagine if every piece of software contained code to monitor and manage CPU cooling. You'd have programs fighting each other over hardware priorities, and different control systems, some more effective and secure than others. Instead, the hardware is designed to do this job intrinsically, and developers are free to focus on the output of their code on a healthy, stable system. If a particular system is not stable, that falls on the administrator of that system. By separating responsibility between software, hardware, and implementation, we get clear boundaries around who cares about what, and a cohesive operating environment.
Imagine you are driving a car and from time to time, without any warning, it suddenly starts accelerating and decelerating aggressively. Your powertrain, engine, and brakes accumulate wear and tear, and oh, at random that car also spins out and rolls, killing everyone inside (data loss).
This is roughly how current unattended upgrades work.
I'd like to add my reasoning for a similar failure of an HP ProLiant server I encountered.
Sometimes hardware can fail during a long uptime and not become a problem until the next reboot. Consider a piece of hardware with 100 features. During typical use, the hardware may only use 50 of those features. Imagine one of the unused features has failed. This would not cause a catastrophic failure during typical use, but on startup (which rarely occurs) that feature is necessary and the system will not boot without it. If it could get past boot, it could still perform its task, because the damaged feature is not needed for normal operation. But it can't get past the boot phase, where the feature is required.
Tl;dr the system actually failed months ago and the user didn't notice because the missing feature was not needed again until the next reboot.
They involve heavy CPU use and stress the whole system completely unnecessarily; the system easily sees the highest temperatures the device has ever seen during these stress tests. If something fails or gets corrupted under that strain, it's a system-level corruption...
Incidentally, Linux kernel upgrades are no better. During DKMS updates the CPU load skyrockets, and then a reboot is always sketchy. There's no guarantee that something won't go wrong; a Secure Boot issue after a kernel upgrade in particular could be a nightmare.
Where would one find some reddit users willing to do such reviews, by the way?
> An LLM wants to agree with both, it created plausible arguments for both. While giving "caveats" instead of counterarguments.
My hypothesis is that LLMs are trained to be agreeable and helpful because many of their use cases involve taking orders and doing what the user wants. Additionally, some people and cultures have conversational styles where requests are phrased like neutral questions to be polite.
It would be frustrating for users if they asked questions like “What do you think about having the background be blue?” and the LLM went off and said “Actually, red is a more powerful color, so I’m going to change it to red”. So my hypothesis is that the LLM training sets and training are designed to maximize agreeableness and have the LLM reflect the tones and themes in the prompt, while discouraging disagreement. This is helpful when trying to get the LLM to do what you ask, but frustrating for anyone expecting a debate partner.
You can, however, build a pre-prompt that sets expectations for the LLM. You could even write a prompt asking it to debate everything with you, and then ask your questions.
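For example (my own phrasing, nothing official), a pre-prompt along the lines of “Before agreeing with anything I propose, give me the strongest counterargument you can, and tell me plainly when you think I'm wrong” should push it toward actual pushback instead of polite caveats.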
I'd guess that, in practice, a benchmark (like this vibesbench) that can catch unhelpful and blatantly sycophantic failures may help.
> The word "prig" isn't very common now, but if you look up the definition, it will sound familiar.
I think that's similar to when politicians try to "be like the people". I think "normal people", and children, prefer that their "betters" are actually examples of something better.
Only much later did I read Understanding Media, Amusing Ourselves to Death, etc., and understand that the prior shift from print to the "serious local nightly news program" was itself a loss of focused, serious journalism.
For today's youth, TikTok is "the air we breathe" - the de facto standard against which the future will be judged. It's horrifying to imagine what will be worse.
Without being sucked into doomscrolling and content consumption? Produce content? I'd guess it should be possible to play with the thing somehow...
I applaud the initiative, but it’s naive to think this’ll change anything. And when push comes to shove, these people won’t quit their comfy jobs in this economic climate.