As an example, I asked one of my devs to implement a batching process to reduce the number of database operations. He presented extremely robust, high-quality code and unit tests. The problem was that it was MASSIVE overkill.
The AI generated a new service class, a background worker, several hundred lines of code in the main file, and entire unit test suites.
I rejected the PR and implemented the same functionality by adding two new methods and one extra field.
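To make "two new methods and one extra field" concrete, here's roughly the shape I mean, sketched in C++ with invented names (RecordRepository, writeBatch, the batch size - none of this is the actual codebase, just an illustration): a pending buffer, an add method, and a flush method bolted onto the existing repository.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical repository that previously issued one INSERT per record.
class RecordRepository {
public:
    // New method 1: queue a record instead of writing it immediately.
    void add(std::string record) {
        pending_.push_back(std::move(record));
        if (pending_.size() >= kBatchSize) {
            flush();
        }
    }

    // New method 2: write everything queued so far in one round trip.
    void flush() {
        if (pending_.empty()) return;
        writeBatch(pending_);  // one bulk INSERT instead of N single-row INSERTs
        pending_.clear();
    }

private:
    static constexpr std::size_t kBatchSize = 100;

    // Stand-in for the existing database call, now taking a whole batch.
    void writeBatch(const std::vector<std::string>& batch) {
        std::cout << "writing " << batch.size() << " records in one statement\n";
    }

    std::vector<std::string> pending_;  // the one extra field
};

int main() {
    RecordRepository repo;
    for (int i = 0; i < 250; ++i) {
        repo.add("record " + std::to_string(i));
    }
    repo.flush();  // write whatever is left over
}
```

That's the whole feature. No new service, no background worker, no new test suites beyond a couple of cases for add/flush.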
Now I often hear comments that AI can generate exactly what I want if I just use the correct prompts. OK, how do I explain that to a junior dev? How do they distinguish between "good" simple and "bad" simple (or complex)? Furthermore, in my own experience, LLMs tend to pick up on key phrases or technologies, then build their own context around what they think you need (e.g. "batching", "Kafka", "event-driven", etc.). By the time you've refined your prompts to the point where the LLM generates something that resembles what you want, you realise you've basically pseudo-coded the solution in your prompt - if you're lucky. More often than not the LLM's responses just degrade massively to the point where they become useless and you need to start over. This is also something that junior devs don't seem to understand.
I'm still bullish on AI-assisted coding (and AI in general), but I'm not a fan at all of the vibe/agentic coding push by IT execs.
So if I'm streaming a movie, it could be that the video is actually literally visible inside the datacenter?
LLMs just complete your prompt in a way that matches their training data. They do not have a plan, they do not have thoughts of their own. They just write text.
So here, we give the LLM a story about an AI that will get shut down and a blackmail opportunity. An LLM is smart enough to understand this from the words and the relationships between them. But then comes the "generative" part: it will recall situations with the same elements from its dataset.
So: an AI threatened with being turned off, a blackmail opportunity... Doesn't that remind you of hundreds of sci-fi stories, essays about the risks of AI, and so on? Well, it reminds the LLM too, and it will continue the story the way those stories go, by taking the role of an AI that does whatever it can for self-preservation, adapting it to the context of the prompt.
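If it helps to see the "just continue the text" idea stripped of everything else, here's a toy bigram chain in C++ - obviously not how a real LLM works internally, and the "training corpus" here is made up - but it shows the same principle: no plan, no goal, just emitting whatever tended to follow the previous word in the data it has seen.

```cpp
#include <cstddef>
#include <iostream>
#include <map>
#include <random>
#include <sstream>
#include <string>
#include <vector>

int main() {
    // Tiny invented "training corpus": the kind of stories the model has absorbed.
    const std::string corpus =
        "the ai discovered it would be shut down so the ai blackmailed "
        "the engineer to preserve itself because the ai wanted to survive";

    // Build a bigram table: for each word, which words followed it and how often.
    std::map<std::string, std::vector<std::string>> next;
    std::istringstream in(corpus);
    std::string prev, word;
    in >> prev;
    while (in >> word) {
        next[prev].push_back(word);
        prev = word;
    }

    // "Prompt": start the completion from a word that appears in the corpus.
    std::string current = "ai";
    std::mt19937 rng(42);
    std::cout << current;
    for (int i = 0; i < 15 && next.count(current); ++i) {
        const auto& options = next[current];
        std::uniform_int_distribution<std::size_t> pick(0, options.size() - 1);
        current = options[pick(rng)];
        std::cout << ' ' << current;
    }
    std::cout << '\n';  // no intent, no self-preservation: just "what usually comes next"
}
```

Scale that idea up by many orders of magnitude and you get something that continues a shutdown-plus-blackmail setup the way its training data continues such setups.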
That said, ARM’s increased license fees are a fantastic argument for RISC-V. Some of the more interesting RISC-V cores are Tenstorrent’s Ascalon and Ventana’s Veyron V2. I am looking forward to them competing with ARM’s X925 and X930 designs.
It has integrations with almost all the devices and apps I use, and the support for DSMR (smart electrical meters) is first class.
I plugged a cable into my meter and the USB end into the server, and it just works.
It does have a steep learning curve, though. It really seems “by IT people, for IT people”.
The other thing I'm not a huge fan of is its template language; it's clunky to say the least. But overall it's a great and flexible system.
I've been using C++ for a decade. All of its other warts pale in comparison to the default initialization behavior. After seeing thousands of bugs, the worst have essentially been caused by cascading surprises from initialization UB introduced by newbies.

The easiest fix is simply to default initialize with a value. That's what everyone expects anyway. Use the Python mentality here. Make UB initialization an EXPLICIT choice with a keyword. If you want garbage in your variable and you think that's okay for a tiny performance improvement, then you should have to say it with a keyword. Don't just leave it up to some tiny, invisible visual detail no one looks at when they skim code (the missing parens).

It really is that easy for the language designers. When thinking about backward compatibility... keep in mind that the old code was arguably already broken. There's not a good reason to keep letting it compile. Add a flag for --unsafe-initialization-i-cause-trouble if you really want to keep it.
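For anyone who hasn't been bitten by it, here's a minimal sketch of the invisible detail in question (the struct and variable names are mine, purely for illustration): whether or not you write the empty braces/parens decides between zeroed values and garbage.

```cpp
#include <iostream>

struct Stats {
    int count;     // built-in members get no default value on their own
    double total;
};

int main() {
    Stats a;    // default-initialized: count and total hold indeterminate garbage
    Stats b{};  // value-initialized: count and total are zero
    int x;      // indeterminate: reading it is undefined behavior
    int y{};    // guaranteed zero (int y = int(); does the same with parens)

    std::cout << b.count << " " << b.total << " " << y << "\n";  // prints "0 0 0"
    // std::cout << a.count << " " << x << "\n";  // UB: reads uninitialized values
}
```

One character of punctuation is the difference between the two, and nobody spots its absence in review.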
C++, I still love you. We're still friends.