https://www.theguardian.com/world/2025/aug/26/indonesia-prot...
I see the development advantages of Wayland, but not the practical advantages as a user. And even as a developer, X11 is stable and well known (albeit definitely weird in places).
At the end of the day, things worked perfectly on X11, while my audio, video, and various apps still glitch a lot on Wayland even after all these years. Most of that is not exactly Wayland's fault, but it highlights the advantage of X11. It's the devil you know (and everyone has worked out a lot of edge cases for).
Wow, this is really extreme. We certainly got to this point faster than I expected.
However, I disagree that LLMs are anywhere near as good as what's described here for most things I've worked with.
So far, I'm pretty impressed with Cursor as a toy, but it's not a usable tool for me. I haven't used Claude a ton, though I've seen co-workers use it quite a bit. Maybe I'm just not embracing the full "vibe coding" thing enough and not letting AI agents fully run wild.
I will concede that Claude and Cursor have gotten quite good at frontend web development generation. I don't doubt that there are a lot of tasks where they make sense.
However, I still have yet to see a _single_ example of any of these tools working for my domain. Every single case, even when the folks who are trumpeting the tools internally run the prompting/etc, results in catastrophic failure.
The ones people trumpet internally are cases where folks can't be bothered to learn the libraries they're working with.
The real issue is that people who aren't deeply familiar with the domain don't notice the problems with the changes LLMs make. They _seem_ reasonable. Essentially by definition.
Despite this, we are being all but forced to use AI tooling on critical production scientific computing code. Various higher-level execs and managers have told me that I should never be editing code directly and that I must use AI tooling. Doing so is 10x to 100x slower than making the changes directly. I don't have boilerplate. I do care about knowing what things do, because I need to communicate that to customers and predict how changes to parameters will affect output.
I keep hearing things described as an "overactive intern", but I've never seen an intern this bad, and I've seen a _lot_ of interns. Interns don't make 1000 line changes that wreck core parts of the codebase despite being told to leave that part alone. Interns are willing to validate the underlying mathematical approximations to the physics and are capable of accurately reasoning about how different approximations will affect the output. Interns understand what the result of the pipeline will be used for and can communicate that in simple terms or more complex terms to customers. (You'd think this is what LLMs would be good at, but holy crap do they hallucinate when working with scientific terminology and jargon.)
Interns have PhDs (or in some cases, are still in grad school, but close to completion). They just don't have much software engineering experience yet. Maybe that's the ideal customer base for some of these LLM/AI code generation strategies, but those tools seem especially bad in the scientific computing domain.
My bottleneck isn't how fast I can type. My bottleneck is explaining to a customer how our data processing will affect their analysis.
(To our CEO) - Stop forcing us to use the wrong tools for our jobs.
(To the rest of the world) - Maybe I'm wrong and just being a luddite, but I haven't seen results that live up to the hype yet, especially within the scientific computing world.
Does that mean Myanmar is now an active zone?
Scientific sensors want as "square" a spectral response as possible. That's quite different from the human eye's response. Getting a realistic RGB visualization from a sensor is very much an art form.
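To give a flavor of what that art form involves, here's a minimal, purely illustrative Python sketch of turning three sensor bands into a display RGB image. The band weights, percentile stretch, and gamma value here are assumptions for the example, not any real sensor's calibration:

```python
import numpy as np

def bands_to_rgb(red_band, green_band, blue_band,
                 weights=(1.0, 0.9, 1.2),   # assumed per-band gains to nudge toward an eye-like response
                 clip_percentiles=(2, 98),  # assumed contrast stretch
                 gamma=2.2):
    """Stack sensor bands into an 8-bit RGB image with a simple stretch + gamma."""
    rgb = np.stack([red_band, green_band, blue_band], axis=-1).astype(np.float64)
    rgb *= np.asarray(weights)

    # Percentile stretch: raw radiance/reflectance values rarely fill a display's range.
    lo, hi = np.percentile(rgb, clip_percentiles)
    rgb = np.clip((rgb - lo) / (hi - lo), 0.0, 1.0)

    # Gamma correction so midtones don't render unnaturally dark on screen.
    rgb = rgb ** (1.0 / gamma)
    return (rgb * 255).astype(np.uint8)

# Example with synthetic 100x100 bands:
bands = [np.random.rand(100, 100) for _ in range(3)]
img = bands_to_rgb(*bands)
```

And that's the easy case, where the sensor even has bands that roughly correspond to red, green, and blue; many scientific instruments don't, which is where most of the artistry happens.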
Offtopic from the security issue, but I wonder if they really get any value out of this "Personality test." It seems like it's just a CAPTCHA that makes sure the applicant knows when to lie correctly.
The one that annoyed me enough to decide it wasn't worth switching from McD's to Arby's was "would you rather read a book or talk to a person?". I mean, I get it, they want people-focused people, but being introverted and/or just liking books doesn't mean you can't give excellent customer service.
Sure, it's easy to guess what they want most of the time, but the fact that personality tests are as widespread as they are in employment is maddening.
Many years later I worked at Chevron (upstream as an exploration geologist -- not a gas station). While they didn't do it as part of the application process, you were required to take a personality/communication-style test when you started (ecolors). That's all well and good (it _is_ very useful to understand personality and communication styles), but in a lot of roles you literally had to wear the colors on your badge.

If you wanted to go into management, you essentially had to score "red over yellow". "Greens" and "blues" were considered to be limited to technical roles and were explicitly not given opportunities to advance, though it took a long time to realize that. I started out thinking "hey, this is actually practical" and then over a few years went to "oh, they're using this to decide who moves up... That's a problem". I asked around and was told by my manager's manager that ecolors were explicitly used in advancement criteria and in deciding who got opportunities to lead projects/etc. That's around the time I left.

I hear they've dialed that particular bit back a lot, but it's still very weird to me that it's considered a normal and acceptable practice.
Something in our diets?