Look, I get that people use the tools they use, and Perl is fine, I guess; it does its job. But if you use it, you can safely expect to be mocked for prioritizing string operations or whatever else Perl offers over writing code anyone born after 1980 can read, let alone is willing to modify.
For such a social enterprise, open source orgs can be surprisingly daft when it comes to the social side of tool selection.
Would this tool be harder to write in Python? Probably. Is it a smart idea to use Python regardless? Absolutely. The aesthetics of Perl are an absolute dumpster fire. Larry Wall deserves prosecution for his crimes.
Something like this would be justified if the maintainers were unresponsive and it was a remotely exploitable bug. Now it turns out this is probably a minor thing (local privilege escalation if you happen to be running atop as a privileged user).
It seems to me like an irresponsible, egocentric way to handle things.
Maintaining software is hard, but this does not imply a right to be babied. People should simply lower their expectations of security to match reality. Vulnerabilities happen and only extremely rarely do they indicate personal flaws that should be held against the person who introduced it. But it's your job to fix them. Stop complaining.
1. Anecdotally, AI agents feel stuck circa 2021. If I install newer packages, Claude will revert to the outdated packages and implementations that were popular four years ago. This is incredibly frustrating to watch and correct for. Providing explicit instructions for which packages to use mitigates the problem, but doesn't solve it.
2. The unpredictability of these missteps makes them particularly challenging. A few months ago, I used Claude to "one-shot" a genuinely useful web app. It was fully featured and surprisingly polished. Alone, I think it would've taken a couple weeks or weekends to build. But, when I asked it to update the favicon using a provided file, it spun uselessly for an hour (I eventually did it myself in a couple minutes). A couple days ago, I tried to spin up another similarly scoped web app. After ~4 hours of agent wrangling I'm ready to ditch the code entirely.
3. This approach gives me the brazenness to pursue projects that I wouldn't have the time, expertise, or motivation to attempt otherwise. Lower friction is exciting, but building something meaningful is still hard. Producing a polished MVP still demands significant effort.
4. I keep thinking about The Tortoise and The Hare. Trusting the AI agent is tempting because progress initially feels so much faster. At the end of the day, though, I'm usually left with the feeling I'd have made more solid progress with slower, closer attention. When building by hand, I rarely find myself backtracking or scrapping entire approaches. With an AI-driven approach, I might move 10x faster but throw away ~70% of the work along the way.
> These experiences mean that by no stretch of my personal imagination will we have AI that writes 90% of our code autonomously in a year. Will it assist in writing 90% of the code? Maybe.
Spot on. Current environment feels like the self-driving car hype cycle. There have been a lot of bold promises (and genuine advances), but I don't see a world in the next 5 years where AI writes useful software by itself.
> There have been a lot of bold promises (and genuine advances), but I don't see a world in the next 5 years where AI writes useful software by itself.
I actually think the opposite: that within five years, we will be seeing AI one-shot software, not because LLMs will experience some kind of revolution in auditing output, but because we will move the goalposts to ensure the rough spots of AI are massaged out. Is this cheating? Kind of, but any effort to do this will also ease humans accomplishing the same thing.
It's entirely possible, in other words, that LLMs will force engineers to be honest about the ease of tasks they ask developers to tackle, resulting in more easily composable software stacks.
I also believe that use of LLMs will force better naming of things. Much of the difficulty of complex projects comes from simply tracking the existence and status of all the moving parts and the wires that connect them. It wouldn't surprise me at all if LLMs struggle to manage without a clear shared ontology (that we naturally create and internalize ourselves).
I no longer subscribe to Apple Music, but I have 169.5 days of purchased music that I can stream anywhere. Tracks that exactly match an entry in their catalog are presumably just represented as a pointer, and tracks which aren't available in Apple Music are uploaded.
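That "pointer or upload" locker model can be sketched roughly as follows. Everything here is hypothetical illustration, not Apple's actual implementation: the catalog IDs and the exact-metadata matching rule are made up (real services match on audio fingerprints as well as metadata):

```python
# Hypothetical sketch of a music locker: tracks that match a catalog
# entry are stored as a cheap pointer (a catalog ID shared by every
# subscriber), while unmatched tracks require uploading the user's file.

CATALOG = {("Some Artist", "Track A"): "cat-001"}  # made-up catalog IDs

def locker_entry(artist, title):
    """Decide how to represent one purchased track in the user's locker."""
    key = (artist, title)
    if key in CATALOG:
        # Exact match: no storage needed, just reference the catalog copy.
        return {"kind": "pointer", "catalog_id": CATALOG[key]}
    # No match: the user's own file has to be stored.
    return {"kind": "upload"}
```

The design appeal is deduplication: a pointer costs almost nothing server-side, so only the long tail of unmatched tracks consumes per-user storage.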
It was half-baked when they shipped it and it still has the same launch bugs. Still, it's better than Spotify.
They don't.
Your money goes into a pot that also includes every radio station, every TV channel, every song used in an ad, etc. Every month (or year), your local music rights association (there's one in every country) sums up the play counts and redistributes the pot accordingly.
This is all to say that the only people that actually get paid from this pot of money are either gonna be domestic musicians or global top 40. Even if your tracks are super famous in clubs and every DJ that plays it accurately reports that, it's still very unlikely you're gonna make the payment cut. Even a relatively minor hit on the radio is valued more than a super-popular song in clubs because it is guaranteed to have a higher play count.
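The redistribution described above is essentially a pro-rata split over reported play counts. A minimal sketch with made-up numbers (the function and the figures are illustrative, not any rights association's actual formula) shows why a minor radio hit outearns a club favourite:

```python
# Illustrative pro-rata royalty split: the whole pot is divided purely
# by reported play counts, so high-volume broadcast plays dominate.

def distribute_pot(pot, play_counts):
    """Split `pot` proportionally to each track's reported plays."""
    total_plays = sum(play_counts.values())
    return {track: pot * plays / total_plays
            for track, plays in play_counts.items()}

# Made-up numbers: many stations repeating a song all day rack up far
# more plays than a few hundred DJs each reporting it once per night.
payouts = distribute_pot(1_000_000, {
    "minor_radio_hit": 50_000,
    "club_favourite": 2_000,
})
```

With these numbers the radio hit takes roughly 96% of the pot, which is the "broken" outcome the thread is complaining about: context and audience size per play never enter the formula, only the raw count.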
Well that just seems broken.
I would download it, but I wanted to be productive today.