Readit News
skydhash commented on Developer's block   underlap.org/developers-b... · Posted by u/todsacerdoti
hoistbypetard · 11 hours ago
I can feel this block, especially when I'm starting a new project.

Two things that help me:

* have a good boilerplate

* ship things that do nothing

i.e. I find it helps to start a project using my good boilerplate then set up builds and releases (so for web projects, put them online) so that the page doesn't look so blank anymore, and I can see my progress in "releases" even if they're just for me/others contributing.
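The "ship things that do nothing" idea can be sketched in a few lines. Everything below (project name, page text, port) is illustrative, not from the comment: a placeholder page that can be built and released before any feature exists, so the project stops looking blank.

```python
# A hypothetical first "release" for a web project: a page that
# admits it does nothing yet. Names and text are made up.
from http.server import BaseHTTPRequestHandler, HTTPServer

def placeholder_page(version="0.0.1"):
    """The whole app, for now."""
    return f"<h1>my-project v{version}</h1><p>Nothing here yet.</p>"

class Placeholder(BaseHTTPRequestHandler):
    def do_GET(self):
        body = placeholder_page().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Putting it online is one more line (commented out so the sketch
# doesn't block when run):
# HTTPServer(("", 8000), Placeholder).serve_forever()
```

From here, "releases" are just commits that replace the placeholder with something slightly less empty.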

skydhash · 10 hours ago
I kinda started programming on IDEs (Visual Studio, Eclipse, then Android Studio), and the templates they provide are a nice way to get started quickly on a project without having to worry about configuration. These days, I prefer CLI tooling, so I copy things over from projects on GitHub.
skydhash commented on From M1 MacBook to Arch Linux: A month-long experiment that became permanent   ssp.sh/blog/macbook-to-ar... · Posted by u/articsputnik
mrtksn · 13 hours ago
macOS is quite popular among tech-literate people too; it's almost the default OS for most techies.
skydhash · 11 hours ago
A lot of those only have surface-level knowledge about tech, especially reviewers.
skydhash commented on From M1 MacBook to Arch Linux: A month-long experiment that became permanent   ssp.sh/blog/macbook-to-ar... · Posted by u/articsputnik
herbst · 14 hours ago
Same experience here. I wanted to like it; after all, it appears to be exactly what I want: a professional, stable Unix system with enterprise support.

I was and still am surprised that I found nothing of the sort; even the Ubuntu or Fedora communities look more "enterprise ready" to me these days.

skydhash · 11 hours ago
macOS is actively trying to hide everything Unix these days, and almost all the good features require building an app to access them (or buying one).
skydhash commented on Developer's block   underlap.org/developers-b... · Posted by u/todsacerdoti
lordnacho · 12 hours ago
I hate to turn everything into a conversation about AI, but this essay maybe explains best what LLMs have done for me recently.

Particularly the first part. I want to add a new feature, but I want to keep things clean. It needs tests, CI, documentation.

It makes exploring new ideas a bit cumbersome, because code tends to create minor distractions that eat up time. You miss a semicolon, or you forget the order of the arguments to a function you just wrote, so you have to flip to another file. Or the test framework needs an update, but the update breaks something, so you have to make some changes. It's not just the time, either; it's the context switch from the big picture to the very small details, and then back again.

The LLM lets me do "one whole step" at a time. Or that's the positive spin on it. Seen another way, I'm further from the details, and in most things you have to hit a bit of a wall to really learn. For senior devs, I lean towards the first: you've already learned what you're going to learn from fixing imports; you're operating at a higher level.

skydhash · 12 hours ago
I prefer to keep things granular. Like: add the endpoint; validate the input; return sample data; connect to the db and return something from it; …. It's easier to go with small wins. I have the plan/design/architecture to keep me pointed in the right direction.
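Those granular steps can be sketched as plain functions, each stage a small win that can be committed on its own. The endpoint wiring is left out, and the names (get_user_sample, SAMPLE_USER, the users table) are made up for illustration:

```python
import sqlite3

SAMPLE_USER = {"id": 1, "name": "sample"}

def validate_user_id(raw):
    """Step: validate the input."""
    user_id = int(raw)  # raises ValueError on non-numeric input
    if user_id <= 0:
        raise ValueError("user id must be positive")
    return user_id

def get_user_sample(raw):
    """Step: return sample data, before any storage exists."""
    validate_user_id(raw)
    return SAMPLE_USER

def get_user_db(raw, conn):
    """Step: connect to the db and return something from it."""
    user_id = validate_user_id(raw)
    row = conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return {"id": row[0], "name": row[1]} if row else None
```

Each function replaces the previous one as the handler behind the same endpoint, so the plan stays visible in the commit history.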
skydhash commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
ACCount37 · a day ago
The math covers the low level decently well, but you run out of it quickly. A lot of it fails to scale, and almost all of it fails to capture the high-level behavior of modern AIs.

You can predict how some simple, narrow edge-case neural networks will converge, but this doesn't extend to frontier training runs, or even to the kind of runs you can do at home on a single GPU. And that's one of the better-covered areas.

skydhash · a day ago
You can’t predict because the data is unknown before training. Training is computation based on math, the results are the weights, and every further computation is also math-based. The result can be surprising, but there’s no fairy dust here.
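A toy illustration of that point, with everything (model, data, hyperparameters) made up: training as plain gradient descent on a least-squares loss for a one-parameter model y = w * x. Given the same data and starting weight, the resulting weights are bit-identical every run; only the data was unknown beforehand.

```python
def train(xs, ys, w=0.0, lr=0.01, steps=1000):
    """Deterministic gradient descent on mean squared error."""
    for _ in range(steps):
        # gradient of MSE with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by y = 2x, so the "true" w is 2
```

Run train(xs, ys) twice and the weights match exactly; the surprise, if any, lives in the data.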
skydhash commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
sgc · a day ago
If it takes 10x the time to do something, did you learn 10x as much? I don't mind repetition; I learned that way for many years and it still works for me. I recently made a short program using AI assist in a domain I was unfamiliar with. I iterated probably 4x. Iterations were based on learning about the domain, both from the AI results that worked and from researching the parts that seemed extraneous or wrong. It was fast, and I learned a lot. I would have learned maybe 2x more doing it all from scratch, but it would have taken at least 10x the time and effort to reach the result, because there was no good place to immerse myself. To me, that is still useful learning, and I can do it 5x before I have spent the same amount of time.

It comes back to other people's comments about acceptance of the tooling. I don't mind the somewhat messy learning methodology; I can still wind up at a good result quickly, and learn. I don't mind that I have to sort of beat the AI into submission. It reminds me a bit of part lecture, part lab work. I enjoy working out where it failed and why.

skydhash · a day ago
The fact is that most people skip learning about what works (learning is not mentally cheap). I’ve seen teammates just trying stuff for days until something kinda works, instead of spending 30 minutes doing research. LLMs are good at producing something that merely looks correct, which wastes the reviewer’s time; it’s harder to review something than to write it from scratch.

Learning is also exponential: the more you do it, the faster it gets, because you may already have the foundations for that particular bit.

skydhash commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
eru · a day ago
You could more or less use the same reasoning to argue for why humans can't write software.

And you'd be half-right: humans are extremely unreliable, and it takes a lot of safeguards, automated testing, PR reviews, etc. to get reliable software out of humans.

(Just to be clear, I agree that current models aren't exactly reliable. But I'm fairly sure with enough resources thrown at the problem, we could get reasonably reliable systems out of them.)

skydhash · a day ago
There are a lot of projects with only one or two people behind them that produce widely used software. I have yet to see big tech produce something actually useful with one of those agents they’re touting.
skydhash commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
ACCount37 · 2 days ago
Good luck trying to use theory from the 1940s to predict modern ML. And if theory has little predictive power, then it's of little use.

There's a reason why so many "laws" of ML are empirical - curves fitted to experimental observation data. If we had a solid mathematical backing for ML, we'd be able to derive those laws from math. If we had solid theoretical backing for ML, we'd be able to calculate whether a training run would fail without actually running it.

People say this tech is mysterious because it is mysterious. It's a field where practical applications are running far ahead of theory. We build systems that work, and we don't know how or why.

skydhash · a day ago
We have solid mathematical backing for it. But what we are seeking is not what the math tells us; it's the hope that what it tells us is sufficiently close to the TRUTH. Hence the pervasive presence of errors and loss functions.

We know it’s not the correct answer, but something close is better than nothing. (Though "close" can be awfully far, which is worse than nothing.)
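One way to picture the loss-function point, with numbers that are purely illustrative: fit a straight line by least squares to points sampled from y = x². The loss is small where the data is, but "close" becomes awfully far outside it.

```python
def fit_line(points):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# training data: y = x^2 sampled on [0, 1]
train_pts = [(k / 10, (k / 10) ** 2) for k in range(11)]
a, b = fit_line(train_pts)
```

On the training range the squared error is tiny; evaluate the same line at x = 10 and it predicts roughly 9.85 against a true value of 100. The loss measured only where we had data said "close"; the truth elsewhere disagreed.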

skydhash commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
thorum · 2 days ago
My point was more “humans are used to tools that don’t always work and can be used in creative ways” than “no human invention has ever been rigid and reliable”.

People on HN regularly claim that LLMs are useless if they aren’t 100% accurate all the time. I don’t think this is true. We work around that kind of thing every day.

With your examples:

- Before computers, fraud and human error were common in the banking system. We designed a system that was resilient against this and mostly worked, most of the time, well enough for most purposes, even though it was built on an imperfect foundation.

- Highly precise clocks are a recent invention. For regular people 200 years ago, one person’s clock would often be 5-10 minutes off from someone else’s. People managed to get things done anyway.

I’ll grant you that Roman aqueducts, seasons and the sun are much more reliable than computers (as are all the laws of nature).

skydhash · a day ago
Isn’t technology the invention of more reliable and precise tools?
skydhash commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
paufernandez · 2 days ago
Yes, and each person has a different perception of what is "good enough". Perfectionists don't like AI code.
skydhash · a day ago
My main reason is: why should I try twice or more when I can do it once and expand my knowledge? It's not like I have to produce something right now.

u/skydhash

Karma: 3923 · Cake day: April 24, 2019