Readit News
scott_s commented on I ignore the spotlight as a staff engineer   lalitm.com/software-engin... · Posted by u/todsacerdoti
throwaway894345 · 16 days ago
Yeah, a couple years ago I built a system that undergirded what was at the time a new product but which now generates significant revenue for the company. That system is shockingly reliable, to the extent that few at the company know it exists, and those who do take its reliability for granted. It's not involved in any cost or reliability fires, so people never really have to think about how impressive this little piece of software really is, or about all the things they don't need to worry about because it's chugging along, doing its job, silently recovering from connectivity issues, database maintenance, etc., without any real issue or maintenance.

It's a little bit of a tragic irony that the better a job you do, the less likely it is to be noticed. (:

scott_s · 15 days ago
Gather metrics and regularly report them.
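For instance, a minimal sketch of that habit (the event names and reporting channel are made up for illustration):

    import time
    from collections import Counter

    # Hypothetical sketch: count the quiet successes so they show up somewhere visible.
    events = Counter()

    def record(event: str) -> None:
        events[event] += 1

    def report() -> None:
        # In practice this would feed a dashboard or a periodic email, not stdout.
        print(time.strftime("%Y-%m-%d %H:%M:%S"), dict(events))

    record("messages_processed")
    record("reconnects_recovered")
    record("messages_processed")
    report()

Even something this simple, reported on a schedule, turns invisible reliability into a number people can see.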
scott_s commented on What Killed Perl?   entropicthoughts.com/what... · Posted by u/speckx
rdtsc · a month ago
Python killed Perl.

By the time Perl 6 was around, Perl's lunch was already eaten by Python. Only a few table scraps left. Perl 6 would have had to be a better Perl 5 and a better Python 2 to win.

Python came with better batteries and better syntax. It allowed producing code you could read and understand a week later. Perl, I found, was a write-only language for me. When I went back to look at my old Perl code, I couldn't decipher it without some effort.

And Python became popular not just because it was a better Perl, but because it attracted folks who used Java and C++. CPU speeds were getting fast enough that you could actually do file and network IO at acceptable speeds without all the `public static void main(String[] args)` and `System.out.println(...)` boilerplate, but you still had all the object-oriented bits like inheritance and composition, which you could go crazy with if you wanted.

scott_s · a month ago
Agreed. In grad school, I used Perl to script running my benchmarks, post-process my data and generate pretty graphs for papers. It was all Perl 5 and gnuplot. Once I saw someone do the same thing with Python and matplotlib, I never looked back. I later started using Python professionally, as, I believe, did lots of other people after similar epiphanies. And not just from Perl, but from different languages and domains.
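A minimal sketch of that second workflow, assuming only matplotlib and a CSV of benchmark results (the file and column names are made up):

    import csv
    import matplotlib.pyplot as plt

    # Hypothetical benchmark output: a CSV with "threads" and "throughput" columns.
    threads, throughput = [], []
    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            threads.append(int(row["threads"]))
            throughput.append(float(row["throughput"]))

    plt.plot(threads, throughput, marker="o")
    plt.xlabel("Threads")
    plt.ylabel("Throughput (ops/s)")
    plt.title("Benchmark scaling")
    plt.savefig("scaling.pdf")  # vector output drops straight into a paper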

I think the article's author is implicitly not considering that people who were around when Perl was popular, who were perfectly capable of "understanding" it, actively decided against it.

scott_s commented on Scientist exposes anti-wind groups as oil-funded, now they want to silence him   electrek.co/2025/08/25/sc... · Posted by u/xbmcuser
testhest · 4 months ago
Wind is only useful up to a point: once it gets above 20% of generation capacity, ensuring grid stability becomes expensive, either through huge price swings or grid-level energy storage.
scott_s · 4 months ago
That's true of all renewable energy sources. So we should take advantage of all of them, as much as is feasible.
scott_s commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
snowfield · 4 months ago
AI training doesn't work like that. You don't train it on context; you train it on recognition and patterns.
scott_s · 4 months ago
You train on data. Context is also data. If you want a model to have certain data, you can bake it into the model during training, or provide it as context during inference. But if the "context" you want the model to have is big enough, you're going to want to train (or fine-tune) on it.

Consider that you're coding a Linux device driver. If you ask for help from an LLM that has never seen the Linux kernel code, has never seen a Linux device driver and has never seen any of the documentation from the Linux kernel, you're going to need to provide all of this as context. That's going to be both onerous for you and possibly not even feasible. But if the LLM has already seen all of that during training, you don't need to provide it as context. Your context may be as simple as "I am coding a Linux device driver" and showing it some of your code.
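To make the "onerous" part concrete, here's a rough sketch of assembling that context by hand; the file paths are hypothetical and no particular LLM API is assumed:

    from pathlib import Path

    # Hypothetical paths: your driver code, the kernel headers it touches, and the docs.
    SOURCES = [
        "drivers/char/my_driver.c",
        "include/linux/cdev.h",
        "Documentation/driver-api/",
    ]

    def build_context(paths):
        chunks = []
        for p in paths:
            path = Path(p)
            if not path.exists():
                continue
            files = [f for f in path.rglob("*") if f.is_file()] if path.is_dir() else [path]
            for f in files:
                chunks.append(f"// {f}\n{f.read_text(errors='ignore')}")
        return "\n\n".join(chunks)

    prompt = build_context(SOURCES) + "\n\nI am coding a Linux device driver. Why does probe() fail?"
    print(f"Prompt is roughly {len(prompt) // 4} tokens")  # crude 4-characters-per-token estimate

Pull in the whole driver API documentation tree and the headers you depend on, and that token count blows past even a large context window, which is the point.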

scott_s commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
jimbokun · 4 months ago
Why haven’t the big AI companies been pursuing that approach, vs just ramping up context window size?
scott_s · 4 months ago
Because one family of models with very large context windows can be trained once and offered to the entire world as an online service. That is a very different business model from training or fine-tuning individual models for individual customers. Someone will figure out how to do that at scale, eventually. It might require the cost of training to come down significantly. But large companies with the resources to do this for themselves will do it, and many are doing it already.
scott_s commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
ehnto · 4 months ago
> If you're completely new to the problem then ... yes, it does.

Of course, because I am not new to the problem, whereas an LLM is new to it with every new prompt. I am not really trying to find a fair comparison; I believe humans have an unfair advantage in this instance, and I am trying to make that point rather than compare like-for-like abilities. I think we'll find that even with all the context clues from MCPs, history, etc., they might still fail to have the insight to recall the right data into the context, but that's just a feeling I have from working with Claude Code for a while. I instruct it to do those things, like looking through the git log and checking the documentation, and it sometimes finds a path through to an insight, but it's just as likely to get lost.

I alluded to it somewhere else but my experience with massive context windows so far has just been that it distracts the LLM. We are usually guiding it down a path with each new prompt and have a specific subset of information to give it, and so pumping the context full of unrelated code at the start seems to derail it from that path. That's anecdotal, though I encourage you to try messing around with it.

As always, there's a good chance I will eat my hat some day.

scott_s · 4 months ago
> Of course, because I am not new to the problem, whereas an LLM is new to it every new prompt.

That is true for the LLMs you have access to now. Now imagine if the LLM had been trained on your entire code base. And not just the code, but the entire commit history, commit messages and also all of your external design docs. And code and docs from all relevant projects. That LLM would not be new to the problem every prompt. Basically, imagine that you fine-tuned an LLM for your specific project. You will eventually have access to such an LLM.
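As a rough sketch of what building such a fine-tuning corpus from a repository might look like, using plain git commands (the JSONL prompt/completion format here is illustrative, not any particular vendor's):

    import json
    import subprocess

    def git(*args):
        return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout

    # One training example per commit: the commit message as the prompt, the patch as the completion.
    with open("finetune.jsonl", "w") as out:
        for sha in git("rev-list", "--max-count=1000", "HEAD").split():
            message = git("log", "-1", "--format=%B", sha)
            patch = git("show", "--format=", sha)  # empty format suppresses the header, leaving the diff
            out.write(json.dumps({"prompt": message.strip(), "completion": patch}) + "\n")

The same idea extends to design docs and external references; the hard part is doing this continuously and affordably for every customer, which is the business-model question above.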

scott_s commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
mplanchard · 7 months ago
Well, generally, that’s just not how hype works.

A thing being great doesn’t mean it’s going to generate outsized levels of hype forever. Nobody gets hyped about “The Internet” anymore, because novel use cases aren’t being discovered at a rapid clip, and it has well and thoroughly integrated into the general milieu of society. Same with GPS, vaccines, Docker containers, Rust, etc., but I mentioned the Internet first since it’s probably on a similar level of societal shift as AI is in the maximalist version of AI hype.

Once a thing becomes widespread and standardized, it becomes just another part of the world we live in, regardless of how incredible it is. It’s only exciting to be a hype man when you’ve got the weight of broad non-adoption to rail against.

Which brings me to the point I was originally trying to make, with a more well-defined set of terms: who cares if someone waits until the tooling is more widely adopted, easy to use, and somewhat standardized prior to jumping on the bandwagon? Not everyone needs to undergo the pain of being an early adopter, and if the tools become as good as everyone says they will, they will succeed on their merits, and not due to strident hype pieces.

I think some of the frustration the AI camp is dealing with right now is because y’all are the new Rust Evangelism Strike Force, just instead of “you’re a bad software engineer if you use a memory-unsafe language,” it’s “you’re a bad software engineer if you don’t use AI.”

scott_s · 7 months ago
The tools are at the point now that ignoring them is akin to ignoring Stack Overflow posts. Basically any time you'd google for the answer to something, you might as well ask an AI assistant. It has a good chance of giving you a good answer. And given how programming works, it's usually easy to verify the information. Just like, say, you would do with a Stack Overflow post.
scott_s commented on Exploiting Undefined Behavior in C/C++ Programs: The Performance Impact [pdf]   web.ist.utl.pt/nuno.lopes... · Posted by u/luu
KerrAvon · 8 months ago
At FAANG scale, you absolutely want to have a pass before deployment that does this or you're leaving money on the table.
scott_s · 8 months ago
It's not as obvious a win as you may think. Keep in mind that every binary that gets deployed and executed will be compiled many more times, before and after deployment, for testing. For some binaries, that count can easily reach hundreds of thousands. Why? In a monorepo, a lot of changes come in every day, and testing those changes involves traversing a reachability graph to find potentially affected code and running its tests.
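Concretely, the idea is a reverse-dependency walk: starting from the changed code, find everything that can reach it and rerun those tests. A tiny sketch with a made-up dependency graph:

    from collections import deque

    # Made-up build graph: target -> targets it depends on.
    DEPS = {
        "app":      ["libnet", "libutil"],
        "libnet":   ["libutil"],
        "libutil":  [],
        "app_test": ["app"],
        "net_test": ["libnet"],
    }

    def affected(changed):
        # Invert the graph: target -> targets that depend on it.
        rdeps = {t: set() for t in DEPS}
        for target, deps in DEPS.items():
            for dep in deps:
                rdeps[dep].add(target)
        # Walk outward from the changed targets; everything reached needs retesting.
        seen, queue = set(changed), deque(changed)
        while queue:
            for dependent in rdeps[queue.popleft()]:
                if dependent not in seen:
                    seen.add(dependent)
                    queue.append(dependent)
        return seen

    print(affected({"libutil"}))  # a change to libutil retests libnet, app, and both test targets

Every target in that reachable set gets rebuilt and rerun, which is where the compile count explodes.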
scott_s commented on Why is Warner Bros. Discovery putting old movies on YouTube?   tedium.co/2025/02/05/warn... · Posted by u/shortformblog
scott_s · 10 months ago
Murder in the First is one of them, and it is a long favorite of mine: https://www.youtube.com/watch?v=X42yOL5Ah4E&list=PL7Eup7JXSc...

It has the best performance I've ever seen by Kevin Bacon, and a solid performance from Christian Slater. Gary Oldman is a solid villain. R. Lee Ermey does his usual thing, but he's really good at that usual thing. I think about lines and ideas from it frequently. Granted, this is partly because the movie came out when I was 15 and I watched it at a formative age with friends. But I've also watched it recently, and I think it holds up.

scott_s commented on C: Simple Defer, Ready to Use   gustedt.wordpress.com/202... · Posted by u/ingve
Night_Thastus · a year ago
I really, really do not like what that could do to the readability of code if it was used liberally.

It's like GOTOs, but worse, because it's not as visible.

C++'s destructors feel like a better/more explicit way to handle these sorts of problems.

scott_s · a year ago
Look at the Linux kernel. It uses gotos for exactly this purpose, and it’s some of the cleanest C code you’ll ever read.

C++ destructors are great for this, but are not possible in C. Destructors require an object model that C does not have.

u/scott_s

Karma: 34094 · Cake day: March 2, 2008
About
Computer science research and systems software development.

http://www.scott-a-s.com
