Readit News
scott_s commented on Scientist exposes anti-wind groups as oil-funded, now they want to silence him   electrek.co/2025/08/25/sc... · Posted by u/xbmcuser
testhest · 7 days ago
Wind is only useful up to a point: once it gets above 20% of generation capacity, ensuring grid stability becomes expensive, either through huge price swings or grid-level energy storage.
scott_s · 6 days ago
That's true of all renewable energy sources. So we should take advantage of all of them, as much as is feasible.
scott_s commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
snowfield · 20 days ago
AI training doesn't work like that. You don't train it on context; you train it on recognition and patterns.
scott_s · 19 days ago
You train on data. Context is also data. If you want a model to have certain data, you can bake it into the model during training, or provide it as context during inference. But if the "context" you want the model to have is big enough, you're going to want to train (or fine-tune) on it.

Consider that you're coding a Linux device driver. If you ask for help from an LLM that has never seen the Linux kernel code, has never seen a Linux device driver and has never seen all of the documentation from the Linux kernel, you're going to need to provide all of this as context. And that's both going to be onerous on you, and it might not be feasible. But if the LLM has already seen all of that during training, you don't need to provide it as context. Your context may be as simple as "I am coding a Linux device driver" and show it some of your code.

scott_s commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
jimbokun · 20 days ago
Why haven’t the big AI companies been pursuing that approach, vs. just ramping up context window size?
scott_s · 20 days ago
Because training one family of models with very large context windows can be offered to the entire world as an online service. That is a very different business model from training or fine-tuning individual models specifically for individual customers. Someone will figure out how to do that at scale, eventually. It might require the cost of training to reduce significantly. But large companies with the resources to do this for themselves will do it, and many are doing it.
scott_s commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
ehnto · 21 days ago
> If you're completely new to the problem then ... yes, it does.

Of course, because I am not new to the problem, whereas an LLM is new to it with every new prompt. I am not really trying to find a fair comparison; I believe humans have an unfair advantage in this instance, and am trying to make that point rather than compare like-for-like abilities. I think we'll find that even with all the context clues from MCPs, history, etc., they might still fail to have the insight to recall the right data into the context, but that's just a feeling I have from working with Claude Code for a while. I instruct it to do those things, like look through the git log and check the documentation, and it sometimes finds a path through to an insight, but it's just as likely to get lost.

I alluded to it somewhere else but my experience with massive context windows so far has just been that it distracts the LLM. We are usually guiding it down a path with each new prompt and have a specific subset of information to give it, and so pumping the context full of unrelated code at the start seems to derail it from that path. That's anecdotal, though I encourage you to try messing around with it.

As always, there's a good chance I will eat my hat some day.

scott_s · 20 days ago
> Of course, because I am not new to the problem, whereas an LLM is new to it every new prompt.

That is true for the LLMs you have access to now. Now imagine if the LLM had been trained on your entire code base. And not just the code, but the entire commit history, commit messages and also all of your external design docs. And code and docs from all relevant projects. That LLM would not be new to the problem every prompt. Basically, imagine that you fine-tuned an LLM for your specific project. You will eventually have access to such an LLM.

scott_s commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
mplanchard · 3 months ago
Well, generally, that’s just not how hype works.

A thing being great doesn’t mean it’s going to generate outsized levels of hype forever. Nobody gets hyped about “The Internet” anymore, because novel use cases aren’t being discovered at a rapid clip, and it has well and thoroughly integrated into the general milieu of society. Same with GPS, vaccines, Docker containers, Rust, etc., but I mentioned the Internet first since it’s probably on a similar level of societal shift as AI is in the maximalist version of AI hype.

Once a thing becomes widespread and standardized, it becomes just another part of the world we live in, regardless of how incredible it is. It’s only exciting to be a hype man when you’ve got the weight of broad non-adoption to rail against.

Which brings me to the point I was originally trying to make, with a more well-defined set of terms: who cares if someone waits until the tooling is more widely adopted, easy to use, and somewhat standardized prior to jumping on the bandwagon? Not everyone needs to undergo the pain of being an early adopter, and if the tools become as good as everyone says they will, they will succeed on their merits, and not due to strident hype pieces.

I think some of the frustration the AI camp is dealing with right now is because y’all are the new Rust Evangelism Strike Force, just instead of “you’re a bad software engineer if you use memory-unsafe languages,” it’s “you’re a bad software engineer if you don’t use AI.”

scott_s · 3 months ago
The tools are at the point now that ignoring them is akin to ignoring Stack Overflow posts. Basically any time you'd google for the answer to something, you might as well ask an AI assistant. It has a good chance of giving you a good answer. And given how programming works, it's usually easy to verify the information. Just like, say, you would do with a Stack Overflow post.
scott_s commented on Exploiting Undefined Behavior in C/C++ Programs: The Performance Impact [pdf]   web.ist.utl.pt/nuno.lopes... · Posted by u/luu
KerrAvon · 4 months ago
At FAANG scale, you absolutely want to have a pass before deployment that does this or you're leaving money on the table.
scott_s · 4 months ago
It's not as obvious a win as you may think. Keep in mind that for every binary that gets deployed and executed, it will be compiled many more times before and after for testing. For some binaries, this number could easily reach the hundreds of thousands. Why? In a monorepo, a lot of changes come in every day, and testing those changes involves traversing a reachability graph of potentially affected code and running their tests.
scott_s commented on Why is Warner Bros. Discovery putting old movies on YouTube?   tedium.co/2025/02/05/warn... · Posted by u/shortformblog
scott_s · 7 months ago
Murder in the First is one of them, and it is a long favorite of mine: https://www.youtube.com/watch?v=X42yOL5Ah4E&list=PL7Eup7JXSc...

It has the best performance I've ever seen by Kevin Bacon, and a solid performance from Christian Slater. Gary Oldman is a solid villain. R. Lee Ermey does his usual thing, but he's really good at that usual thing. I think about lines and ideas from it frequently. Granted, this is partly because the movie came out when I was 15 and I watched it at a formative age with friends. But I've also watched it recently, and I think it holds up.

scott_s commented on C: Simple Defer, Ready to Use   gustedt.wordpress.com/202... · Posted by u/ingve
Night_Thastus · 8 months ago
I really, really do not like what that could do to the readability of code if it was used liberally.

It's like GOTOs, but worse, because it's not as visible.

C++'s destructors feel like a better/more explicit way to handle these sorts of problems.

scott_s · 8 months ago
Look at the Linux kernel. It uses gotos for exactly this purpose, and it’s some of the cleanest C code you’ll ever read.

C++ destructors are great for this, but are not possible in C. Destructors require an object model that C does not have.
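A minimal sketch of the kernel-style pattern (the function and file name are arbitrary): each failure path jumps to the label that releases exactly the resources acquired so far, in reverse order of acquisition.

```c
#include <stdio.h>
#include <stdlib.h>

/* Kernel-style goto cleanup: one exit path per level of acquired
 * resources, unwound in reverse order. Returns 0 on success, -1 on
 * any failure. */
int process(const char *path) {
    int ret = -1;

    char *buf = malloc(4096);
    if (!buf)
        goto out;           /* nothing acquired yet */

    FILE *f = fopen(path, "r");
    if (!f)
        goto out_free;      /* only buf to release */

    if (fread(buf, 1, 4096, f) == 0)
        goto out_close;     /* release f, then buf */

    ret = 0;                /* success falls through the cleanup */
out_close:
    fclose(f);
out_free:
    free(buf);
out:
    return ret;
}
```

Every resource is released exactly once on every path, without nesting the happy path inside layers of if/else — the readability win the kernel gets from this discipline.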

scott_s commented on Principles of Educational Programming Language Design   infedu.vu.lt/journal/INFE... · Posted by u/azhenley
cardanome · 9 months ago
> If a student is learning on a teaching language without much adoption, there's just not much else a student can do but use the materials that are part of the course.

I think that is the biggest advantage of using a teaching language. You can give them curated, high-quality examples and make sure they are not led astray by crap code they find on the internet.

And in the ChatGPT era it forces them to actually learn how to code instead of relying on AI.

The main problem is that student motivation is what drives learning success. Just because something is good for them and theoretically the best way to learn does not mean they will like it. It is difficult to sell young people long-term benefits that you can only see after many years of experience.

It seems to me that the majority of students prefer "real world" languages to more elegant teaching languages. I guess it is a combination of suspected career benefits and also not wanting to be patronized with a teaching language. I wish people were more open to learning a language for the fun of it, but it is what it is. Teach them the languages they want to learn.

scott_s · 9 months ago
I think you overestimate the ability of a tiny community of curators to generate examples to meet the curiosity of students.
scott_s commented on Principles of Educational Programming Language Design   infedu.vu.lt/journal/INFE... · Posted by u/azhenley
taeric · 9 months ago
I would be surprised if your first program was C++? Specifically, getting a decent C++ toolchain that can produce a meaningful program is not a small thing?

I'm not sure how I feel about languages made for teaching and whatnot, yet; but I would be remiss if I didn't encourage my kids to use https://scratch.mit.edu/ for their early programming. I remember early computers would boot into a BASIC prompt, and I could transcribe some programs to make screensavers and games. LOGO was not uncommon for exploring fractals and general path-finding ideas.

Even beyond games and screensavers, MS Access (or any similar offering, FoxPro, as an example) was easily more valuable for learning to program interfaces to data than I'm used to seeing from many lower-level offerings. Our industry's shunning of interface builders has done more to make it difficult to get kids programming than I think we admit.

Edit to add: Honestly, I think my kids learned more about game programming from Mario Builder at early ages than makes sense.

scott_s · 9 months ago
My first program was indeed C++. In 1998, my high school had a computer lab set up with Turbo C++, and I took a non-AP computer science class. In college, starting in 1999, after entering as a computer science major, we were guided to use Visual C++ on Windows. We got Visual C++ from our department; I can't remember if we paid or if it was just provided to us.

u/scott_s

Karma: 34092 · Member since: March 2, 2008

About
Computer science research and systems software development.

http://www.scott-a-s.com
