This means that each function only cares about its own error and how to generate it. And it doesn't require writing any macros yourself; thiserror's derive handles the boilerplate.
I'm yet another child of HyperCard. It opened my mind to what computers could be for, and even though the last two decades have been primarily full of disappointment, I still hold onto that other path as a possibility, or even as a slice of reality: a few weeds growing in the cracks of our dystopian concrete.
LLMs are not massive archives of data. The big models are a few TB in size. No one is forgoing a NYT subscription because they can ask ChatGPT to print out NYT news stories.
In computer programming we often only need the position of the gap to the left, though, so calling it "the rail that starts at x=0" works. Calling it "the rail that ends at x=1" is alright, I guess, if that's what you really want, but leads to more minus ones when you have to sum collections of things.
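To illustrate the "rail that starts at x=0" convention: with half-open boundaries, segment i spans [boundaries[i], boundaries[i+1]), so lengths fall out of a plain pairwise difference with no minus-ones. (A sketch with made-up data; the names are illustrative.)

```rust
// Half-open convention: each boundary marks the left edge of a segment,
// so segment i spans [boundaries[i], boundaries[i+1]).
fn segment_lengths(boundaries: &[i32]) -> Vec<i32> {
    boundaries.windows(2).map(|w| w[1] - w[0]).collect()
}

fn main() {
    let boundaries = [0, 3, 7, 12]; // rails at the left edge of each segment
    assert_eq!(segment_lengths(&boundaries), vec![3, 4, 5]);
}
```

With the "rail that ends at x=1" convention you'd instead be computing `w[1] - w[0] - 1` style corrections all over the place, which is where the extra minus ones come from.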
Fun fact: Swarm was one of the very few non-NeXT/Apple uses of Objective C. We used the GNU Objective C runtime. Dynamic typing was a huge help for multiagent programming compared to C++'s static typing and lack of runtime introspection. (Again, nearly 30 years ago. Things are different now.)
I enjoyed using it around 2002, got introduced via Rick Riolo at the University of Michigan Center for the Study of Complex Systems. It was a bit of a gateway drug for me from software into modeling, particularly since I was already doing OS X/Cocoa stuff in Objective-C.
A lot of scientific modelers start with differential equations, but coming from object-oriented software, ABMs made a lot more sense to me. Learning both approaches in parallel was really helpful in thinking about scale, dimensionality, representation, etc. in the modeling process, as ODEs and complex ABMs—often pathologically complex—represent end points of a continuum.
Tangentially, in one of Rick's classes we read about perceptrons, and at one point the conversation turned to, hey, would it be possible to just dump all the text of the Internet into a neural net? And here we are.
I think you meant second note. :)
That said, I think all-EV is the right move even if it causes some short-term pain. Charging will get there before too long, and this lets GM focus their engineering effort. I imagine they've considered reviving some Volt-like cars to get them through the awkward transition phase.
I'm a late-stage early adopter and love my 2023 Bolt, which is quite popular and inexpensive—my sister got one, and then my parents got one too. GM stupidly canceled the Bolt, but was wise enough to un-cancel it for 2025.
But they did make one really weird mistake: abandoning CarPlay. Why would they do that?
Funny that this time it started from the right side of the political spectrum.
https://en.wikipedia.org/wiki/Horseshoe_theory