In any case: blindly switching what is essentially a typedef-ed int into _Bool has no business working as expected, since _Bool is a rather quirky type.
The earlier ones hit a crane, which one could argue was an edge case, since it was a temporary structure. This one hit a building, which suggests something much more fundamentally wrong with the tech.
Could you be specific about what you mean by "just"? From the article:
> Amazon told CBS Texas that it’s investigating the cause of the crash that happened Wednesday afternoon.
Did it hit a bird? Did the wind blow something into it? Was it a 0.01% occurrence of some hardware failure? A design flaw? Who knows.
Extrapolating a few crashes within this new tech use case to some fundamental flaw of drone flight isn't reasonable, at the moment.
I suppose a safe alternative would be pneumatic tubes dug to everyone's door. But, only things that are economically feasible can exist in the world. So, instead of perfection, we're left with the iteration and compromise that is engineering, regulations and enforcement to bound it, and insurance to catch the edge cases.
A large part of the FAA regulation around drones is based on existing in reality, with its lack of perfection: how much damage a drone can do (this is what limits the weight and speed).
> Option 1) seemed like the easiest one, but it also felt a bit like kicking the can down the road – plus, it introduced the question of which standard to use.
Arguably, that's the sanest one: you can't expect the old C code to follow the rules of the new versions of the language. In a better world, each source file would start with something like
#pragma lang_ver stdc89
and it would automatically kick off the compatibility mode in newer compilers, but oh well. Even modern languages such as Go miss this obvious solution.

On the topic of the article: yeah, sticking anything other than 0 or 1 into the C99 bool type is UB. Use ints.
Well, to be pedantic, the entire point of the C standard, and the standards body, is that you should expect it to work, as long as you're working within the standard!
You found room by claiming I have some other opinions. In fact, I originally asked some questions you chose not to answer.
That all raises some more questions: what about my statements isn't factual? What about your statements isn't factual?
I have a few guesses. You may think AI can write a better compiler. You may think AI has already written a better compiler. You may think humans shouldn't write code anymore.
All of those are examples of opinions you might declare, but maybe you meant to say something factual. If those really are the only things you meant to debate, I have to admit I didn't think they were going anywhere, as they've been done to death. I thought maybe you had something else in mind.
But if your perspective is limited to the present, you need to be more precise with your words, so you don't confuse the reader into thinking you're extending observations that apply only to the present into the future.
I personally don't find discussions of current capabilities all that interesting, for something that was fiction a few years ago and has shown a fairly steady increase in utility. I'm an engineer at heart and live for and enjoy the iterative process of improvement. As a consequence, I think the present is the boring place, because that's where iteration dies! I don't think we'll entertain each other. ;)
I think it's going to be an amazing shift, from those who know the intricate details of software to those who have the best ideas that can be implemented with software (a shift from tool makers to tool users).
I think many developers misunderstand the quality of software that people outside of software are willing to live with, if it does exactly what they want when they need it. For a user, it's all a black box, "does something I want or not", regardless of what's under the hood. Mostly "academic" qualities like "elegant", "clean", and "maintainable" almost never matter in practice for most solutions to actual problems. This is something I learned far too late in my professional career, where the lazy dev with shite code would get the same recognition as the guy with beautiful code: does it solve the real-world problem or not?
Safety-critical, secure, etc., sure, but most software is not. And even with those, the libraries/APIs/etc. are separate components.
We already have determinism in all machines without this wasteful layer of slop and indirection, and we're all sick and tired of the armchair philosophy.
It's very clear where LLMs will be used and it's not as a compiler. All disagreements with that are either made in bad faith or deeply ignorant.
Declaring an opinion and then making discussion about it impossible isn't a useful way to communicate or reason about things.
I'm guessing most of the gains we've seen recently are from post-training rather than pretraining.
But, I naively assume most orgs would opt out. I know some orgs have a proxy in place that will prevent certain proprietary code from passing through!
This makes me curious whether, in the allow case, Anthropic is recording generated output, maybe to down-weight it if it's seen in the training data (or something similar)?
Or, is that what was missed? Better silos, with some sort of semi non-community enforcement for the quality of interaction/comment?