I agree that static linking is great and that Python sucks, but I was trying to say that I can, very easily, `mkdir new-py-program`, create app.py with a `__main__` in it, and be done; same for new-perl-program/app.pl or my-new-c-file/main.c, etc.
For two of the three above I can even easily make single-file executables, Go-style.
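To make that concrete, here's a minimal sketch of the Go case (file name and message are just illustrative): one file, `go build`, and you get a single self-contained binary.

```go
// main.go -- the entire program; `go build` turns this directory
// into one statically linked executable with no runtime to install.
package main

import "fmt"

// greeting is split out into a function so the behavior is
// trivially checkable.
func greeting() string {
	return "hello from a single binary"
}

func main() {
	fmt.Println(greeting())
}
```

The resulting binary can be copied to another machine of the same OS/architecture and run as-is, with nothing else installed.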
I don't understand your comment on magic comments. You don't need them to cross-compile a program; I was already doing that routinely 10 years ago. All I needed was `GOOS=linux GOARCH=386 go build myprog && scp myprog myserver:`
The second part, "Running go install at the root ./..", is actually terrible and risky, but still trivial with make (a, literally, 50-year-old program), or shell, or just whatever.
I get that the feelz are nice and all (just go $subcmd) but.. come on.
You can't do that with Python, for instance. First, you need a Python interpreter on the target machine, and on top of that you need the correct version of the interpreter: if yours is too old or too new, things might break. Then you need to install all the dependencies, the correct version of each as well; and they might not exist for your system, or might conflict with some other lib you already have on the target machine.
Same problem with any other language that needs a runtime, including Java and C#, obviously.
C/C++ dependency management is a nightmare too.
Rust is slightly better, but there was no production-ready rust 16 years ago (or even 10 years ago).
What do any of these have to do with guarantees of long-term compatibility? I'm not arguing that there should be One Programming Language To Rule Them All, I'm asking about whether we can design better guarantees about long-term compatibility into new programming languages.
That is not counter to what I'm saying.
Mathematical notation <=> Programming Languages.
Proofs <=> Code.
When mathematical notation evolves, old proofs do not become obsolete! There is no analogue to a "breaking change" in math. The closest we came to this was Gödel's Incompleteness Theorem and the Cambrian explosion of new sets of axioms, but with a lot of work most of math was "re-founded" on a set of commonly accepted axioms. We can see how hostile the mathematical community is to "breaking changes" by the level of crisis the Incompleteness Theorem caused.

You are certainly free to use a different set of axioms than ZF(C), but you need to be very careful about which proofs you rely on; just as you are free to use a very different programming language or programming paradigm, but you may be limited in the libraries available to you. But if you wake up one morning and your code no longer compiles, the analogy is mathematicians waking up one day and realizing that a previously correct proof is now suddenly incorrect -- not that it was always wrong, but that changes in math forced it into incorrectness. It's rather unthinkable.
Of course programming languages should improve, diversify, and change over time as we learn more. Backward-compatible changes do not violate my principle at all. However, when we are faced with a possible breaking change to a programming language, we should think very hard about whether we're changing the original intent and paradigms of the programming language and whether we're better off basically making a new spinoff language or something similar. I understand why it's annoying that Python 2.7 is around, but I also understand why it'd be so much more annoying if it weren't.
Surely our industry could improve dramatically in this area if it cared to. Can we write a family of nested programming languages where core features are guaranteed not to change in breaking ways, and you take on progressively more risk as you use features more to the "outside" of the language? Can we get better at formalizing which language features we're relying on? Better at isolating and versioning our language changes? Better at time-hardening our code? I promise you there's a ton of fruitful work in this area, and my claim is that that would be very good for the long-term health and maturation of our discipline.
I disagree. The development of non-Euclidean geometry broke a lot of theorems that had been used for centuries but failed to generalize. All of a sudden, parallel lines could meet.
> Can we write a family of nested programming languages where core features are guaranteed not to change in breaking ways, and you take on progressively more risk as you use features more to the "outside" of the language?
We could, the problem is everyone disagrees on what that core should be. Should it be memory-efficient? Fast? Secure? Simple? Easy to formally prove? Easy for beginners? Work on old architecture? Work on embedded architecture? Depending on who you ask and what your goals are, you'll pick a different set of core features, and thus a different notation for your core language.
That's the difference between math & programming languages. Everyone agrees on math's overall purpose. It's a tool to understand, formalise and reason about abstractions. And mathematical notation should make that easier.
That being said, the most serious candidate for your "core language guaranteed not to change and that you can build onto" would be ANSI C. It's been there for more than 35 years, is a standard, is virtually everywhere; you can write a conforming compiler for a brand new architecture, even an embedded microchip, very easily; and most if not all of the popular languages nowadays are built on it (C++ of course, but also C#, Java, JavaScript, Python, Go, PHP, Perl, Haskell, and Rust all have a C base), and they all use a C FFI. I'm not sure ANSI C was the best thing that ever happened to our industry, though.
> [Donald Knuth] firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features
This is such a breath of fresh air in a world where everything is considered obsolete after like 3 years. Our industry has a disease, an insatiable hunger for newness over completeness or correctness.
There's no reason we can't be writing code that lasts 100 years. Code is just math. Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago? LOL dude get with the times, everyone uses Equately for their equations now. It was made by 3 interns at Facebook, so it's pretty much the new hotness." No, I don't think I will use "Equately", I think I'll stick to the tried-and-true idea that has been around for 3000 years.
Forget new versions of everything all the time. The people who can write code that doesn't need to change might be the only people who are really contributing to this industry.
Not sure this is the best example. Mathematical notation has evolved a lot over the last thousand years. We're not using Roman numerals anymore, and the inventions of zero and of the equals sign were incredible new features.
Which, once more, was not happening here. The car was lost, not out of control. And the built-in solution was applied, and worked.
What do you really want here? You've never been in an Uber that took a wrong turn? Never argued with a cab driver about a route? Never been stuck in an airliner at the end of a runway waiting for clearance? Vehicles do things their occupants don't like all the time, and no one freaks out on the internet about it.
> You've never been in an Uber that took a wrong turn? Never argued with a cab driver about a route?
If a cab driver keeps circling a parking lot, again and again and again, never finding the obvious exit, I'll get a bit concerned and eventually tell him, "OK, never mind, I'll find another way, just drop me off here." If he refuses to let me out, locks the doors, and keeps circling, I'll be extremely anxious. This is horror-movie material.
Is it embarrassing for Waymo? Sure. But the BBC basically lied to you here, and you bought it. The circumstance you're imagining is not what happened. Just watch the video.
Sounds like a recipe for failure, to be honest. In a potentially life or death situation, the last thing I want is to rely on a remote human being. Plus, if the device entered an erroneous state, I certainly don't trust it to correctly interpret a remote "emergency stop" signal.
If the machine goes crazy (and there is no world where driving around a parking lot until the end of time is the rational expected behavior), the only safe option is a big red emergency stop button that physically cuts the circuit.
To me this is one of the most underrated qualities of go code.
Go is a language I started learning years ago, and it hasn't changed dramatically since, so my knowledge is still useful even almost ten years later.
I never had any issue. The program still compiles perfectly, cross-compiles to Windows, Linux and macOS, no dependency issues, no breaking changes in the language, nothing. For those use cases, Go is a godsend.