> There are also many fictitious names for 64-bit x86, which you should avoid unless you want the younger generation to make fun of you. amd64 refers to AMD’s original implementation of long mode in their K8 microarchitecture, first shipped in their Athlon 64 product. Calling it amd64 is silly and also looks a lot like arm64, and I am honestly kinda annoyed at how much Go code I’ve seen with files named fast_arm64.s and fast_amd64.s. Debian also uses amd64/arm64, which makes browsing packages kind of annoying.
I prefer amd64 as it's so much easier to type and scans so much better. x86_64 is so awkward.
Bikeshed, I guess, and in the abstract I can see how x86_64 is better, but pragmatism > purity, and you'll take my amd64 from my cold, dead hands.
As for Go, you can get the GOARCH/GOOS combinations from `go tool dist list`. That can be useful if you want to ensure your code cross-compiles in CI.
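A minimal sketch of such a CI check (assuming a pure-Go module; targets that need cgo or OS-specific dependencies would have to be filtered out):

```
# Attempt to build the module for every GOOS/GOARCH pair Go supports.
for target in $(go tool dist list); do
  GOOS="${target%/*}" GOARCH="${target#*/}" go build ./... \
    || echo "build failed for $target"
done
```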
There's the genuine AMD64 and the knock-off by their slower competitor; the name gives due emphasis to the first to market. I don't see what is confusing about that.
Some other sources of target triples (some mentioned in the article, some not):
rustc: `rustc --print target-list`
golang: `go tool dist list`
zig: `zig targets`
As the article points out, the complete lack of standardization and consistency in what constitutes a "triple" (sometimes actually a quad!) is kind of hellishly hilarious.
> what constitutes a "triple" (sometimes actually a quad!)
It is actually a quintuple at most, because the first part, the architecture, may contain a version (e.g. for ARM). And yet even that doesn't fully describe the actual target, because some targets also require an OS version (e.g. macOS). Doubly silly.
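Both wrinkles are easy to see from the tools themselves (the outputs in the comments below are what I'd expect, not verbatim):

```
# Architecture component carrying a version:
rustc --print target-list | grep '^thumbv8m'
# -> thumbv8m.base-none-eabi, thumbv8m.main-none-eabihf, ...

# OS component carrying a version on Apple hosts:
clang -print-effective-triple
# -> e.g. arm64-apple-macosx15.0.0
```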
As a Go developer, I certainly find the complaints about the Go conventions amusing. I guess if you have really invested so much into understanding all the details in the rest of this article, you might be annoyed that it doesn't translate 1:1 to Go.
But for the rest of us, I'm so glad that I can just cross-compile things in Go without thinking about it. The annoying thing about setting up cross-compilation in GCC is not learning the naming conventions; it is getting the correct toolchains installed and wired up correctly in your build system. Go just ships that out of the box, and it is so much more pleasant.
It's also one thing that is great about Zig. Using Go+Zig when I need to cross-compile something that includes cgo is so much better than trying to get GCC toolchains set up properly.
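For anyone who hasn't seen the trick, it looks roughly like this (a sketch; the exact `-target` spelling depends on which libc you want):

```
# Cross-compile a cgo program for linux/arm64, with zig as the C toolchain.
CGO_ENABLED=1 GOOS=linux GOARCH=arm64 \
  CC="zig cc -target aarch64-linux-gnu" \
  CXX="zig c++ -target aarch64-linux-gnu" \
  go build ./...
```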
I did start trying to take clang's TargetInfo code (https://github.com/llvm/llvm-project/blob/main/clang/lib/Bas...) and port it over to TableGen, primarily so somebody could extract useful auto-generated documentation out of it, like "What are all the available targets?"
I actually do have working code for the triple-to-TargetInfo instantiation portion (which is fun because there are one or two cases that juuuust aren't quite like all of the others, and I'm not sure if that's a bad copy-paste job or actually intentional). But I never got around to working out how to integrate the bodies of the TargetInfo implementations--which provide things like the properties of C/C++ fundamental types or default macros--into the TableGen easily, so that patch is still languishing somewhere on my computer.
Great article, but I was really put off by this bit, which, aside from being very condescending, simply isn't true, and reveals a lack of appreciation for an innovation that I would have thought someone writing about target triples and compilers would value:
> Why the Windows people invented a whole other ABI instead of making things clean and simple like Apple did with Rosetta on ARM MacBooks? I have no idea, but http://www.emulators.com/docs/abc_arm64ec_explained.htm contains various excuses, none of which I am impressed by. My read is that their compiler org was just worse at life than Apple’s, which is not surprising, since Apple does compilers better than anyone else in the business.
I was already familiar with ARM64EC from reading about its development from Microsoft over the past few years, but had not come across the emulators.com link before; it's a stupendous (long) read and well worth the time if you are interested in lower-level shenanigans. The truth is that Microsoft's ARM64EC solution is a hundred times more brilliant and a thousand times better for backwards (and forwards) compatibility than Rosetta on macOS. Rosetta gave the user a far inferior experience to native code: it executed (sometimes far) slower, prevented interop between legacy and modern code, left app devs having to do a full port to adopt newer tech (or even just to have a UI that matched the rest of the system), and was always intended as a merely transitional bit of tech, there to last the few years it took for native x86 apps to be developed and take the place of (usurp) the old PPC ones.
Microsoft's solution has none of these drawbacks (except the noted lack of AVX support). It doesn't require every app to be 2x or 3x as large as a sacrifice to the fat-binaries hack, offers a much more elegant path for developers to migrate their code (piecemeal or otherwise) to a new platform when they don't know whether it will be worth their time/money to invest in a full rewrite, lets users keep using all the apps they love, and maintains Microsoft's well-earned reputation for backwards compatibility.
When you run an app built for Windows 2000 on Windows 11 (x86 or ARM), you don't see the old Windows 2000 aesthetic (and if you do, there's an easy way for users to opt into newer theming rather than requiring the developer to do something about it), and you aren't stuck with bugs from 30 years ago that were long since patched by the vendor many OS releases ago.
The thing named Rosetta (actually Rosetta 2) for the x86_64 -> ARM transition is technologically completely unrelated to the PPC -> x86 Rosetta, and has none of the problems you mention. There's no user-observable difference between a program using Rosetta and a native program in modern macOS, and porting programs which didn't have any assembly or other CPU-arch-specific code was generally just a matter of wrangling your build system.
They do indeed. Rosetta 2 is light-years beyond Rosetta when it comes to performance and emulation-overhead strategies, and it benefits from hardware support (and from having to do less work simply because there are fewer differences between the host and target architectures), but it still fundamentally relies on emulating the entirety of the stack. There is almost zero information disclosed about its internals, but from what I understand it still revolves around fat binaries - and necessitates that Apple compile their frameworks against both x86_64 and arm64. Unlike the MS solution, with Rosetta 2 you cannot call a native ARM64 library from an x86_64 binary, and you can't port your code over piece by piece. And once Apple decides to no longer ship the next version of xxx framework as a fat binary because they don't want to maintain support for two different architectures in their codebase (wholly understandable), you'll (at best) be left with an older version of said framework that hasn't been patched to address the latest bugs, doesn't behave the same way that newer apps linking against the newer version of the framework do, etc.
You have neglected to consider that Microsoft bad; consider how they once did something differently from a Linux distribution I use. (This sentiment is alive and well among otherwise intelligent people; it's embarrassing to read.)
People have told me that before but I was unable to find official documentation of this fact. Can you point me to it? Closest I found is forum posts claiming the ABI compatibility is good.
That's because on Linux systems it's typical for domain name resolution to be provided by glibc. As a result, people ended up depending on glibc. They were writing GNU/Linux software, not Linux software.
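Concretely, glibc's getaddrinfo() walks the sources listed in nsswitch.conf, so "resolving a name" is whatever your libc config says it is (a classic default shown below; many distros differ):

```
# /etc/nsswitch.conf (excerpt)
hosts: files dns
```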
x32 support has not been removed from the Linux kernel. In fact, we're still maintaining Debian for x32 in Debian Ports.
I assumed the author was talking about the x86 emulator released for the ARM migration a few years ago, not the PowerPC one.
It does work on Linux, the only kernel that promises a stable binary interface to user space.
https://www.matheusmoreira.com/articles/linux-system-calls
https://wiki.archlinux.org/title/Domain_name_resolution
https://en.wikipedia.org/wiki/Name_Service_Switch
https://man.archlinux.org/man/getaddrinfo.3
This is user space stuff. You can trash all of this and roll your own mechanism to resolve the names however you want. Go probably did so. Linux will not complain in any way whatsoever.
Linux is the only kernel that lets you do this. Other kernels will break your software if you bypass their system libraries.
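Go did indeed do so: its net package bundles a pure-Go DNS resolver alongside the cgo/libc path, and you can force or inspect the choice (the program path below is a placeholder):

```
# Force the pure-Go resolver and print resolver decisions (debug level 1).
GODEBUG=netdns=go+1 go run ./yourprogram

# Or cut the libc path out entirely at build time.
CGO_ENABLED=0 go build ./yourprogram
```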
> No idea what this is, and Google won’t help me.
It seems that Kalimba is a DSP, originally by CSR and now by Qualcomm. The CSR8640 uses it, for example: https://www.qualcomm.com/products/internet-of-things/consume...
VE is harder to find with such a short name.
I see rumors they charge for the compiler though.