arp242 · 8 months ago
> There are also many fictitious names for 64-bit x86, which you should avoid unless you want the younger generation to make fun of you. amd64 refers to AMD’s original implementation of long mode in their K8 microarchitecture, first shipped in their Athlon 64 product. Calling it amd64 is silly and also looks a lot like arm64, and I am honestly kinda annoyed at how much Go code I’ve seen with files named fast_arm64.s and fast_amd64.s. Debian also uses amd64/arm64, which makes browsing packages kind of annoying.

I prefer amd64 as it's much easier to type and scans far better. x86_64 is so awkward.

Bikeshed I guess and in the abstract I can see how x86_64 is better, but pragmatism > purity and you'll take my amd64 from my cold dead hands.

As for Go, you can get the GOARCH/GOOS combinations from "go tool dist list". Can be useful at times if you want to ensure your code cross-compiles in CI.

kowabungalow · 8 months ago
There's genuine AMD_64 and the knock off by their slower competitor who coined the genuine emphasis for the first to market. I don't see what is confusing about that.
peterldowns · 8 months ago
Some other sources of target triples (some mentioned in the article, some not):

rustc: `rustc --print target-list`

golang: `go tool dist list`

zig: `zig targets`

As the article points out, the complete lack of standardization and consistency in what constitutes a "triple" (sometimes actually a quad!) is kind of hellishly hilarious.

lifthrasiir · 8 months ago
> what constitutes a "triple" (sometimes actually a quad!)

It is actually a quintuple at most, because the first part, the architecture, may itself contain a version (e.g. for ARM). And even then it doesn't fully describe the actual target, because some targets also require an OS version (e.g. macOS). Doubly silly.

achierius · 8 months ago
Why would macOS in particular require an OS version where other platforms would not -- just backwards compatibility?
ycombinatrix · 8 months ago
at least we don't have to deal with --build, --host, --target nonsense anymore
rendaw · 8 months ago
You do on Nix. And it's as inconsistently implemented there as anywhere.
psanford · 8 months ago
As a Go developer, I certainly find the complaints about the go conventions amusing. I guess if you have really invested so much into understanding all the details in the rest of this article you might be annoyed that it doesn't translate 1 to 1 to Go.

But for the rest of us, I'm so glad that I can just cross compile things in Go without thinking about it. The annoying thing with setting up cross compilation in GCC is not learning the naming conventions; it is getting the correct toolchains installed and wired up correctly in your build system. Go just ships that out of the box and it is so much more pleasant.

It's also one thing that is great about Zig. Using Go+Zig when I need to cross-compile something that includes cgo is so much better than trying to get GCC toolchains set up properly.

cbmuser · 8 months ago
»32-bit x86 is extremely not called “x32”; this is what Linux used to call its x86 ILP32 variant before it was removed.«

x32 support has not been removed from the Linux kernel. In fact, we're still maintaining Debian for x32 in Debian Ports.

jcranmer · 8 months ago
I did start trying to take clang's TargetInfo code (https://github.com/llvm/llvm-project/blob/main/clang/lib/Bas...) and port it over to TableGen, primarily so somebody could actually extract useful auto-generated documentation out of it, like "What are all the targets available?"

I actually do have working code for the triple-to-TargetInfo instantiation portion (which is fun because there's one or two cases that juuuust aren't quite like all of the others, and I'm not sure if that's a bad copy-paste job or actually intentional). But I never got around to working out how to actually integrate the actual bodies of TargetInfo implementations--which provide things like the properties of C/C++ fundamental types or default macros--into the TableGen easily, so that patch is still merely languishing somewhere on my computer.

ComputerGuru · 8 months ago
Great article, but I was really put off by this bit, which, aside from being very condescending, simply isn't true and reveals a lack of appreciation for an innovation that I would have thought someone writing about target triples and compilers would have appreciated:

> Why the Windows people invented a whole other ABI instead of making things clean and simple like Apple did with Rosetta on ARM MacBooks? I have no idea, but http://www.emulators.com/docs/abc_arm64ec_explained.htm contains various excuses, none of which I am impressed by. My read is that their compiler org was just worse at life than Apple’s, which is not surprising, since Apple does compilers better than anyone else in the business.

I was already familiar with ARM64EC from reading about its development from Microsoft over the past years, but had not come across the emulators.com link before - it's a stupendous (long) read and well worth the time if you are interested in lower-level shenanigans. The truth is that Microsoft's ARM64EC solution is a hundred times more brilliant and a thousand times better for backwards (and forwards) compatibility than Rosetta on macOS. Rosetta gave the user a far inferior experience to native code, executed (sometimes far) slower, prevented interop between legacy and modern code, and left app devs having to do a full port to use newer tech (or even just to have a UI that matched the rest of the system); it was always intended as a merely transitional bit of tech to last the few years it took for native x86 apps to be developed and usurp the old PPC ones.

Microsoft's solution has none of these drawbacks (except the noted lack of AVX support), doesn't require every app to be 2x or 3x as large as a sacrifice to the fat binaries hack, offers a much more elegant solution for developers to migrate their code (piecemeal or otherwise) to a new platform where they don't know if it will be worth their time/money to invest in a full rewrite, lets users use all the apps they love, and maintains Microsoft's very much well-earned legacy for backwards compatibility.

When you run an app for Windows 2000 on Windows 11 (x86 or ARM), you don't see the old Windows 2000 aesthetic (and if you do, there's an easy way for users to opt into newer theming rather than requiring the developer to do something about it) and you aren't stuck with bugs from 30 years ago that were long since patched by the vendor many OS releases ago.

plorkyeran · 8 months ago
The thing named Rosetta (actually Rosetta 2) for the x86_64 -> ARM transition is technologically completely unrelated to the PPC -> x86 Rosetta, and has none of the problems you mention. There's no user-observable difference between a program using Rosetta and a native program in modern macOS, and porting programs which didn't have any assembly or other CPU-arch-specific code was generally just a matter of wrangling your build system.
ComputerGuru · 8 months ago
I addressed that in my response here: https://news.ycombinator.com/item?id=43720758
Zamiel_Snawley · 8 months ago
Do those criticisms of Rosetta hold for Rosetta 2?

I assumed the author was talking about the x86 emulator released for the arm migration a few years ago, not the powerpc one.

ComputerGuru · 8 months ago
They do indeed. Rosetta 2 is light-years beyond Rosetta when it comes to performance and emulation overhead strategies, and it benefits from hardware support (and from having less work to do, simply because there are fewer differences between the host/target architectures), but it still fundamentally relies on emulating the entirety of the stack. There is almost zero information about its internals disclosed, but from what I understand it still revolves around fat binaries - and necessitates that Apple compiles their frameworks against both x86_64 and arm64. Unlike the MS solution, with Rosetta 2 you cannot call a native ARM64 library from an x86_64 binary, you can't port your code over piece-by-piece, and once Apple decides to no longer ship the next version of xxx framework as a fat binary because they don't want to maintain support for two different architectures in their codebase (wholly understandable), you'll (at best) be left with an older version of said framework that hasn't been patched to address the latest bugs, doesn't behave the same way that newer apps linking against the newer version of the framework do, etc.
Philpax · 8 months ago
This author has a tendency to be condescending about things they find disagreeable. It's why I stopped reading them.
juped · 8 months ago
You have neglected to consider that Microsoft bad; consider how they once did something differently from a Linux distribution I use. (This sentiment is alive and well among otherwise intelligent people; it's embarrassing to read.)
matheusmoreira · 8 months ago
> Go originally wanted to not have to link any system libraries, something that does not actually work

It does work on Linux, the only kernel that promises a stable binary interface to user space.

https://www.matheusmoreira.com/articles/linux-system-calls

lonjil · 8 months ago
FreeBSD does as well, but old ABI versions aren't kept forever.
matheusmoreira · 8 months ago
People have told me that before, but I was unable to find official documentation of this fact. Can you point me to it? The closest I found was forum posts claiming the ABI compatibility is good.
damagednoob · 8 months ago
When developing a small program for my Synology NAS in Go, I'm sure I had to target a specific version of glibc.
matheusmoreira · 8 months ago
Probably because the networking libraries use it for name resolution. That's a choice the developers of the Go implementation made. It's not required.
guipsp · 8 months ago
Does it really tho? I've had address resolution break more than once in go programs.
matheusmoreira · 8 months ago
That's because on Linux systems it's typical for domain name resolution to be provided by glibc. As a result, people ended up depending on glibc. They were writing GNU/Linux software, not Linux software.

https://wiki.archlinux.org/title/Domain_name_resolution

https://en.wikipedia.org/wiki/Name_Service_Switch

https://man.archlinux.org/man/getaddrinfo.3

This is user space stuff. You can trash all of this and roll your own mechanism to resolve the names however you want. Go probably did so. Linux will not complain in any way whatsoever.

Linux is the only kernel that lets you do this. Other kernels will break your software if you bypass their system libraries.

vient · 8 months ago
> Kalimba, VE

> No idea what this is, and Google won’t help me.

Seems that Kalimba is a DSP, originally by CSR and now by Qualcomm. The CSR8640 uses it, for example: https://www.qualcomm.com/products/internet-of-things/consume...

VE is harder to find with such a short name.

AKSF_Ackermann · 8 months ago
NEC Vector Engine. Basically not a thing outside supercomputers.
fc417fc802 · 8 months ago
$800 for the 20B-P model on ebay. More memory bandwidth than a 4090. I wonder if llama.cpp could be made to run on it?

I see rumors they charge for the compiler though.