- Linux 2.6.18+ (arm64 or amd64), i.e. any distro from RHEL5 onward
- MacOS 15.6+ (arm64 or amd64, GPU only supported on arm64)
- Windows 8+ (amd64)
- FreeBSD 13+ (amd64, GPU should work in theory)
- NetBSD 9.2+ (amd64, GPU should work in theory)
- OpenBSD 7+ (amd64, no GPU support)
- AMD64 microprocessors must have SSSE3; otherwise llamafile will print an error and refuse to run. This means that Intel CPUs need to be Intel Core or newer (circa 2006+), and AMD CPUs need to be Bulldozer or newer (circa 2011+). If you have a newer CPU with AVX, or better yet AVX2, llamafile will use those features to go faster. There is no runtime dispatching for AVX512+ yet. (A quick way to check these features is sketched after this list.)
- ARM64 microprocessors must have ARMv8a+. This means everything from Apple Silicon to 64-bit Raspberry Pis will work, provided your weights fit into memory.
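If you're unsure whether your x86-64 machine meets the SSSE3/AVX/AVX2 requirements above, one quick way to check is GCC/Clang's `__builtin_cpu_supports()`. This is just a minimal sketch for inspecting your own hardware, not llamafile's actual runtime-dispatch code:

```c
// check_cpu.c -- compile with: cc -o check_cpu check_cpu.c
// Minimal sketch: reports the x86-64 features mentioned above.
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();  // populate the CPU feature cache (GCC/Clang builtin)
    printf("ssse3 (required)    : %s\n", __builtin_cpu_supports("ssse3") ? "yes" : "no");
    printf("avx   (faster)      : %s\n", __builtin_cpu_supports("avx")   ? "yes" : "no");
    printf("avx2  (faster still): %s\n", __builtin_cpu_supports("avx2")  ? "yes" : "no");
    return 0;
}
```

On Linux, `grep flags /proc/cpuinfo` lists the same feature names, so you can check without compiling anything.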
I've also tested that GPU support works on Google Cloud Platform and on Nvidia Jetson, which has a somewhat different environment. Apple Metal is obviously supported too, and is basically a sure thing so long as Xcode is installed.