acka · a year ago
Somewhat off topic but still highly relevant for people who actually want to use projects like this: why oh why do so many build recipes such as Dockerfiles insist on pulling random stuff off the internet as part of the build process? For example, the Dockerfile in this project pulls in two Git repositories and a script at build time.
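To be concrete, the safer pattern is to pin anything fetched at build time to a digest, so the build fails loudly instead of silently picking up new code. A tiny sketch (the script, URL, and hash are placeholders, not this project's actual dependencies; a local file stands in for the download, and in a real Dockerfile RUN step the digest would be hard-coded after an audit):

```shell
set -e
# Stand-in for: curl -fsSL "$URL" -o install.sh (network omitted here)
printf 'echo hello from pinned script\n' > install.sh
# In a real recipe this digest is hard-coded after you audit the script;
# here we compute it on the spot just to demonstrate the check.
expected="$(sha256sum install.sh | cut -d' ' -f1)"
# sha256sum -c aborts the build if the fetched file ever differs
echo "$expected  install.sh" | sha256sum -c -
sh install.sh
```

The same one-line check turns "pull random stuff off the internet" into something a sandboxed, audited build can at least verify.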

Besides the obvious build failures on heavily sandboxed build servers with no internet access, this forces anyone with even a little concern for security to fully audit any build recipe before using it; merely studying and mirroring the dependencies listed in READMEs and build manifests like requirements.txt, package.json, etc. is no longer enough.

I find this a very worrying development, especially given the rise in critical computer infrastructure failures and supply chain attacks we've seen lately.

amelius · a year ago
I really hate it when projects pull build files from the internet. Usually this happens unexpectedly. Besides the security issues that you mentioned, it also means that packaging software that depends on it becomes much more difficult and prone to unpleasant surprises, like when there is a version issue or when there is simply no internet, and of course the worst nightmare is if the dependency is not available anymore.

Self-contained distribution should be the norm.

doublerabbit · a year ago
The use of GitHub for dependencies within Go projects is one of the reasons why I back away from using Go.
dbtablesorrows · a year ago
The answer is churn, my friend.

There's so much churn in devops space that nobody has time to figure out "the correct way" anymore.

nstart · a year ago
In this case, the individual did it for their own research purposes for security stuff. I looked at their profile. They have a demo running a modded version of Doom on a John Deere tractor display. This person definitely takes the time to figure stuff out :D .
BossingAround · a year ago
Well, the correct path forward would be to wait for a large OSS player, like Red Hat, SUSE, Canonical, ..., to make the build secure.

Typically, Fedora and openSUSE have a policy that distributed packages (which includes container images) have to build with only packages from the repository, or explicitly added binaries during the build. So once you can `dnf/zypper install` something (or pull it from the vendor's container registry), you know the artifacts are trusted.

If you need to be on a bleeding edge, you deal with random internet crap shrug.

Of course a random OSS developer won't create offline-ready, trusted build artifacts; they don't have the infrastructure for it. And this is why companies like Red Hat or SUSE exist: a multi-billion dollar corporation is happy to pay someone to do the plumbing and turn a random artifact from the internet into a trusted, reproducible, signed artifact with CVE tracking and regular updates.

imglorp · a year ago
How is this different from JS pulling in tens of thousands of dependencies to display a web page?

In the 80s we envisioned modular, reusable software components you drop in like Lego bricks (we called it CASE then), and here we have it, success! Spoiler, it comes with tradeoffs...

judge2020 · a year ago
Probably mostly to retain organization (via separate git repos). In lieu of cloning stuff in the Dockerfile, you end up needing a pre-build instruction: "when you clone, use --recursive or run git submodule update --init to pull the other repos into your CWD".
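That pre-build step can be sketched end-to-end with throwaway local repos standing in for the remote ones (all paths and names here are made up for illustration; `protocol.file.allow` is only needed because the "remote" is a local directory):

```shell
set -e
cd "$(mktemp -d)"
# Stand-in "dependency" repo (in practice a remote GitHub URL)
git init -q dep
git -C dep config user.email you@example.com
git -C dep config user.name you
echo data > dep/f.txt
git -C dep add f.txt
git -C dep commit -qm init
# Superproject that vendors the dependency as a submodule
git init -q main
git -C main config user.email you@example.com
git -C main config user.name you
git -C main -c protocol.file.allow=always submodule add ../dep vendor/dep
git -C main commit -qm "add submodule"
# A fresh clone leaves vendor/dep empty unless you pass --recursive
# (or later run: git submodule update --init)
git -c protocol.file.allow=always clone -q --recursive main main-clone
test -f main-clone/vendor/dep/f.txt && echo "submodule fetched"
```

Same organizational split as cloning inside the Dockerfile, but the fetch happens before the build, where it can be mirrored and audited.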
idunnoman1222 · a year ago
Running macOS on Linux isn't even legal,* so calm down. How are real orgs meant to use this?
IAmLiterallyAB · a year ago
As long as it's on Apple hardware it's fine right? Or is there something else
01HNNWZ0MV43FF · a year ago
Because if you had large binaries in your repo, it would grow quickly every time they changed, and GitHub would charge you lots of money, right?
nsonha · a year ago
If you are pulling from a registry then it's already built, so your CI should not fail on docker build dependencies. Or is my understanding wrong?
dboreham · a year ago
People love them some mystery meat.
replete · a year ago
The only chance at GPU acceleration is passing through a supported dGPU (>= AMD RX 6xxx on 14.x, no chance with modern Nvidia) via PCI passthrough. Intel iGPUs work up to Comet Lake, and some Ice Lake, but anything newer will not work.

The Apple Silicon build of macOS is probably not going to be emulatable any time soon, though there is some early work on booting ARM Darwin.

Also, Intel VT-x is missing on AMD, so virtualization is busted on AMD hosts, although some crazy hacks with old versions of VirtualBox can make Docker kind of work through emulation.

jeroenhd · a year ago
In theory, someone could write a macOS display driver for libvirt/KVM/QEMU 3D acceleration, like the ones that exist for Windows and Linux. With those, (suboptimal) GPU performance would become available on just about any GPU.

AMD has its own VT-x alternative (AMD-V) that should work just fine. There are other challenges to getting macOS to boot on AMD CPUs, though, usually fixed by loading kexts and other trickery.

I don't really see the point of using Docker for running a full OS. Just distribute an OVA or whatever virtualisation format you prefer. Even a qcow2 with a bash script to start the VM would probably work.
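A launcher script along those lines can be tiny. A sketch, with a placeholder image path and resource sizes (a real macOS guest additionally needs OVMF firmware and OpenCore boot media, omitted here); by default it only prints the qemu command so you can inspect it, and boots only with RUN=1:

```shell
#!/bin/sh
# Hypothetical launcher shipped next to a distributed qcow2 image.
# Defaults below are illustrative, not tuned values.
IMG="${1:-macos.qcow2}"
CMD="qemu-system-x86_64 -machine q35 -accel kvm -cpu host -smp 4 -m 8G \
-drive file=$IMG,format=qcow2,if=virtio -display default"
if [ "${RUN:-0}" = "1" ]; then
    exec $CMD          # actually boot the VM
else
    echo "$CMD"        # dry run: show what would be executed
fi
```

That plus the qcow2 is the whole distribution; no Docker layer needed.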

bpye · a year ago
Virtualization.framework can also offer a paravirtual GPU [0] - it would definitely be interesting if someone reverse engineered that…

[0] - https://developer.apple.com/documentation/paravirtualizedgra...

steve1977 · a year ago
> Also Intel VT-x is missing on AMD, so virtualization is busted on AMD hosts

Wouldn’t that work with AMD-V?

replete · a year ago
Nope. There have only ever been Intel x86 Apple computers, so x86 Mac software is Intel-specific. Most things work fine on AMD, but some things don't work without hacks, such as digital audio workstations, some Adobe applications, etc. And you can't run hypervisors on an AMD hackintosh; the workaround for Docker is to install an old version of VirtualBox and make it emulate instead.
zamalek · a year ago
Yeah, I would expect it to. As far as I know, AMD has had better luck with hackintoshes and VMacs.
dang · a year ago
Related:

Docker-OSX: Run macOS VM in a Docker - https://news.ycombinator.com/item?id=34374710 - Jan 2023 (110 comments)

macOS in QEMU in Docker - https://news.ycombinator.com/item?id=23419101 - June 2020 (186 comments)

oldandboring · a year ago
I set this up a few months ago as an experiment. Worked pretty well until I discovered that for iMessage to work, the application phones home to Apple using your hardware IDs, and this project uses fake values. At that point I started spiraling down the Great Waterslide of Nope, slowly discovering that the fake values are flagged by Apple and they will, as a consequence, flag your iCloud ID as a potential spammer, limiting your access from other devices. Your only option is to use a hardware ID generator script they vaguely link out to, and you can just keep trying values until you find one that "works", but there's not actually a good signal that you've found one that works and isn't harming your iCloud reputation.

Worked really great otherwise, though. Very useful in a pinch.

judge2020 · a year ago
The "keep cycling HWIDs until one works" approach was also common for getting Hackintosh iMessage to work; you could check whether one works by going to checkcoverage.apple.com. I quickly realized it's easier to copy the serial from an old but real Mac.

But I think this tool is more useful for things like build scripts (that rely on proprietary macOS frameworks) than it is for actually using it like a personal computer.

nixpulvis · a year ago
Pro tip: In general, don’t use your main iCloud account (or other accounts) for security research.

xandrius · a year ago
I'd love to try and see if it's possible to simply build for iOS. Say Unity, React Native, etc.

This could be pretty awesome in terms of freedom, even if the build takes 5x longer.

shepherdjerred · a year ago
Cross-compiling is likely a better approach: https://github.com/tpoechtrager/osxcross

This is how Godot targets iOS: https://github.com/godotengine/build-containers/blob/main/Do...

Here's a Docker image with the tools preinstalled, though you'll need some tweaks to target iOS: https://github.com/shepherdjerred/macos-cross-compiler

While at RStudio (now called Posit), I worked on cross-compiling C/C++/Fortran/Rust on a Linux host targeting x86_64/aarch64 macOS. If you download an R package with native code from Posit Package Manager (https://p3m.dev/client/), it was cross-compiled using this approach :)

arcanemachiner · a year ago
I did this. I had to share my USB port over Docker somehow (black magic I guess, instructions in the repo) and I was able to build iOS apps and run them on an iPhone.
flawn · a year ago
What was the speed like?
ProfessorZoom · a year ago
That would be impressive if you could build for React Native iOS (with native Swift modules) and run it on a simulator inside this on a Windows machine.
arilotter · a year ago
you can! you can also run it on a real iOS device using this tech. it's explicitly documented in the repo :)
airstrike · a year ago
At glacial speeds, indubitably.
shortformblog · a year ago
I did an interview with Sick Codes a while back where he talked about his approach to this product: https://www.vice.com/en/article/akdmb8/open-source-app-lets-...

Also wanna point out the existence of OSX-PROXMOX, which does something similar for Proxmox home servers: https://github.com/luchina-gabriel/OSX-PROXMOX

I’ve personally been using the latter on my HP Z420 Xeon; it’s very stable, especially with GPU passthrough.

daft_pink · a year ago
This would be awesome for running iCloud sync on my homeserver. Currently, there is no good way to physically back up iCloud on a homeserver/NAS, because it only runs on Windows/Apple.
toomuchtodo · a year ago
This might assist you in syncing this data and then either storing locally or pushing elsewhere for backups:

https://github.com/steilerDev/icloud-photos-sync

https://github.com/icloud-photos-downloader/icloud_photos_do...

teh_hippo · a year ago
I've been working on a solution here that uses OSX-Docker & OSXPhotos. It's getting there, but I wanted a way to back-up all the info in iCloud, but also include the metadata changes. Turns out that iCloud doesn't update the raw photos. Makes sense, but not helpful for those who do back-ups and expected those changes to be there.
bm3 · a year ago
How would this help with that? What would this let you do that's different than just rsync'ing your iCloud folder from a connected Mac/PC to your NAS?
daft_pink · a year ago
The problem is that I would have to purchase a dedicated desktop with enough storage to hold all my iCloud files and iCloud no longer syncs to external drives, so it’s cost prohibitive to purchase a desktop Mac expressly for that purpose.
paranoidrobot · a year ago
> that's different than just rsync'ing your iCloud folder from a connected Mac/PC

My guess: Being able to run it on a non-Mac/Windows machine.

prmoustache · a year ago
Is the redistribution of macOS images allowed by the license, or is this project distributing illegal copies in plain sight on Docker Hub?
yao420 · a year ago
Idk, but Corellium virtualizes iOS instances and was sued by Apple before settling the case.
judge2020 · a year ago
Corellium was a big commercial player and made headlines. Anyone using this privately (and especially non-commercially) probably isn't at risk of action from Apple, although I wouldn't be surprised if Apple eventually tries to go after publicly hosted images.

meatjuice · a year ago
This is clearly illegal
diggan · a year ago
"Illegal" might be a bit strong; "against the EULA" is a bit more realistic, which may or may not be illegal depending on the context and the country involved.