This approach even allows the manufacturer to correct design flaws after the fact -- and let's face it, there will always be design flaws. For instance, my FW13 originally came with a very weak hinge for the screen. It was perfectly fine for most daily use and most people probably wouldn't care, but it meant I couldn't hold the laptop up without the screen tilting back. Well, FW corrected this for those customers who really did care by simply selling a new hinge for $24, so $24 and 10 minutes with a screwdriver later, I had a substantially more refined device! (And to clarify -- there was a defective hinge version in the early batches, and those were replaced free of charge. Mine was a slightly later version that, beyond lacking the stiffness I preferred, was not defective.)
I'm just not worried about this; LLMs don't ship.
If you really need consistency for the environment, let them own the machine, give them a stable base VM image, and pay for decent virtualization tooling that they run... on their own machine.
I have seen several attempts to move dev environments to a remote host. They invariably suck.
Yes - that means you need to pay for decent hardware for your devs, it's usually cheaper than remote resources (for a lot of reasons).
Yes - that means you need to support running your stack locally. This is a good constraint (and a place where containers are your friend for consistency).
Yes - that means you need data generation tooling to populate a local env. This can be automated relatively well, and it's something you need with a remote env anyways.
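That last point (data generation tooling to populate a local env) is easier than it sounds. A minimal sketch of what I mean, using a hypothetical `users` table and a throwaway SQLite database -- the schema and names are made up for illustration, not from any particular stack. The key trick is seeding the RNG so every developer's local env gets the same data:

```python
# Hypothetical local-env seed script: deterministic fake data into SQLite.
import random
import sqlite3

def seed(db_path=":memory:", n_users=50, rng_seed=42):
    """Populate a disposable DB with reproducible fake users."""
    rng = random.Random(rng_seed)  # fixed seed -> identical data on every machine
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, plan TEXT)"
    )
    plans = ["free", "pro", "enterprise"]
    rows = [(i, f"user{i}", rng.choice(plans)) for i in range(n_users)]
    conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

if __name__ == "__main__":
    conn = seed()
    print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])
```

And as noted above, you'd want roughly this same script even with a remote env -- the only difference is where it runs.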
---
The only real downside is data control (i.e., the company has less control over how a developer manages assets like source code). In my experience, the vast majority of companies should worry less about this: your value as a company isn't your source code in 99.5% of cases, it's the team that executes that source code in production.
If you're in the 0.5% of other cases... you know it and you should be in an air-gapped closed room anyways (and I've worked in those too...)
I wish this article (or Meta) were a bit clearer about the specific connection between the device settings and use and when humans get access to the images.
My settings are:
- [OFF] "Share additional data" - Share data about your Meta devices to help improve Meta products.
- [OFF] "Cloud media" - Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage.
I'm not sure whether my settings would prevent my media from being used as described in the article.
Also, it's not clear which data is being used for training:
- random photos / videos taken
- only use of "Meta AI" (e.g., "Hey Meta, can you translate this sign")
As much as I've liked my Meta Ray-Bans, I'm going to need clarity here before I continue using them.
TBH, if it were only use of Meta AI, I'd "get it" but probably turn that feature off (I barely use it as-is).