There is an economic asymmetry between operating a frontier model that people pay to use and being one of the people paying so the operator can keep improving it.
Also, from the outside, we only know about the advances that get shipped/put on servers. Presumably, a lot more promising advances are uncovered than are shipped. Maybe they don't fit the product, maybe they are not ready, or maybe they provide a competitive advantage if used and improved internally without disclosure.
So there is a potential growing development/information/frontier asymmetry, of unknown magnitude and velocity.
I’ve never seen a tool more accessible for people of all backgrounds and abilities. It should be celebrated. And yet “engineers” are worried about their identities.
I wish this article (or Meta) were clearer about the specific connection between the device settings, how the device is used, and when humans get access to the images.
My settings are:
- [OFF] "Share additional data" - Share data about your Meta devices to help improve Meta products.
- [OFF] "Cloud media" - Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage.
I'm not sure whether my settings would prevent my media from being used as described in the article.
Also, it's not clear which data is being used for training:
- any photos/videos taken, or
- only interactions with "Meta AI" (e.g., "Hey Meta, can you translate this sign?")
As much as I've liked my Meta Ray-Bans, I'm going to need clarity here before I continue using them.
TBH, if it were only Meta AI usage, I'd "get it," but I'd probably just turn that feature off (I barely use it as-is).