alex1sa commented on Is legal the same as legitimate: AI reimplementation and the erosion of copyleft   writings.hongminhee.org/2... · Posted by u/dahlia
wvenable · 6 days ago
There's a couple of different issues here that all get mangled together. If you're producing effectively the same expression that's infringement. You draw Captain America from memory, it's still Captain America, and therefore infringement. If you draw Captain Canada by tracing around Captain America that's also infringement but of a different type.

When it comes to software, again it's the expression that matters -- literally the actual source code. Software that does the same thing but uses entirely different code to do it is not the same expression. Like with the tracing example above, if you read the original source code then it's harder to claim that it isn't the same expression. This is why clean room implementations are necessary.

alex1sa · 6 days ago
The "clean room" concept gets really blurry with LLMs in practice. I build a SaaS product that uses AI to process unstructured voice input and map it to structured form fields. During development, we looked at how other tools solve similar problems — not their source code, but their public behavior and APIs.

Now imagine an LLM trained on every GitHub repo doing the same thing at scale. The model has "seen" the source, but the output is statistically generated, not copied. Is that a clean room? The model never "read" the code the way a human would, but it clearly learned patterns from it.

I think the practical answer is that clean room as a legal concept was designed for a world where reimplementation was expensive and intentional. When an LLM can do it in minutes from a spec, we need a different framework entirely.

alex1sa commented on Meta’s AI smart glasses and data privacy concerns   svd.se/a/K8nrV4/metas-ai-... · Posted by u/sandbach
chwahoo · 13 days ago
I'll confess that I like my Meta Ray-Ban glasses: I love using them to listen to podcasts at the pool/beach or while riding my bike, and it's cool to snap a quick picture of my kids without pulling out my phone.

I wish this article (or Meta) were a bit clearer about the specific connection between the device settings, how the device is used, and when humans get access to the images.

My settings are:

- [OFF] "Share additional data" - Share data about your Meta devices to help improve Meta products.

- [OFF] "Cloud media" - Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage.

I'm not sure whether my settings would prevent my media from being used as described in the article.

Also, it's not clear which data is being used for training:

- random photos / videos taken

- only use of "Meta AI" (e.g., "Hey Meta, can you translate this sign")

As much as I've liked my Meta Ray-Bans, I'm going to need clarity here before I continue using them.

TBH, if it were only use of Meta AI, I'd "get it" but probably turn that feature off (I barely use it as-is).

alex1sa · 7 days ago
The core issue here is that "to provide the service" in privacy policies has become a catch-all that can justify almost anything. I work on web products in the EU, and we had to redesign our entire data pipeline for GDPR compliance. The key principle is "data minimization": you collect only what's strictly necessary and delete it after processing. Meta's approach seems to be the opposite: collect everything, process in the cloud, and use vague language to keep the door open for secondary uses like labeling and training.

The fact that turning off "Cloud media" might not actually prevent your data from being sent to Meta's servers for inference is a textbook dark pattern. Users see a toggle and assume they have control; in practice, the toggle only controls one specific processing path while others remain active.

Under GDPR, this would likely fail the "informed consent" test, since consent must be specific, unambiguous, and freely given. But enforcement is slow, and fines are just a cost of doing business at Meta's scale.
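To make the toggle problem concrete, here is a minimal hypothetical sketch in Python. The setting names mirror the toggles quoted in the comment above, but the routing logic is purely an assumption for illustration, not Meta's actual code: one user-facing flag gates one processing path, while another path never consults it.

```python
# Hypothetical illustration of the dark pattern described above.
# All names and routing decisions are assumptions, not Meta's real code.

from dataclasses import dataclass


@dataclass
class Settings:
    share_additional_data: bool  # the "Share additional data" toggle
    cloud_media: bool            # the "Cloud media" toggle


def upload_for_storage(settings: Settings) -> bool:
    # This path honors the toggle: media goes to cloud storage only if allowed.
    return settings.cloud_media


def upload_for_inference(settings: Settings) -> bool:
    # This path ignores the toggle entirely: an assistant query still sends
    # the image off-device, justified as necessary "to provide the service".
    return True


settings = Settings(share_additional_data=False, cloud_media=False)
print(upload_for_storage(settings))    # toggle respected here
print(upload_for_inference(settings))  # data still leaves the device
```

The point of the sketch is that from the user's side both paths look like they should be covered by the same switch; data minimization would instead require every path to consult the consent flag before any upload.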
