Just to be clear, as with LAION, the data set doesn't contain personal data.
It contains links to personal data.
The title is like saying that sharing a magnet link to a torrent of copyrighted material is distributing copyrighted material. Folks can argue whether that's true, but the discussion should at least be transparent.
I think the data set is generally considered to consist of the images, not the list of links for downloading the images.
That the data set aggregator doesn't directly host the images themselves matters when you want to issue a takedown (targeting the original image host might be more effective) but for the question "Does that mean a model was trained on my images?" it's immaterial.
It does matter? When implemented as a reference, the image can be taken down and will no longer be included in training sets*. As a copy, the image is eternal. What’s the alternative?
* Assuming the users regularly check the images are still being hosted (probably something that should be regulated)
The data set is a list of ("descriptive text", URL) tuples.
As with almost any URL, it is not in and of itself an image.
As an aside, this presents a problem for researchers because the links can resolve to different resources, or no resource at all, depending on when they are accessed.
Therefore this is not a static dataset on which a machine learning model can be trained in a guaranteed reproducible fashion.
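To make that concrete, here's a minimal sketch of what using such a data set looks like in practice. The two-column CSV layout and field names are assumptions for illustration, not the actual CommonPool format:

```python
import csv
import hashlib
import urllib.request

# Hypothetical index file: one ("descriptive text", URL) pair per row.
def download_samples(index_path):
    samples = []
    with open(index_path, newline="") as f:
        for caption, url in csv.reader(f):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    data = resp.read()
            except Exception:
                continue  # link rot: the URL no longer resolves
            # Hashing the fetched bytes makes the reproducibility problem
            # concrete: the same URL can return different content over time.
            samples.append((caption, data, hashlib.sha256(data).hexdigest()))
    return samples
```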
That's a distinction without a difference. Just as with LAION, anyone using this data set is going to be downloading the images and training on them, and the potential harms to the affected users are the same.
LAION was alleged to link to CSAM. If LAION didn't link and instead hosted/contained/distributed the actual files, I think there would be a much higher chance that someone distributing LAION could serve prison time, at least in the USA.
When the model is trained, are the links not resolved to fetch whatever they point to, so that's what goes into the model?
Secondly, privacy and copyright are different. Privacy is more a concern about how information is used; copyright is about credit and monetization for the author.
"Ladies and gentlemen of the jury, my client did not rob that bank. He only made a Google Maps link to directions to the bank, a link to an Imgur image containing the vault's combination, and a link to a Pastebin with instructions on how to disable the security system available. He merely packaged that information together and made it publicly available in a single source in a format only really useful to robbers for the purpose of robbery training. It's twoo hward to actually look at the information one is compiling and releasing to the public and to expect even a microscopically minuscule cursory amount of minimal effort to that end is unreasonable. He is clearly innocent."
I hope future functionality of haveibeenpwned includes a tool to search LLMs and training data for PII, based on the collected and hashed results of this sort of research.
Yep, that's why at the end of my sentence I referred to the results of research efforts like this that do the hard work of extracting the information in the first place.
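As a sketch of what such a tool might look like (everything here is hypothetical; haveibeenpwned exposes no such API today): the PII already extracted by research efforts like this is stored only as hashes, and a query is hashed and checked for membership.

```python
import hashlib

# Hypothetical index of PII extracted from a training set, stored as hashes.
def build_index(extracted_values):
    return {hashlib.sha256(v.strip().lower().encode()).hexdigest()
            for v in extracted_values}

def was_exposed(index, value):
    digest = hashlib.sha256(value.strip().lower().encode()).hexdigest()
    return digest in index

index = build_index(["jane@example.com", "078-05-1120"])
print(was_exposed(index, "Jane@Example.com"))  # True after normalization
```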
This is all public data. People should not be putting personal data on public image hosts and sites like LinkedIn if they did not want them to be scraped. There is nothing private about the internet and I wish people understood that.
> There is nothing private about the internet and I wish people understood that.
I don’t know that that is useful advice for the average person. For instance, you can access your bank account via the internet, yet there are very strong privacy guarantees.
I concur that what you say is a safe default assumption, but then you need a way for people not to mistrust all internet services just because everything is considered public.
It's important to know that generally this distinction is not relevant when it comes to data subject rights like GDPR's right to erasure: If your company is processing any kind of personal data, including publicly available data, it must comply with data protection regulations.
While I agree with your sentiment, there's a pretty good chance that at least some of this is, for example, data that inadvertently leaked while someone accidentally exposed an automatic index with Apache, or perhaps an asset manifest exposed a bunch of uploaded images in a folder or bucket that wasn't marked private for whatever reason. I can think of a lot of reasons this data could be "public" that would be well beyond the control of the person exposed. I also don't think that there's a universal enough understanding that uploading something to your WordPress or whatever personal/business site to share with a specific person, with an obscure unpublished URL is actually public. I think these lines are pretty blurry.
Edit: to clarify, in the first two examples I'm referring to web applications that the exposed person uses but does not control.
What's important is that we blame the victims instead of the corporations that are abusing people's trust. The victims should have known better than to trust corporations.
If this was 2010 I would agree. This is the world we live in. If you post a picture of yourself on a lamp post on a street in a busy city, you can't be surprised if someone takes it. It's the same on the internet and everyone knows it by now.
I have negative sympathy for people who still aren't aware that if they aren't paying for something, they are the something to be sold. This has been the case for almost 30 years now with the majority of services on the internet, including this very website right here.
Modern companies: We aim to create or use human-like AI.
Those same modern companies: Look, if our users inadvertently upload sensitive or private information then we can't really help them. The heuristics for detecting those kinds of things are just too difficult to implement.
> The victims should have known better than to trust corporations
Literally yes? Is this sarcasm? Are we in 2025 supposed to implicitly trust multi-billion dollar multi-national corporations that have decades' worth of abuses to look back on? As if we couldn't have seen this coming?
It's been part of every social media platform's ToS for many years that they get a license to do whatever they want with what you upload. People have warned others about this for years and nothing happened. Those platforms have already used that data prior to this for image classification, identification, and the like. But nothing happened. What's different now?
>People should not be putting personal data on public image hosts and sites like LinkedIn if they did not want them to be scraped.
So my choice in society is to either not have a job or get interviews, or to accept that I have no privacy in the modern world, being mined for profit by companies that lay off their workers anyway.
By the way, I was also recommended to make and show off a website portfolio to get interviews... sigh.
But that is information you intend to be public; you want it in Google, and in AI models as they replace traditional search engines. The only reason you put it on LinkedIn is for other people to find you, so be happy the LLM helps.
That is indeed what Justin.tv did, to much success. But that was because Justin had consented to it, just as anyone who posts something online consents to it being seen by anyone.
Your analogy doesn't hold. A 'hidden camera' would be either malware that does data exfiltration, or the company selling/training on your data outside of the bounds of its terms of service.
A more apt analogy would be someone recording you in public, or an outside camera pointed at your wide-open bedroom window.
Does this analogy really apply? Maybe I'm misunderstanding, but it seems like all of this data was publicly available already, and scraped from the web.
In that case, it's not a 'hidden camera'... users uploaded this data and made it public, right? I'm sure some of it was exposed due to misconfiguration or whatever (like we saw with Tea), but it seems like most of this was uploaded by the user to the clear web. I'm all for "Don't blame the victims", but if you upload your CC to Imgur, I think you deserve to have to get a new card.
Per the article "CommonPool ... draws on the same data source: web scraping done by the nonprofit Common Crawl between 2014 and 2022."
GDPR has plenty of language related to reasonability, cost, feasibility, and technical state of the art that probably means LLM providers do not have to comply in the same way, say, a social platform might.
There is currently no effective method for unlearning information, especially not when you don't have access to the original training datasets (as is the case with open-weight models); see:
Rethinking Machine Unlearning for Large Language Models (https://arxiv.org/html/2402.08787v6)
Only if it contains personal data you collected without explicit consent ("explicit" here means literally asking: "I want to use this data for that purpose, do you allow this? Y/N").
Also people who have given their consent before need to be able to revoke it at any point.
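For illustration, a minimal sketch of the bookkeeping those two rules imply, with hypothetical names throughout; this is not a compliance framework, and nothing here is legal advice:

```python
from dataclasses import dataclass
import datetime

# Consent is recorded per purpose and can be revoked at any time.
@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                  # e.g. "use this data for model training"
    granted_at: datetime.datetime
    revoked_at: datetime.datetime | None = None

    def revoke(self):
        self.revoked_at = datetime.datetime.now(datetime.timezone.utc)

def may_process(records, subject_id, purpose):
    # Allowed only with an explicit, unrevoked grant for this exact purpose.
    return any(r.subject_id == subject_id and r.purpose == purpose
               and r.revoked_at is None for r in records)
```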
I WISH this mattered. I wish data breaches actually carried consequences. I wish people cared about this. But people don't care. Right up until you're targeted for ID theft, fraud or whatever else. But by then the causality feels so diluted that it's "just one of those things" that happens randomly to good people, and there's "nothing you can do". Horseshit.
We should also stop calling it ID theft. The identity is not stolen; the owner still has it. Calling it ID theft shifts the responsibility from the party the fraud is actually committed against (often banks or other large entities) onto an innocent third party.
> Calling it ID theft is moving the responsibility from the one that a fraud is against (often banks or other large entities)
The victim of ID theft is the person whose ID was stolen. The damage to banks or other large entities pales in comparison to the damage to those people.
It’s not clear to me how this is a data breach at all. Did the researchers hack into some database and steal information? No?
Because afaik everything they collected was on the public web. So now researchers are being lambasted for having data in their sets that others released.
That said, masking obvious numbers like SSNs is low-hanging fruit. Trying to scrub every piece of public information about a person that can identify them is insane.
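As a sketch of that low-hanging fruit, a simple pattern-based masker for US-style SSNs; the regex is a rough illustration, and a real filter would also check context and valid number ranges:

```python
import re

# Mask anything shaped like a US SSN before publishing or training.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_ssns(text):
    return SSN_PATTERN.sub("XXX-XX-XXXX", text)

print(mask_ssns("Applicant SSN: 078-05-1120, ref 123-45-6789"))
# -> Applicant SSN: XXX-XX-XXXX, ref XXX-XX-XXXX
```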
Criminal liability with a minimum of 2 years served for executives, and fines for the company that allowed the breach amounting to 110% of total global revenue, would see cybersecurity taken a lot more seriously in a hurry.
A stolen identity destroys the life of the victim, and there's going to be more than one. They (every single involved CEO) should have all of their assets seized, to be put in a fund that is used to provide free legal support to the victims. Then they should go to a low-security prison and have mandatory community service for the rest of their lives.
They probably can't be redeemed and we should recognise that, but that doesn't mean they can't spend the rest of their life being forced to be useful to society in a constructive way. Any sort of future offense (violence, theft, assault, anything really) should mean we give up on them. Then they should be humanely put down.
> That's a distinction without a difference.
That seems like a pretty big difference to me.
> It contains links to personal data.
“It’s not his actual money, it’s just his bank account and routing number.”
My reading is that the article is about AI being trained on personal data. That is a major breach of many countries' legislation.
And AI is 100% being trained on copyrighted data too, breaking another, different set of laws.
That shows how much big tech is just breaking the law and using money and influence to get away with it.
> He only made a Google Maps link to directions to the bank
It wouldn’t be bank robbery.
We need to better educate people on the risks of posting private information online.
But that does not shield these corporations from criticism of how they are handling data and "protecting" people's privacy.
Especially not when those companies are using dark patterns to convince people to share more and more information with them.
If you post something publicly, you can't complain that it is public.
Of course privacy law doesn't necessarily agree with the idea that you can just scrape private data, but good luck getting that enforced anywhere.
One alternative to archive.is for this website is to disable JavaScript and CSS.
Another alternative is the website's RSS feed, which works anywhere without CSS or JavaScript, without CAPTCHAs, and without tracking pixels.
For example, to retrieve only the entry about DataComp CommonPool: https://news.ycombinator.com/item?id=44716006
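A minimal sketch of pulling one entry from a feed without a browser; the feed URL is an assumption (WordPress-based sites commonly expose /feed/), so substitute the real one for the site at hand:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Assumed feed location for the site in question.
FEED_URL = "https://www.technologyreview.com/feed/"

with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
    root = ET.fromstring(resp.read())

# Print only the entry whose title mentions CommonPool.
for item in root.iter("item"):
    title = item.findtext("title", default="")
    if "CommonPool" in title:
        print(title)
        print(item.findtext("link", default=""))
```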
Unfortunately they don't provide information regarding their training sets (https://help.mistral.ai/en/articles/347390-does-mistral-ai-c...) but I think it's safe to assume it includes DataComp CommonPool.
China must be laughing.
But isn't that a breach of GDPR?