- Sharing an encryption key among all team members. You need to be able to add and remove people with access, and the only way is to rotate the key and share the new one with just the current set of people.
- Version control is pointless: you just see that the vault changed, with no hint as to what was actually updated inside it.
- Unless you are really careful, forgetting to encrypt the vault even once before committing changes means you have to rotate all your secrets.
> Sharing encryption key for all team members
You're enrolling a particular user's public key and encrypting a symmetric key with it, not generating a single encryption key that you distribute. You can roll the underlying symmetric key at any time, and git-crypt works transparently for all users, since each of them receives the new symmetric key on pull, encrypted with their own asymmetric key.
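The enrollment scheme described above can be sketched as follows. This is a minimal toy, not git-crypt's actual implementation: the XOR-with-hash `wrap()` stands in for real GPG public-key encryption, and the names (`KeyRing`, `wrapped_keys`) are invented for illustration. It only shows the data flow: one symmetric key for the repo, published once per enrolled user, re-wrapped for the remaining users after a rotation.

```python
import hashlib
import secrets


def wrap(sym_key: bytes, user_pub: bytes) -> bytes:
    """Toy 'encrypt to a user': XOR the key with a hash of their key material.

    Real git-crypt uses GPG public-key encryption here; this stand-in
    illustrates the data flow, not the security properties.
    """
    pad = hashlib.sha256(user_pub).digest()
    return bytes(a ^ b for a, b in zip(sym_key, pad))


unwrap = wrap  # XOR is its own inverse


class KeyRing:
    """One symmetric key for the repo, wrapped once per enrolled user."""

    def __init__(self) -> None:
        self.users: dict[str, bytes] = {}       # name -> public key material
        self.sym_key = secrets.token_bytes(32)  # the key that encrypts files

    def add_user(self, name: str, pub: bytes) -> None:
        self.users[name] = pub

    def wrapped_keys(self) -> dict[str, bytes]:
        # What effectively lives in the repo: the current symmetric key,
        # wrapped separately for each enrolled user.
        return {name: wrap(self.sym_key, pub)
                for name, pub in self.users.items()}

    def rotate(self) -> None:
        # New symmetric key; remaining users pick it up transparently,
        # because wrapped_keys() publishes a fresh copy for each of them.
        self.sym_key = secrets.token_bytes(32)
```

Removing someone is then: delete them from `users`, call `rotate()`, and their old wrapped copy no longer decrypts anything current.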
> Version control is pointless
git-crypt solves this for local diff operations. For anything web-based like git{hub,lab,tea,coffee}, it still sucks.
> - Unless you are really careful, just one time forgetting to encrypt the vault when committing changes means you need to rotate all your secrets.
With git-crypt, if your .gitattributes is set correctly (to include a file) and git-crypt is not working or can't encrypt it, the commit will fail, so there's no risk there.
You can, of course, put secrets in files which you don't choose to encrypt. That is, I suppose, a risk of any solution, regardless of in-repo vs. out-of-repo encryption.
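As a concrete sketch of that setup, assuming secrets live under a `secrets/` directory (the path and the GPG identity are placeholders; adjust to your repo):

```shell
git-crypt init                      # generate the repo's symmetric key

# Tell git which files get the git-crypt clean/smudge filter applied.
cat >> .gitattributes <<'EOF'
secrets/** filter=git-crypt diff=git-crypt
EOF

git-crypt add-gpg-user alice@example.com   # wrap the key for Alice's GPG key
git-crypt status                           # show which files will be encrypted
```

Anything not matched by a `.gitattributes` pattern is committed in plaintext, which is exactly the residual risk described above.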
They uploaded the full "widely-used" training dataset, which happened to include CSAM (child sexual abuse material).
While the title of the article is not great, your wording here implies that they purposefully uploaded some independent CSAM pictures, which is not accurate.
There is important additional context, of course, which mitigates (and should remove) any criminal legal implications, and which should also get Google to unsuspend his account in a reasonable timeframe. But what happened is also reasonable: Google does automated scans of all data uploaded to Drive, caught CP images being uploaded (presumably via hashes from something like NCMEC?), and banned the user. That's a totally reasonable thing to do. Google should have an appeal process where a reasonable human can look at it and say, "Oh shit, the guy just uploaded 100m AI training images and 7 of them were CP. He's not a pedo. Unban him, ask him not to do it again, and report this to someone."
The headline frames it as though the story were "A developer found CP in AI training data from Google, and Google banned him in retaliation for reporting it." That's a totally disingenuous framing of the situation.