edit:
get current working directory
get username
get hostname
concatenate the last 3 together
obfuscate(/encrypt?) this string
send the result as an HTTP request to 121.42.217.44 (the value of the base64 string)
one example: https://docs.moodle.org/22/en/Administration_via_command_lin...
This can't possibly be true. Since Mozilla is encrypting the file, they can also decrypt it (and must do so when the recipient downloads it).
Edit: I was wrong, but will leave this comment because the explanation is useful.
https://send.firefox.com/download/<$file_identifier>/#<$encryption_key>
Data after the # in the URL should not be sent to the HTTP server by the client. Encryption/decryption is presumably handled in the user's browser by JavaScript.
The statement about not having the ability to access the contents of the files is perhaps somewhat misleading, as they do control the JavaScript that either creates the key or is given access to the key when someone retrieves the file (by reading it off the end of the URL).
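A small sketch of why the key stays client-side, using Python's `urllib.parse` on a made-up URL (the identifier and key here are placeholders, not real Firefox Send values):

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical Firefox Send-style URL with the decryption key in the fragment.
url = "https://send.firefox.com/download/abc123/#secretkey"

parts = urlsplit(url)
# The fragment (everything after '#') stays with the client; only the
# scheme, host, path, and query are used to build the HTTP request.
request_url = urlunsplit((parts.scheme, parts.netloc, parts.path, parts.query, ""))
decryption_key = parts.fragment

print(request_url)     # https://send.firefox.com/download/abc123/
print(decryption_key)  # secretkey
```

So the server only ever sees the download identifier; the in-browser JavaScript reads the key from `location.hash` to decrypt the file locally.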
"Banner The contents of the specified file are sent to the remote user before authentication is allowed. If the argument is ``none'' then no banner is displayed. By default, no banner is displayed."
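That quote describes the `Banner` option in `sshd_config`; a minimal sketch of the corresponding configuration (the file paths are common defaults, not universal):

```
# /etc/ssh/sshd_config (path varies by system)
# Show the contents of this file before authentication:
Banner /etc/issue.net
# Or disable the banner explicitly (this is the default):
#Banner none
```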
You share:
1. Files you Pinned (think of as torrent seeding)
2. Files you have in your IPFS cache
a. Cache files are added to when you request IPFS content
b. The garbage collection triggers and removes non-pinned content at regular intervals
3. The default files that are added to a new IPFS repo (unless you removed them or init'ed using the appropriate option to not include them)

To answer the GP's question: as long as you don't pin child porn, and you don't look for child porn, there's a 0% chance in IPFS-land.
a. Cache files are added to when _something causes a request for_ IPFS content
The distinction being that "something" is not always a direct action from the user.

If content on IPFS (i.e., a web page?) can reference and load content from other addresses (an assumption), could someone end up "hosting" (from the cache) something they would not expect to be, until garbage collection clears it?
If this seems far-fetched, a submission to HN the other day surprised a few people[1] because it made an HTTP request to an adult website to check whether the visitor had an active session (but did not display any content).
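For reference, the sharing behaviour above roughly maps onto these go-ipfs CLI commands (exact flags may vary between versions; `<cid>` is a placeholder):

```shell
# List what you are currently seeding (pinned content):
ipfs pin ls --type=recursive

# Remove a pin so garbage collection can reclaim the content:
ipfs pin rm <cid>

# Trigger garbage collection manually, clearing unpinned cached blocks:
ipfs repo gc

# Initialise a repo without the default bundled files:
ipfs init --empty-repo
```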
Having one profile, or even an entire dedicated browser just for Twitter/FB ensures the login is not spilled over into other sites. If you're surfing the web heavily, I would recommend spawning a new private window so cookies, and other artefacts are not bleeding into your session.
It sounds like common sense, but many people have cookies and login information persisting for years at a time in their browsing sessions. The Mozilla Firefox team are planning to introduce a feature which makes compartmented surfing sessions a lot more user-friendly by separating sessions into tabs. Currently, the 'profiles' feature of Firefox is not user friendly and requires a bit of tinkering with the filesystem.
You might also want to consider using a different theme[2] in each profile to help avoid mixing them up if you're running multiple instances simultaneously.
My initial use case for this was adding the Let's Encrypt staging certificate authority to the trusted root certificate authorities in a profile used only for testing.
[1] https://developer.mozilla.org/en-US/docs/Mozilla/Command_Lin... [2] https://addons.mozilla.org/en-US/firefox/themes/
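As a concrete sketch, separate profiles can be created and launched from the command line without touching the filesystem directly (the profile name here is an illustrative assumption):

```shell
# Create a dedicated profile for one site (one-off setup):
firefox -CreateProfile twitter

# Launch it as its own instance, isolated from the default session:
firefox -P twitter -no-remote
```

`-no-remote` prevents the new instance from attaching to an already-running Firefox, so the two sessions keep separate cookies and logins.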
And if someone wants to DDoS you, you're a lot more vulnerable than a major provider like Fastmail.
Personally, I use a hybrid solution: I use Gandi's SMTP servers for outbound and inbound mail, but I run my own IMAP server for unlimited storage under my control.
They do something similar with their webmail service, but the data is encrypted so it can't be read by a third party.
https://www.fastmail.com/about/reportabuse.html (last paragraph)
EDIT: Fastmail is fairly priced (for me) and I like the features they offer, but I wish they wouldn't do this (or rather, I wish they would do the same for the SMTP service as they do for the webmail service).
Of course, if the compression/encryption method has some way of checking the integrity of the output (of equal strength to the signature), then signing first would be completely redundant.
EDIT: so in many scenarios, signing both first and last would have no advantage, for example if you get to decide which implementation will be used by the sender and the recipient (most package managers?).
Although the article talks about encrypt+sign versus sign+encrypt, the same argument goes for compress+sign versus sign+compress.
Is there a non-obvious problem with sign, then compress/encrypt, then sign again? (Overcomplicated or unnecessary?)
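A sketch of the sign → compress → sign-again idea, using HMAC-SHA256 as a stand-in for a real signature scheme and zlib as the stand-in for the compress/encrypt step (the key and function names are illustrative assumptions, not any particular protocol):

```python
import hashlib
import hmac
import zlib

KEY = b"shared-secret"  # hypothetical key; a real system would use asymmetric signatures

def sign(data: bytes) -> bytes:
    """Produce a 32-byte HMAC-SHA256 tag over the data."""
    return hmac.new(KEY, data, hashlib.sha256).digest()

def seal(message: bytes) -> bytes:
    """Sign the plaintext, compress, then sign the compressed blob."""
    inner = message + sign(message)       # inner signature binds the plaintext
    compressed = zlib.compress(inner)     # stand-in for compress/encrypt
    return compressed + sign(compressed)  # outer signature over the sealed blob

def open_sealed(blob: bytes) -> bytes:
    """Verify in reverse order: outer tag first, then decompress, then inner tag."""
    compressed, outer = blob[:-32], blob[-32:]
    # Reject tampered input before feeding it to the decompressor.
    if not hmac.compare_digest(outer, sign(compressed)):
        raise ValueError("outer signature mismatch")
    inner = zlib.decompress(compressed)
    message, inner_sig = inner[:-32], inner[-32:]
    if not hmac.compare_digest(inner_sig, sign(message)):
        raise ValueError("inner signature mismatch")
    return message

assert open_sealed(seal(b"hello")) == b"hello"
```

The outer check lets the recipient reject tampered input before decompressing it, while the inner check binds the signature to the actual plaintext; the cost, as the question suggests, is extra complexity and an extra verification step.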
https://dn42.net/Home