Then I configured the TCP server, rtl_tcp, to forward the samples to my workstation. This allowed me to put the Pi in a location better suited to receiving transmissions, i.e. not next to RF-emitting servers and power supplies. Then, using Gqrx with a remote server, I analysed the results.
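For reference, the setup described above can be sketched roughly as follows. The hostname and frequency-independent defaults are assumptions on my part, not details from the article (1234 is rtl_tcp's default port):

```shell
# On the Pi: serve raw IQ samples from the RTL-SDR dongle over TCP.
# -a 0.0.0.0 binds to all interfaces so the workstation can connect.
rtl_tcp -a 0.0.0.0 -p 1234

# On the workstation: in Gqrx, choose the RTL-SDR spectrum server
# device and point it at the Pi, i.e. a device string along the lines of:
#   rtl_tcp=raspberrypi.local:1234
```

This is hardware-dependent, so treat it as a configuration sketch rather than something to paste blindly.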
[1] https://www.joechin.com/raspberry-pi-and-sdr-getting-started...
ssl(3): [STILL INCOMPLETE] Manual page documenting the OpenSSL SSL/TLS library.
But there's an easy way to fix it. Browsers should support a hash attribute for <script> tags, so that instead of

  <script src="https://code.jquery.com/jquery-2.1.1.min.js"></script>

sites could instead say

  <script src="https://code.jquery.com/jquery-2.1.1.min.js" hash="sha256:874706b2b1311a0719b5267f7d1cf803057e367e94ae1ff7bf78c5450d30f5d4"></script>
This would also significantly reduce the risks from use of http instead of https, and from weaknesses in https itself.

http://www.chromestatus.com/features/6183089948590080
http://status.modern.ie/subresourceintegrity
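For what it's worth, this idea shipped as Subresource Integrity (the feature the two links above track), though the standardized attribute is `integrity` with a base64-encoded digest rather than a hex `hash` value. A sketch of generating the value from a locally downloaded copy of the script (the local filename is my assumption):

```shell
# Compute an SRI value: binary digest of the file, then base64-encode it.
# sha384 is the algorithm most commonly seen in SRI examples.
openssl dgst -sha384 -binary jquery-2.1.1.min.js | openssl base64 -A

# The resulting tag (digest abbreviated here):
# <script src="https://code.jquery.com/jquery-2.1.1.min.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

The browser refuses to execute the script if the fetched bytes don't match the digest, which is exactly the CDN-compromise protection the comment asks for.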
http://www.gnu.org/software/bash/manual/html_node/Pipelines....
"If pipefail is enabled, the pipeline’s return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully"
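The quoted behaviour is easy to demonstrate with a pipeline whose first command fails:

```shell
# Without pipefail, a pipeline's status is that of the last command:
bash -c 'false | true'; echo $?                    # prints 0

# With pipefail, the rightmost non-zero status wins:
bash -c 'set -o pipefail; false | true'; echo $?   # prints 1
```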
I feel like there's also some missing layer of infrastructure here.
itch.io, like a lot of sites (HN being another), is meant to act as a host of user-generated content, over which the site takes a curatorial but not editorial stance. (I.e. the site has a Terms of Use; and has moderators that take things down / prevent things from being posted according to the Terms of Use; but otherwise is not favoring content according to the platform's own beliefs in the way that e.g. a newspaper would. None of the UGC posted "represents the views" of the platform, and there's no UGC that the platform would be particularly sad to see taken down.)
I feel like, for such arms-length-hosted UGC platforms, there should be a mechanism to indicate to these "brand protection" services (and phishing/fraud-detection services, etc) that takedown reports should be directed first-and-foremost at the platform itself. A mechanism to assert "this site doesn't have a vested interest in the content it hosts, and so is perfectly willing to comply with takedown requests pointed at specific content; so please don't try to take down the site itself."
There are UGC-hosting websites that brand-protection services already treat this way (e.g. YouTube, Facebook, etc) — but that's just institutional "human common sense" knowledge held about a few specific sites. I feel like this could be generalized, with a rule these takedown systems can follow, where if there's some indication (in a /.well-known/ entry, for example) that the site is a UGC-host and accepts its own platform-level abuse/takedown reports, then that should be attempted first, before trying to get the site itself taken down.
(Of course, such a rule necessarily cannot be a full short-circuit for the regular host-level takedown logic such systems follow; otherwise pirates, fraudsters, etc would just pretend their one-off phishing domains are UGC platforms. But you could have e.g. a default heuristic that if the takedown system discovers a platform-automated-takedown-request channel, then it'll try that channel and give it an hour to take effect before moving on to the host-level strategy; and if it can be detected from e.g. Certificate Transparency logs that the current ownership of the host is sufficiently long-lived, then additional leeway could be given, upgrading to a 24-72hr wait before host-takedown triggers.)
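To make the idea concrete, an entirely hypothetical /.well-known/ entry might look like the following. To be clear, the path, field names, and format here are inventions for illustration; no such standard exists today:

```
GET https://itch.io/.well-known/ugc-takedown

{
  "platform_type": "ugc-host",
  "abuse_report_endpoint": "https://itch.io/abuse/report",
  "expected_takedown_time": "1h"
}
```

A brand-protection service that fetched this before filing a host-level complaint would know there's a faster, narrower channel to try first.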