Complete loss of control of the entire Starlink constellation (or any megaconstellation) for days at a time would be an intense event. Any environmental cause (e.g. a severe solar event) would be catastrophic ground-side as well. Starlink satellites will decay and re-enter pretty quickly if they lose attitude control, so it's a bit of a race between collisions and drag. Starlink solar arrays are quite large drag surfaces, and the resulting orbital decay probably makes collisions less likely. I would not be surprised if the satellites are designed to deorbit themselves after some period without ground contact. I'm sure SpaceX has done some interesting math on this, and I'd love to see it.
Collision avoidance warnings are public (with an account): https://www.space-track.org/ But importantly, they are intended to be actionable, conservative warnings a few days to a week out. They overstate the probability based on conservative assumptions like those in this paper (estimates of cross-sectional area, uncertainty in orbital knowledge from ground radar, ignorance of attitude control or future maneuvers). Operators like SpaceX will take these and use their own high-fidelity knowledge (from onboard GPS) to get a less conservative, more realistic probability assessment. These probabilities invariably decrease over time as the uncertainty gets lower. Starlink satellites are constantly under thrust to stay in a low orbit with a big draggy solar array, so a "collision avoidance maneuver" for them is really just a slight change to the thrust profile.
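To put rough numbers on that, here's a toy sketch (made-up values and the standard small-object approximation to the 2-D encounter-plane integral, assuming an isotropic uncertainty; not anything an operator actually uses): holding the nominal miss distance fixed, shrinking the position uncertainty makes a genuinely safe pass look much safer.

```python
import math

def collision_probability(miss_m: float, sigma_m: float, hard_body_radius_m: float) -> float:
    """Rough 2-D encounter-plane collision probability, assuming an isotropic
    combined position uncertainty (sigma) and a hard-body radius << sigma.
    Illustrative only; real screenings use full covariances."""
    r, d, s = hard_body_radius_m, miss_m, sigma_m
    return (r ** 2 / (2 * s ** 2)) * math.exp(-d ** 2 / (2 * s ** 2))

# Same nominal 500 m miss and 10 m combined hard-body radius; better knowledge
# of where the objects actually are drives the estimate toward zero.
for sigma in (500, 200, 100, 50):
    print(f"sigma = {sigma:3d} m  ->  Pc ~ {collision_probability(500, sigma, 10):.1e}")
```

(The monotone drop holds once the uncertainty is already comparable to or smaller than the nominal miss distance; with very coarse tracking the computed number can actually come out deceptively small, the so-called dilution of probability.)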
Interesting stuff in the paper, but I'm annoyed at the title. I hate it when people use Kessler syndrome as fear-bait against some of the more responsible actors.
I’ve got some cool ideas for atmospheric reentry, but I’d imagine there are all kinds of permits needed?
Can you provide documentation demonstrating this requirement in the United States? It is widely understood that no such requirement exists.
There's no need to compromise with any requirement, this was entirely voluntary on Apple's part. That's why people were upset.
> I can't believe how uninformed
Oh the irony.
CSAM scanning takes place in the cloud with all the major players, and the hash lists only cover the worst of the worst material out there.
What Apple (and others) do today is allow files to be scanned unencrypted on the server.
The feature Apple wanted to add would scan files on the device and flag anything that matched a known hash.
A flagged file could then be decrypted on the server and checked by a human; everything else was encrypted in a way that could not be looked at.
If you had iCloud disabled it could do nothing.
The intent was to protect data and children, and to reduce the amount of server-side processing needed to analyse everything.
Everyone lost their minds, yet it was all clearly laid out in the papers Apple released on it.
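As a very loose sketch of the flow described above (emphatically not Apple's actual design, which used a perceptual hash called NeuralHash plus blinded hashes and a match threshold so the device itself never learned whether anything matched; the hash set and "encryption" below are placeholders):

```python
import hashlib

# Placeholder hash list; the real lists were curated sets of known material.
KNOWN_HASHES = {hashlib.sha256(b"known-bad-example").hexdigest()}

def prepare_upload(photo: bytes) -> dict:
    """Toy on-device check before upload. SHA-256 and the reversed-bytes
    'ciphertext' are stand-ins for a perceptual hash and real client-side
    encryption with cryptographic safety vouchers."""
    matched = hashlib.sha256(photo).hexdigest() in KNOWN_HASHES
    return {
        "ciphertext": photo[::-1],  # placeholder: server cannot read this
        "reviewable": matched,      # only matched files could ever be decrypted for human review
    }

print(prepare_upload(b"holiday photo")["reviewable"])      # False: stays opaque to the server
print(prepare_upload(b"known-bad-example")["reviewable"])  # True: eligible for human review
```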
What if, in addition to storage, I'd like to use some form of cloud compute on my data? If my device preprocesses/anonymizes my data, and the server involved uses homomorphic encryption so that it also can't read my data, is that not also good enough? It's frustrating to see how far above and beyond Apple has taken this simple service to actually preserve user privacy.
I get that enabling things by default triggers some old wounds. But I can understand the argument that it's okay to enable off-device use of personal data IF it's completely anonymous and privacy preserving. That actually seems very reasonable. None of the other mega-tech companies come close to this standard.
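To give a feel for what "the server computes on data it can't read" means, here's a toy additively homomorphic (Paillier-style) sketch with tiny, insecure parameters; production schemes are lattice-based and far more capable, but the privacy property is the same in spirit:

```python
import math, random

# Toy Paillier keypair with tiny, insecure primes, just to show the
# additively homomorphic property: multiplying ciphertexts adds plaintexts,
# so a server can combine values it cannot decrypt.
p, q = 1789, 1867                      # real keys are thousands of bits
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Client encrypts two values; the server sums them without holding any key.
a, b = encrypt(1234), encrypt(765)
summed_on_server = (a * b) % n2
print(decrypt(summed_on_server))       # 1999 == 1234 + 765
```

The server only ever sees ciphertexts and a ciphertext-domain result; decryption requires the private key, which never leaves the client.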
I suppose it's probably some combination of: CI is configured in-band in the repo, PRs are potentially untrusted, CI uses the latest state of the config on a potentially untrusted branch, we still want CI on untrusted branches, CI needs to run arbitrary code, and CI has access to secrets and privileged operations.
Maybe it's too many degrees-of-freedom creating too much surface area. Maybe we could get by with a much more limited subset, at least by default.
I've been doing CI stuff in my last two day jobs. In contrast, we worked only on private repos with private collaborators, and we explicitly designated CI as trusted.
I remember early GitLab Runner use when I had a (seemingly) standard build for a Docker image. There wasn't any obvious standard way to do that. There were recommendations for dind, just giving shell access, etc. There's so much customization that it's hard to decide what's safe for a protected/main branch vs. user branches.
I don't have a solution. But I think it would be better if, by default, CI engines were a lot less configurable and forced users to adjust their repo and build to match some standard configurations, like:
- Run `make` in a Debian Docker image and extract this binary file/.deb after installing some apt packages
- Run `docker build .` and push the image somewhere
- Run `go build` in a standard golang container
And really made you dance a little more to do things like "just run this bash script in the repo", restricting those kinds of builds to protected branches/special setups (roughly like the sketch below).
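A rough sketch of what that kind of enforcement could look like (hypothetical recipe names and config shape, written as Python rather than any particular CI engine's config format):

```python
# Hypothetical gatekeeper a CI engine could run before executing a job:
# unprotected branches only get a small menu of canned recipes; anything
# custom (arbitrary shell, privileged Docker, secrets) needs a protected branch.
STANDARD_RECIPES = {"debian-make", "docker-build-push", "go-build"}

def allowed(job: dict, branch_is_protected: bool) -> bool:
    """Decide whether this job may run on this branch."""
    if job.get("recipe") in STANDARD_RECIPES and not job.get("script"):
        return True                      # canned recipe, no custom shell: fine anywhere
    return branch_is_protected           # everything else only on protected branches

print(allowed({"recipe": "go-build"}, branch_is_protected=False))       # True
print(allowed({"script": "./do-stuff.sh"}, branch_is_protected=False))  # False
print(allowed({"script": "./do-stuff.sh"}, branch_is_protected=True))   # True
```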
Having the CI config in the same source control tree is dangerous and hard to secure. It would probably be better to have some kind of headless branch, like GitHub Pages uses, that is just for CI config.
Y2K was a real problem. The end-of-the-world blackouts and planes falling from the sky were sensationalism, but there were real issues and most of them got fixed. Not trying to take away from this very interesting story of corrupt cronyism, but there were serious people dealing with serious problems out there. "Remember Y2K? Nothing happened!" is a super toxic lesson to take away from a rare success where people came together and fixed something instead of firefighting disasters.
The Snoo is very expensive and easy to pass down or buy used. I think they probably screwed up by selling it outright. You can rent the Snoo, which is probably a better model for everyone. This is kind of a janky way to pull back some of the rental revenue they lost by selling a durable product that people only need for a few months.
It feels gross, I get it. But it's effectively a $100-per-child fee, which is quite reasonable given the benefits. And there's no realistic way to charge for that other than a subscription for the premium (non-safety) stuff. The alternative is to keep developing new models with new features and adding crap people don't need. One thing I love about the original Snoo is that it works fine without an Internet connection or app. I used the app and it was great, but it's nice to know that when you travel or lose power, it can still rock your baby and soothe them. I hope that's still the case if there's a subscription involved.
This was before async/generators were added to JS and callback hell was quite real. I wanted to shape it in the way I’d learned to program in Visual Basic. Very human readable. The result is no longer useful, but it was a fun goal to have the compiler compile itself.