I'm not sure how the progress of institutional and amateur observations compare. Obviously the big guys benefit from the same technological advancement, but I don't know whether the fraction of new objects discovered by amateurs has been growing or not. I suspect the odds of the first interstellar object being found by an amateur were still pretty long.
By my back-of-the-envelope math, burning 600,000 kcal should produce a couple hundred kg of CO2. You could also make that crossing in less than a third of the time under sail, with about a third of the daily calorie consumption, for maybe a tenth of the CO2 output.
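For what it's worth, here's the envelope. A minimal sanity check assuming pure carbohydrate (glucose) metabolism; the numbers are standard chemistry, not anything from the original figures:

```python
# Back-of-envelope CO2 from metabolizing 600,000 kcal, assuming pure
# glucose oxidation: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O.
KCAL_PER_MOL_GLUCOSE = 686      # approximate heat of combustion of glucose
G_CO2_PER_MOL_GLUCOSE = 6 * 44  # 6 mol CO2 at ~44 g/mol

kcal = 600_000
mol_glucose = kcal / KCAL_PER_MOL_GLUCOSE
kg_co2 = mol_glucose * G_CO2_PER_MOL_GLUCOSE / 1000
print(round(kg_co2))  # -> 231, i.e. "a couple hundred kg"
```

Fat metabolism emits somewhat less CO2 per kcal, so a mixed diet lands a bit lower, but still comfortably in the couple-hundred-kg range.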
Oh how the mighty have fallen. I've only worked on one major project with a Microchip MCU (PIC32MK), but their documentation and support were terrible. No detailed documentation, just a driver library with vague, sketchy API docs and disgustingly bug-ridden code. Deadlocking race conditions in the CAN driver, overflow-unsafe comparisons in timers, just intern-level dumbassery that you couldn't fix without reverse engineering the undocumented hardware. Oh, and of course what documentation did exist was split into dozens of separate PDFs, individually served, many of which were 404 unless you went hunting for older versions or other chips in the product line. It certainly cured me of any desire to touch another Microchip product.
But when I did go past the required courses and into math for math majors, things got a lot better. I just didn't find that out until I was about to graduate.
My third way is that I learn math by learning to "talk" in the concepts, which I think is much more common in physics than in pure mathematics (I gravitated to physics because I loved math but couldn't stand learning it the way math classes wanted me to). For example, my path to thinking of functions as vectors went something like this:
* first I learned about vectors in physics and multivariable calculus, where they were arrows in space
* at some point in a differential equations class (while calculating inner products of orthogonal Hermite polynomials, iirc) I realized that integrals were like giant dot products of infinite-dimensional vectors, and I was annoyed that nobody had just told me that, because I would have gotten it instantly.
* then I had to repair my understanding of the word "vector" (and grumble about the people who had overloaded it). I began to think of vectors as the N=3 case and functions as the N=infinity case of the same concept. Around this time I also learned quantum mechanics, where thinking about a list of binary values as a vector ( |000> + |001> + |010> + etc, for example) was common, which made this easier. It also helped that in mechanics we built larger vectors out of tuples of smaller ones: a spatial vector always has N=3 dimensions, and a pair of spatial vectors is a single 2N = 6-dimensional vector (albeit with different properties under transformations), which is much easier to think about than a single vector in R^6. It was also easy to compare with programming, where there is little difference between an array with 3 elements, an array with 100 elements, and a function that computes a value on any positive integer on request.
* once this is the case, the Fourier transform, Laplace transform, etc are trivial consequences of the model. Give me a basis of orthogonal functions and of course I'll write a function in that basis, no problem, no proofs necessary. I'm vaguely aware there are analytic limitations on when it works but they seem like failures of the formalism, not failures of the technique (as evidenced by how most of them fall away when you switch to doing everything on distributions).
* eventually I learned some differential geometry and Lie theory and found that addition is actually a pretty weird concept: in most geometries you can't "add" vectors that are far apart; only things that are locally linear can be added. So I had to repair my intuition again: a vector is a local linearization of something that might be macroscopically nonlinear, and the linearity is what makes it possible to add and scalar-multiply it. Also, there is functionally no difference between composing vectors with addition or with multiplication; they're just notations.
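The "integrals are giant dot products" step above is easy to check numerically. This sketch (my own illustration, using NumPy's physicists'-convention Hermite polynomials) discretizes the weighted inner product so it literally becomes a finite-dimensional dot product:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# Discretize: an integral inner product is just a big dot product of
# sampled function values (times the grid spacing).
x = np.linspace(-10, 10, 200_001)
dx = x[1] - x[0]
w = np.exp(-x**2)                # Hermite weight function

H2 = hermval(x, [0, 0, 1])       # physicists' H_2(x) = 4x^2 - 2
H3 = hermval(x, [0, 0, 0, 1])    # H_3(x) = 8x^3 - 12x

# <H2, H3> and <H2, H2> as dot products of ordinary vectors:
ip_23 = np.dot(H2 * w, H3) * dx  # ~ 0: orthogonal
ip_22 = np.dot(H2 * w, H2) * dx  # ~ 2^2 * 2! * sqrt(pi) ≈ 14.18
```

The sampled functions really are just N=200,001 vectors, and the integral formulas are the dx→0 limit of the same dot product.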
At no point in this were the axioms of vector spaces (or normed vector spaces, Banach spaces, etc) useful at all for understanding. I still find them completely unhelpful and would love to read books on higher mathematics that omit all of the axiomatizations in favor of intuition. Unfortunately the more advanced the mathematics, the more formalized the texts on it get, which makes me very sad. It seems very clear that there are two (or more) distinct ways of thinking that are at odds here; the mathematical tradition heavily favors one (especially since Bourbaki, in my impression) and physics is where everyone who can't stand it ends up.
Right?! In my path through the physics curriculum, this whole area was presented in one of two ways. It went straight from "You don't need to worry about the details of this yet, so we'll just present a few conclusions that you will take on faith for now" to "You've already deeply and thoroughly learned the details of this, so we trust that you can trivially extend it to new problems." More time in the math department would have been awfully useful, but somehow that was never suggested by the prerequisites or advisors.
My first guess was that the beam of Cf252-emitted neutrons, when it hits the U235, triggers new neutrons moving in the same direction, rather than in random directions. This would ensure that any tertiary neutrons would join the crowd and help the amplification while not just heating the system up.
Or, maybe that's the point? It's a not-quite-critical collection of U235 that is pushed even closer to criticality by the Cf252, multiplying the Cf252's neutron flux by "up to 30 times". But, if the U235 neutrons trigger the same emissions as the Cf252 neutrons, then wouldn't that require a razor's edge of criticality?
https://en.wikipedia.org/wiki/Californium_neutron_flux_multi...
Yup. From the device description in its decommissioning plan:
Keff is the effective neutron multiplication factor; Keff = 1 is criticality.
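The "up to 30 times" figure and the razor's-edge worry can be quantified with the standard subcritical multiplication formula (a textbook reactor-physics result, not something from the linked plan):

```python
# Each source neutron spawns k_eff fission neutrons, those spawn k_eff^2,
# and so on, so for k_eff < 1 the total flux is the geometric series
#   M = 1 + k_eff + k_eff^2 + ... = 1 / (1 - k_eff).
def multiplication(k_eff: float) -> float:
    return 1.0 / (1.0 - k_eff)

# A 30x boost corresponds to k_eff ≈ 0.967: close to, but stably short of,
# criticality at k_eff = 1.
print(multiplication(0.967))  # ≈ 30
```

So no razor's edge is needed: the multiplier diverges only as Keff approaches 1, and a fixed subcritical Keff gives a fixed, finite amplification.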