> “After launch, Parker Solar Probe will detect the position of the Sun, align the thermal protection shield to face it and continue its journey for the next three months, embracing the heat of the Sun and protecting itself from the cold vacuum of space.”
What a phenomenal piece of engineering! The article was not only fascinating to read as a non-astronomer/lay person, but it also makes it all look like child’s play, the way they decided what materials to use and how.
> “And to withstand that heat, Parker Solar Probe makes use of a heat shield known as the Thermal Protection System, or TPS, which is 8 feet (2.4 meters) in diameter and 4.5 inches (about 115 mm) thick.”
So is someone going to be bothering someone else about TPS Reports [1] over the expected seven-year span of this probe? Sorry, I couldn’t resist making that reference! :)
[1]: https://en.m.wikipedia.org/wiki/TPS_report
> One key to understanding what keeps the spacecraft and its instruments safe, is understanding the concept of heat versus temperature. Counterintuitively, high temperatures do not always translate to actually heating another object.
> In space, the temperature can be thousands of degrees without providing significant heat to a given object or feeling hot. Why? Temperature measures how fast particles are moving, whereas heat measures the total amount of energy that they transfer. Particles may be moving fast (high temperature), but if there are very few of them, they won’t transfer much energy (low heat). Since space is mostly empty, there are very few particles that can transfer energy to the spacecraft.
So space can have a high temperature, but since matter is so far apart, very little heat actually gets transferred.
Seems the answer is that you don't need matter for heat radiation.
The effective temperature of a vacuum is the temperature of whatever is on the other side, because that determines whether radiated energy is emitted or absorbed. For most of space, the "other side" is the cosmic microwave background, which has a temperature of about 3K. So yes, space is generally pretty frickin' cold.
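Since radiation is what actually sets the shield's temperature, a back-of-envelope Stefan–Boltzmann estimate is easy to run. This is my own sketch, not NASA's thermal model: it assumes a flat blackbody plate that absorbs sunlight on its front face and re-radiates from both faces, using the ~6.1 million km closest-approach distance quoted elsewhere in this thread.

```python
import math

L_SUN = 3.828e26      # solar luminosity, W
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(distance_m: float) -> float:
    """Blackbody equilibrium temperature of a two-sided flat plate facing the Sun."""
    flux = L_SUN / (4 * math.pi * distance_m ** 2)   # W/m^2 at this distance
    return (flux / (2 * SIGMA)) ** 0.25              # absorbed flux = emission from 2 faces

# At ~6.1 million km from the Sun's center:
t = equilibrium_temp(6.1e9)
print(f"{t:.0f} K")   # on the order of 1600 K, i.e. well over 1300 C
```

The real shield runs cooler on the front face thanks to the reflective white coating (low absorptivity), and vastly cooler on the back, which is the whole point of the TPS.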
>The article was not only fascinating to read as a non-astronomer/lay person
yeah, this article was a masterpiece of science writing. all of the difficult concepts were boiled down into very fruitful analogies and metaphors which clarified things succinctly.
Not a rocket scientist, but this article is very well written for an audience with a minimal understanding of Physics and Chemistry. Articles like these help high school students realise that what they're learning now is not really a waste of time.
But really, it's cool that they're using carbon-carbon protection similar to that which was originally developed for the leading edges of the Space Shuttle. And I really want to know how they built foamed carbon for the interior.
I'm guessing that they're using white ceramic paint on top instead of a reflective foil shield (like the Webb uses) because the foil would be shredded by the solar particles.
http://carbonfoam.com/
I'm curious about the foam, too. Normally, foam contains a lot of air. But that kind of foam will blow itself apart in a vacuum. How do they make foam where all the air pockets are replaced with vacuum? Is vacuum-filled foam a better or worse insulator than air-filled foam?
It could be open-cell foam instead of closed-cell foam. In earth atmosphere the cells would still be filled with air, but the air would be able to leave without blowing the foam apart.
> Normally, foam contains a lot of air. But that kind of foam will blow itself apart in a vacuum.
In the video, Thermal Protection System Engineer Betsy Congdon says it's 97% "air."
I can't say whether it's actually air, or she's simplifying things for the general public or not.
She also says twice that "water" is used in the radiators. But I'd have to believe that NASA's using something that absorbs/dissipates heat a little more efficiently. Perhaps whatever it is will end up in desktop gaming rig cooling systems eventually.
https://www.youtube.com/watch?v=cMNQeCWT09A
Heat-resistant material will eventually reach equilibrium where the back side is almost as hot as the front side unless it's cooled somehow.
The part of the answer here that NASA always seems to skip (perhaps because it's not as fun as talking about all the cool heat-resistant technology) is that the vehicle is in an orbit that will only take it into the corona for a short time. While the closest approach is very close, its aphelion is (at closest) about the orbit of Venus. This gives it time to cool down after each corona encounter.
The corona can be expected to extend out to ~12 solar radii, which suggests about a day of really severe conditions. (The data pass is 30 hours, which suggests that's about right.) That's why it needs to be a really good heat shield.
Its smallest orbit has a period of 88 days.
>This all has to happen without any human intervention, so the central computer software has been programmed and extensively tested to make sure all corrections can be made on the fly.
I'd love to get some deeper insight into how NASA writes and tests software, I can only guess it's a million miles from how most of us work. Anyone know of any good talks, articles from engineers there?
The PDF linked to in the discussion is no longer there, but I found it on standards.nasa.gov here: https://standards.nasa.gov/standard/nasa/nasa-gb-871913
There are also some interesting product management related guidelines from NASA, like this from 2014: https://snebulos.mit.edu/projects/reference/NASA-Generic/NPR...
Hello! I’m a FSW dev at NASA Langley. As others have said, the talks from the FSW workshop are a great start. If you want to see a well-used framework, check out CFS (https://cfs.gsfc.nasa.gov)
Off topic, but I've always been interested by the way that government agencies almost exclusively choose acronyms for their software. Meanwhile private companies (especially in the last decade or two) almost always choose unrelated, single words.
It initially seems kind of ridiculous to me that everything has an acronym, but I suppose it's no more ridiculous than choosing a name that sounds like a Pokemon. Maybe less so.
There are a lot of really good links, but to be honest 99% of the secret to writing bulletproof code is “write the most simple, boringest program you can”.
Which is not to say that what NASA and its contractors do isn’t cool or that they don’t spend ungodly amounts of time and money on testing and verification, but you also don’t load one line of code more than is absolutely necessary onto a machine that absolutely must work at all times.
It’s an important lesson to learn and a good skill to exercise from time to time, but honestly it’s also something that doesn’t apply to most of our work as software engineers. For most software most people are willing to knock a couple of nines off the reliability of a piece of software in exchange for higher-quality output, lower costs, and more features. If my data analysis pipeline fails one time in ten because an edge case can use all the memory in the world or some unexpected malformed input crashes the thing but yields more useful output than if I kept it simple and hand-verified every possible input, well, that can be a fine trade off. If your machine learning model for when to retract the solar panel occasionally bricks and leaves the panel out to be destroyed, that’s less acceptable.
> you also don’t load one line of code more than is absolutely necessary
Coincidentally, I spent the weekend banging around with an old TRS-80 Model 100, and it's been very interesting to see what workarounds and compromises were made to conserve space.
For example, the machine ships with no DOS at all, so if you're working with cassettes or modem only, you don't have that overhead.
If you do add a floppy drive, when you first plug it in, you flip some DIP switches on the drive and it acts like an RS-232 modem, and you can download a BASIC program from the drive into the computer that, when run, generates a machine-language DOS program and loads it out of the way into high memory.
I don't have one of those sewing machine drives, so I went with a third-party DOS, which weighs in at... wait for it... 747 BYTES.† An entire disk controller with command line interface in 2½ tweets.
† http://bitchin100.com/wiki/index.php?title=TEENY.CO_MANUAL
The part that I find the most intriguing is "corrections can be made on the fly".
I can see how you would ensure reliability through proper requirements specification, a good software development process, separate independent implementations and extensive verification.
However, every time I read a popsci article about space flight software, they talk about this capability to push new code to the spacecraft while it is in flight.
I'm really curious to learn what this looks like in practice (technical details). Do they really have the ability to do an "ad-hoc" upload and execution of arbitrary code on these systems? If so, how are the ad-hoc programs tested and verified?
There is usually a piece of software running on the machine which basically just does this - allows you to command an image upload to the SSD, do a checksum of the file, then install it if all goes well. There is also usually a simpler version of the software on a redundant SSD or partition which the onboard computer will install if it detects that the software that is currently installed is malfunctioning.
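The upload-checksum-install flow described above can be sketched in a few lines. This is purely illustrative (the real flight software, its command set, and its storage layout are not public in this thread): the key property is that a corrupt upload can never replace the currently installed image.

```python
import hashlib

def install_image(storage: dict, slot: str, blob: bytes, expected_sha256: str) -> bool:
    """Stage an uploaded image, verify its digest, and only then commit it."""
    if hashlib.sha256(blob).hexdigest() != expected_sha256:
        return False                # corrupt upload: current image stays in place
    storage[slot] = blob            # commit to the inactive slot
    return True

flash = {"golden": b"known-good recovery image"}   # simple stand-in for the redundant SSD
image = b"new flight software build"

ok = install_image(flash, "primary", image, hashlib.sha256(image).hexdigest())
bad = install_image(flash, "primary", image, "0" * 64)
print(ok, bad)   # True False -- a bad digest never activates
```

The "golden" recovery image mirrors the simpler fallback software mentioned above: if the installed software misbehaves, the onboard computer can always reboot into the known-good copy.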
My understanding is that some spacecraft launch with beta/alpha equivalent software. Correct me if I'm wrong, but I believe that the rovers do this, with simple software installed first, then more complicated versions installed once they know everything is working.
It's somewhat similar to updating your iPhone, but instead you use a huge dish to do the transmission and the bitrate is pretty horrendous.
I'm going to need a definition of "ad-hoc" here; no-one "deploys straight to production" on a spacecraft. Any patches have to be thoroughly tested on simulators and models of the spacecraft on earth before they are transmitted.
In this case, the "corrections on the fly" refer to all of the real-time responses that the software makes without ground involvement. In the case of a solar limb sensor detecting the sun, the probe will abandon its data collection for that near approach, and go into an emergency response that has been made as straightforward and deterministic as possible, to maximize the chances of recovery for all single-fault and some double-fault scenarios.
To answer your question about software upload, the PSP has 3 redundant CPUs (primary, hot spare, backup spare), and each has multiple boot images. To upload software, the team uploads it to an inactive image of the backup spare CPU, promotes it to hot spare for long enough to collect the data it needs, reboots it into the new image, and then rotates it into the primary role, which is a seamless transition unless something goes wrong, and then the new hot spare takes over again within a second. Once they're sure the software is working, they can update the other CPUs. Before any of this, new software is tested on identical hardware set up on the ground with physics simulations.
See also, "Solar Probe Plus Flight Software - An Overview" from http://flightsoftware.jhuapl.edu/files/_site/workshops/2015/
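The rotation described above can be sketched as a tiny state machine. This is an illustrative toy, not PSP's actual fault-management logic, and it compresses the intermediate hot-spare checkout step: the backup spare is rebooted into the new image and promoted straight to primary, while the other CPUs each step down one role as fallbacks.

```python
from dataclasses import dataclass

@dataclass
class Cpu:
    name: str
    role: str    # "primary" | "hot_spare" | "backup_spare"
    image: str

def rotate_in(cpus: list, new_image: str) -> list:
    """Boot the backup spare into new_image and rotate it into the primary role."""
    backup = next(c for c in cpus if c.role == "backup_spare")
    backup.image = new_image                  # reboot the spare into the new image
    demote = {"primary": "hot_spare", "hot_spare": "backup_spare"}
    for c in cpus:
        if c is backup:
            c.role = "primary"                # promoted after checkout
        else:
            c.role = demote[c.role]           # everyone else steps down one rung
    return cpus

cpus = [Cpu("A", "primary", "v1"), Cpu("B", "hot_spare", "v1"), Cpu("C", "backup_spare", "v1")]
rotate_in(cpus, "v2")
print([(c.name, c.role, c.image) for c in cpus])
# C now runs v2 as primary; A and B still hold the known-good v1 as fallbacks
```

The appeal of this scheme is that every step is reversible: until the team updates the other CPUs, a misbehaving new image can be swapped out for a machine still running proven software.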
From previous articles, remote updates seem to be a core part of spacecraft software/operating systems. I even recall one situation where a spacecraft had a REPL built in that was used to fix a problem (slowly) remotely! They also have multiple levels of operation and watchdog functionality. I have no direct experience with that beyond following news about spacecraft.
This redundant software and hardware setup typically isn't necessary when humans aren't involved. The space shuttle system is similar to what you will find on a Boeing or Airbus aircraft. Redundant software, written by different people in different countries with completely different cultures in different languages (on purpose), running on multiple machines with different hardware and voting on the decisions to be made.
It is complete overkill when "all" you're going to lose is a robot and some pride; with a space probe you want to have lots of features, and this level of safety is very restrictive on development effort.
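The N-version voting described above reduces to something very simple at its core. A minimal sketch (my own illustration, not any avionics codebase): several independently written implementations compute the same decision, the majority wins, and no majority means falling back to a safe mode.

```python
from collections import Counter

def vote(*decisions):
    """Return the majority decision from redundant implementations."""
    winner, count = Counter(decisions).most_common(1)[0]
    if count <= len(decisions) // 2:
        raise RuntimeError("no majority - enter safe mode")
    return winner

# Three independently written controllers; one has a fault.
print(vote("retract_panel", "retract_panel", "extend_panel"))  # retract_panel
```

The hard part, of course, is not the voting but guaranteeing the implementations fail independently, which is why the teams are deliberately separated by language, country, and culture.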
More than likely, the spacecraft in question is written in C or C++ with the help of RTEMS or VxWorks. It is probably running on a radiation-hardened, very slow processor.
also, I would imagine that there would be a strong bias towards reuse...which leads you to long term standardization of not just language but also CPU architecture.
Hello! FSW dev from NASA Langley here. We do try to do reuse as much as possible, but small satellites (CubeSats) are starting to change that. There are so many new pieces of hardware and so much experimentation going on to see what’s feasible in space. There are new RTOS frameworks being developed both by commercial and government (CFS, F-prime). If you’re interested in this in particular there is a conference called SmallSat which hosts the talks from previous years. https://smallsat.org
> If Earth was at one end of a yard-stick and the Sun on the other, Parker Solar Probe will make it to within four inches of the solar surface.
91cm and 10cm, to save anyone else doing the conversion. Also, it seems to understate the closeness: Closest approach is 6.1 million km, which is 1/24th of 1 astronomical unit, but four inches is 1/9th of a yard-stick.
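The arithmetic behind that observation is quick to check, using the figures quoted above:

```python
AU_KM = 149.6e6       # one astronomical unit, km
closest_km = 6.1e6    # closest approach quoted above, km

fraction = closest_km / AU_KM            # ~1/24.5 of an AU
inches_on_yardstick = fraction * 36
print(f"{inches_on_yardstick:.1f} inches")   # ~1.5 inches, not four
```

So on a faithful yardstick scale the probe gets to about an inch and a half from the Sun, which makes the feat even more impressive than the article's analogy suggests.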
>Another challenge came in the form of the electronic wiring — most cables would melt from exposure to heat radiation at such close proximity to the Sun. To solve this problem, the team grew sapphire crystal tubes to suspend the wiring, and made the wires from niobium.
Are these wires on the outside of the spacecraft? And what about the silicon in all the electronics this thing must be carrying? The cooling surface would also get a bit hot (it would always absorb some energy at some rate), so how does the coolant carry heat away from the probe?
Importantly, they likely have very different thermal conductivities and specific heats. They probably also need to have reasonably similar thermal expansion coefficients so heating and cooling cycles do not cause them to strain and break.
Not a scientist, but I assume using "radiative heat transfer", i.e. a hot surface shielded from the sun emitting thermal radiation away from the spacecraft.
As I understand, temperature is a measure of how fast atoms and molecules are vibrating, and not a measure of how much energy per unit area of contact can be transferred in a unit time.