If you were designing the JWST today, you would probably also put onboard a GPU. That could be programmed to do some of the scientific work in space to reduce the amount of data that needs to be downloaded.
This would allow new types of science (for example, far shorter exposure times plus stacking to do super-resolution and remove the effect of vibrations in the spacecraft structure). It would also provide redundancy in case the data downlink malfunctions or is degraded - you can still get lots of useful results back over a much smaller engineering link if you have preprocessed the data.
Obviously, if that GPU malfunctions, or there isn't sufficient power or cooling for it due to other failures, data can still be directly downloaded as it is today.
Basically, it adds a lot of flexibility.
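To make the stacking idea concrete, here's a rough sketch of the shift-and-add version (pure NumPy, toy data, nothing JWST-specific; real super-resolution would need sub-pixel registration rather than the whole-pixel shifts assumed here):

```python
import numpy as np

def stack_frames(frames, offsets):
    """Shift-and-add a series of short exposures.

    frames  : list of 2D arrays (the individual short exposures)
    offsets : list of (dy, dx) whole-pixel jitters measured per frame,
              e.g. from centroiding a guide star
    Returns the mean of the re-aligned frames.
    """
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, offsets)]
    return np.mean(aligned, axis=0)

# toy example: 10 short, noisy exposures of the same scene, each jittered a little
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[32, 32] = 100.0                     # one bright "star"
frames, offsets = [], []
for _ in range(10):
    dy, dx = rng.integers(-2, 3, size=2)  # pointing jitter in pixels
    frames.append(np.roll(truth, (dy, dx), axis=(0, 1))
                  + rng.normal(0.0, 1.0, truth.shape))
    offsets.append((dy, dx))

stacked = stack_frames(frames, offsets)
print(stacked[32, 32])  # ~100, with noise knocked down by roughly sqrt(10)
```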
It's hard to say -- I might disagree on whether to do something like that. Most often you want to be able to keep the raw data as long as you can, in anticipation that some day a future technique or a different calibration/processing pipeline may improve on what you can do now, or that you might find (or go looking for) something you didn't expect.
Especially for a scientific instrument whose usage patterns, operating conditions, and discoveries may change over time (and so may the sensors). Note that for an instrument like this, the researcher time spent studying the data afterwards is many times greater than the time spent taking it. Its value per hour is incredibly high, so you want to keep it in as future-usable a state as possible.
Once you process something on board for a particular purpose and discard the raw data (setting aside very low-level, practically mandatory steps like integrity checks), you lose the chance to reprocess it in the future.
So unless you are really transmission constrained, I think they would prefer not to do it -- also because of the additional complications involved. I think once you get into "higher functions" becoming an obligation of the telescope's operations, those satellite / defense contractors etc. who have to launch and operate the thing start making requirements that are very difficult to live by.
I don’t know… that’s a lot of power and heat that needs to be dealt with for an onboard GPU. Heat is probably the biggest factor, as it might be enough to affect the image sensors (speculation). Plus, needing a radiation-hardened GPU might be an issue. Just for data-reprocessing purposes, I’d want to have copies of the rawest data terrestrially.
> If you were designing the JWST today, you would probably also put onboard a GPU. That could be programmed to do some of the scientific work in space
That's just not how science is done in astronomy. People want the raw data to analyze it for decades in different contexts. There's not much that can be done onboard that would make you not want to copy that data back.
You can't just throw any consumer microprocessor into a machine subject to extreme vibrations, heat, cold, and radiation.
The article alludes to laser comms - NASA is developing[0] laser-based comms systems (as opposed to radio frequency) which would allow gigabit-speed downloading of data. Hardening this technology to send back more raw data is probably a lot more straightforward than trying to do image processing on board.
[0] https://www.nasa.gov/mission_pages/tdm/lcrd/index.html
I'm curious for anyone who may know the answer... with no mention of encryption, are these streams free for anyone with the equipment to receive? Conversely, what kind of security is in place on JWST for command updates to ensure that some rogue group couldn't cause mischief and send it commands?
There is a decent amateur community for receiving satellite transmissions. I'm not super knowledgeable on it, but two resources that may interest you:
Scott Tilley, who gained a lot of recognition in the past year for analyzing radio signals to see how Russia was using satellites in Ukraine: https://twitter.com/coastal8049
An amateur group that revived two-way communications with an abandoned satellite: https://sservi.nasa.gov/articles/isee-3-reboot-project/#:~:t....
Is 25 GHz something an amateur could practically expect to capture from Earth without ridiculous (or improbable) electronics? My understanding was that a frequency this high is likely to be almost completely absorbed by the atmosphere.
I remember talking to someone who works on the Deep Space Network. The commands they send to their devices are definitely encrypted and have checksums so nobody can inject bogus commands. It was super important that the device receives the correct command at the right time. Not sure if their downstream is secured.
This is an exceptionally old document, referencing the 2013 launch expectation, but it contains a bunch of interesting information on the platform database and the communications segment. [1]
Apparently, they have to have accurate ranging to receive from JWST. The interesting portion:
"Ranging is required for JWST, using alternate ground stations in the southern and northern hemisphere. For LEO and L2 missions the accuracy of the ranging is dependent on the tracking of the spacecraft across the sky. For the JWSTs L2 orbit, 21 days of tracking equals about 15 minutes of tracking for a LEO spacecraft."
Reading this article makes me realise that it would take a relatively small let-up in funding for such projects to permanently lose the knowledge and expertise it takes to build these fantastic machines.
How about just higher and higher R&D spending instead, so our choice of tech development is governed by public welfare rather than military utility? We've already had technologies like nuclear fusion develop in inferior directions because they benefit adjacent military technologies.
This is the type of article we (data junkies) need to see and read! Highly interesting to see the transmission rates, storage capacity, and other data-related considerations that went into the design of the James Webb telescope.
Now, if only there were more funds allocated to antennas/dishes on the DSN (Deep Space Network) [0] to be able to service all ongoing and future space missions, that would be great.
[0] https://eyes.nasa.gov/dsn/dsn.html
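As a back-of-envelope feel for the rates involved (both numbers below are my assumptions, roughly remembered, not official figures):

```python
# Rough downlink budget; both inputs are assumptions, not official JWST numbers.
ka_rate_mbps = 28.0          # assumed Ka-band science downlink rate, Mbit/s
contact_hours_per_day = 8.0  # assumed total DSN contact time per day

gb_per_day = ka_rate_mbps * 1e6 * contact_hours_per_day * 3600 / 8 / 1e9
print(f"~{gb_per_day:.0f} GB of science data per day under these assumptions")
# At these numbers you get on the order of 100 GB/day, which is why onboard
# storage only has to bridge the gaps between DSN contacts, not weeks of data.
```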
Not antennas - lasers and lenses. For intersatellite links, optical is much better, in particular for something as far away as the JWST. Note that NASA is planning to have significant optical links as part of the DSN.
The Menzel guy kind of addresses that at the end, though I wish he had gone into more detail.
My guess is his attitude is: why use less-tested technology when the capacity of the Ka-band link does the job dependably?
As more probes go up and antenna time grows shorter, increasing link capacity will become more necessary. Until then, why experiment on a 10 billion dollar project?
> In addition, according to Carl Hansen, a flight systems engineer at the Space Telescope Science Institute (the science operations center for JWST), a comparable X-band antenna would be so large that the spacecraft would have trouble remaining steady for imaging.
Why would a large antenna make the spacecraft less steady? What's the mechanism behind it?
The antenna is steered to point towards the DSN antenna on the earth. A larger moving mass would make it harder to maintain telescope pointing while the antenna is moving.
In reality, the antenna pointing is 'paused' during each science observation, unless pointing is needed due to the length of the observation.
Oh, the antenna doesn't just need to point in the general direction of Earth, it needs to point at somewhere on the surface. That makes sense; having a narrower beam would save power and achieve higher bitrates.
Does this mean it has only a 12-hour window to transmit? Or are there multiple antennas on Earth?
It's because large, directive antennas are still 'dish' style and have to be mechanically pointed at the target (Earth based DSN receiving antennas). That pointing causes vibrations and potentially a shifting center of mass.
Phased arrays allow beam pointing without mechanical movement, but are very expensive for large high gain antennas.
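If anyone wants to see what "steering without mechanical movement" means, here's a textbook toy calculation for a uniform linear array (generic antenna math, not tied to any spacecraft): apply a progressive phase shift across the elements and the beam peak moves.

```python
import numpy as np

def array_factor(n_elements, spacing_wl, steer_deg, look_deg):
    """|Array factor| of a uniform linear array, normalized to 1 at the peak.

    Each element is fed with a phase offset chosen so that contributions add
    in phase toward steer_deg -- that phase ramp is the 'electronic' pointing.
    """
    k = 2 * np.pi                                  # wavenumber, 1/wavelength units
    n = np.arange(n_elements)
    steer_phase = -k * spacing_wl * n * np.sin(np.radians(steer_deg))
    look_phase = k * spacing_wl * n * np.sin(np.radians(look_deg))
    return abs(np.sum(np.exp(1j * (look_phase + steer_phase)))) / n_elements

# 16 half-wavelength-spaced elements, beam steered to 20 degrees off boresight
for angle in (0, 10, 20, 30):
    print(angle, round(array_factor(16, 0.5, steer_deg=20, look_deg=angle), 3))
# The response peaks (~1.0) at 20 degrees even though nothing physically moved.
```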
The article didn't go into it I think, but I recall that in many satellite missions of this type there are not only data storage and transmission issues (the normal issues you would expect), but also the consideration that the antennas and transmission hardware themselves have a finite duty cycle or lifetime. As in, transmitting data consumes that margin.
So you have to be quite deliberate about how much data to send, which data, etc., because every GB eats a chunk of the satellite's expected life. (Again, I believe.)
Really? I've never heard of transmitter or antenna being considered a consumable (and I work in the space industry). Any idea where you got this idea from?
I thought about this some more and remembered that we do do "trending" of just about every subsystem on the spacecraft, and comms is one of them. Pretty sure there's a slide in a presentation every few months looking at how many times the radio has been turned on compared to the number of times it was designed to be turned on, but I assume the component in question is just the relay that switches it on. We have similar plots for everything that can be turned on and off. I think the point of these presentations is just to think about what's likely to die first and to make it obvious if we suddenly change how often we use things.
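A trivial sketch of that kind of cycle trending (the component names, counts, and limits below are all made up for illustration):

```python
# Toy "cycles used vs. cycles qualified" trend; every number here is invented.
design_cycles = {"ka_transmitter_relay": 20000, "instrument_heater_switch": 50000}
actual_cycles = {"ka_transmitter_relay": 4300, "instrument_heater_switch": 1200}

mission_elapsed_years = 2.0
design_life_years = 10.0

for name, limit in design_cycles.items():
    used = actual_cycles[name]
    projected = used / mission_elapsed_years * design_life_years  # naive linear extrapolation
    margin = 1 - projected / limit
    print(f"{name}: {used}/{limit} cycles used, projected {projected:.0f} "
          f"by end of design life ({margin:.0%} margin)")
```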
I will try to find a link, although it's of course quite specialized info that is not often written about.
But for example, I recall that for the Spitzer space telescope (I believe) every activation of the transmission hardware consumed either its usable lifetime or some of the finite liquid helium coolant needed to operate the telescope (which had an expected lifetime of only 2.5 years for the key instruments that relied on coolant).
> All of the communications channels use the Reed-Solomon error-correction protocol—the same error-correction standard as used in DVDs and Blu-ray discs as well as QR codes.
I find that somewhat hard to believe; LDPC codes are well established and much more suitable. I would have expected them to use a DVB-S2 standard code.
Nonetheless, I'm also curious about the choice, but couldn't find a lot about it. I guess there has to be some trade-off to using LDPC instead of Reed-Solomon. I only found this paper, but haven't read through it, so no conclusions as of yet:
https://trs.jpl.nasa.gov/bitstream/handle/2014/45387/08-1056...
> Efforts are underway in National Aeronautics and Space Administration (NASA) to upgrade both the S-band (nominal data rate) and the K-band (high data rate) receivers in the Space Network (SN) and the Deep Space Network (DSN) in order to support upcoming missions such as the new Crew Exploration Vehicle (CEV) and the James Webb Space Telescope (JWST). These modernization efforts provide an opportunity to infuse modern forward error correcting (FEC) codes that were not available when the original receivers were built. Low-density parity-check (LDPC) codes are the state-of-the-art in FEC technology that exhibits capacity approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore, leads to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) code design offers a family of codes with rates 1/2, 2/3, 4/5 and length 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations.
My guess at this point is probably just: "We've used Reed-Solomon a bunch, we know it works. We're working on newer techniques, but let's use what we know works."
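If you want to poke at the error-correction behaviour yourself, the `reedsolo` Python package (my choice for the sketch; it has nothing to do with the flight implementation) makes it a few lines. The 223 data + 32 parity shape below mirrors the classic CCSDS RS(255,223) configuration:

```python
# pip install reedsolo  (pure-Python Reed-Solomon, for experimenting only)
from reedsolo import RSCodec

rsc = RSCodec(32)                  # 32 parity bytes -> corrects up to 16 byte errors
message = bytes(range(223))        # stand-in for a chunk of instrument data
codeword = bytearray(rsc.encode(message))

for i in range(40, 50):            # simulate a 10-byte burst error
    codeword[i] ^= 0xFF

# recent reedsolo versions return (decoded msg, msg+ecc, errata positions)
decoded = rsc.decode(codeword)[0]
print(bytes(decoded) == message)   # True: the burst fits within the correction budget
```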
idk, the article also mentions they've been working on it for 20 years. I wouldn't be surprised if they just got to a point that was good enough and then didn't want to mess with things.
real tragedy is that they didn't use cutting edge web7.0 tech for their front end smh
For example, the JWST also uses a proprietary version of JavaScript 3, made by a bankrupt company.
https://twitter.com/michael_nielsen/status/15469085323556577...
I think there's a pretty good chance that their data encoding scheme was working, and so they just left it in a working state, without upgrading it to use modern best practices.
https://en.wikipedia.org/wiki/Low-density_parity-check_code#...
Note that this mission was specced and designed ages ago too, so just as the observations it makes are views of the past, the engineering to do so is a time capsule too
Others have mentioned the advantage in terms of burst errors. That is fairly common for radio signals coming from space. Think of an airplane flying through the signal path or something. I know in the NRO all of the radio downlinks used BCH for error correction, which could correct up to 4 bits per byte, and DPCM for compression, which also does particularly well with long runs of the same pixel value, something pretty common with space imagery (most of what you're looking at is black). BCH allows you to pick exactly how many bits per byte you want to be able to correct, which can be tuned based on the known error characteristics of your signal. Part of it may just be that these systems have been around a long time and we already have extremely well-tuned and efficient implementations that are known to work quite well with very large volumes of data inbound from space.
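The DPCM part is easy to see in a couple of lines: store pixel-to-pixel differences instead of raw values, and mostly-black imagery becomes long runs of zeros that compress well (toy NumPy sketch, not anyone's actual scheme):

```python
import numpy as np

row = np.array([0, 0, 0, 0, 12, 13, 13, 12, 0, 0, 0, 0], dtype=np.int16)

deltas = np.diff(row, prepend=0)      # DPCM encode: differences against a zero start
restored = np.cumsum(deltas)          # decode: running sum rebuilds the row exactly

print(deltas)                         # [ 0 0 0 0 12 1 0 -1 -12 0 0 0] -- mostly zeros
print(np.array_equal(restored, row))  # True
```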
Anyone know if ZFS is playing a role?