Space Computing

Aaron Amodt
5 min read · Dec 7, 2020

I’ve heard many times in my life that the computers used to run the Lunar Module during the Apollo missions had only the computing power of today’s calculators. It’s always been a universal way to measure how far we’ve progressed in computing in such a short amount of time. There are all kinds of photographs of hand-soldered circuit boards, and of technicians braiding cables together by hand to link components. It makes you wonder what kind of software they’re working with nowadays. It only recently occurred to me that they probably have some very specific requirements.

Over on the NASA website I found a little museum devoted to the agency’s computing history. According to the site, the computers that pilot a spacecraft and run its associated processes have to work differently from ground-based machines. A conventional computer splits its processing time into small tasks, alternating between each task quickly so that it can run many simultaneous operations. It’s called “time-sharing”:

“[A] method is to limit each program to a fraction of a second running time before going on to the next program, running it for a fraction and then going on until the original program gets picked up again. This cyclic, timesliced method permits many users to be connected to the computer or many jobs to run on the computer in such a way that it appears that the machine is processing one at a time. The computer is so fast that no one notices that his or her job is being done in small segments.”
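To make the quote concrete, here’s a toy version of that cyclic, time-sliced scheduling. It’s a minimal sketch of round-robin time-slicing, nothing like NASA’s actual schedulers (real systems preempt tasks with hardware timer interrupts), but the effect is the same: every job advances a little on each pass, so they all appear to run at once.

```c
#include <stdio.h>

#define NUM_JOBS 3
#define SLICE    2  /* units of work each job gets per turn */

typedef struct {
    const char *name;
    int remaining;  /* units of work left to do */
} Job;

int main(void) {
    Job jobs[NUM_JOBS] = { {"job A", 5}, {"job B", 3}, {"job C", 7} };
    int done = 0;

    /* Cycle through the jobs, giving each a small slice of time,
     * until every job has finished. */
    while (done < NUM_JOBS) {
        for (int i = 0; i < NUM_JOBS; i++) {
            if (jobs[i].remaining <= 0) continue;
            int work = jobs[i].remaining < SLICE ? jobs[i].remaining : SLICE;
            jobs[i].remaining -= work;
            printf("%s ran %d units, %d left\n",
                   jobs[i].name, work, jobs[i].remaining);
            if (jobs[i].remaining == 0) done++;
        }
    }
    return 0;
}
```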

In space, the process works differently. In order to safely respond to and control critical flight equipment, the computer needs to process input immediately, as it is received. Any delay in a response could lead to catastrophic errors in navigation. These “real-time” computing systems MUST respond to an input within a predetermined amount of time (usually within milliseconds).

“The spacecraft would go out of control or at least lose track of where it was if data were only utilized in small bunches. The requirement for real-time processing leads to other requirements for spacecraft computers not normally found on earth-based systems. Software must not “crash” or have an abnormal end. If the software stops, the vehicle ceases to be controllable.”
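Those two requirements, hard deadlines and software that never stops, show up in flight software as a fixed-rate control loop guarded by a watchdog timer. The sketch below is my own illustration of that general pattern, not code from any real flight system; all of the hardware interaction is stubbed out, and every function name is a placeholder.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define CYCLE_MS 20  /* a 50 Hz control cycle: the deadline is 20 ms */

/* Monotonic clock in milliseconds (POSIX). */
static uint32_t millis(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint32_t)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000);
}

/* Stand-ins for the real work; in flight software these would talk
 * to actual sensors and actuators. */
static void read_sensors(void)      { /* ingest input as it arrives */ }
static void compute_control(void)   { /* guidance/navigation update */ }
static void command_actuators(void) { /* act within this same cycle */ }

/* A real system resets a hardware countdown here; if the loop ever
 * stalls and this stops being called, the watchdog forces a reboot
 * rather than letting the vehicle fly uncontrolled. */
static void pet_watchdog(void)      { }

int main(void) {
    for (int cycle = 0; cycle < 5; cycle++) {  /* demo: run 5 cycles */
        uint32_t start = millis();

        read_sensors();
        compute_control();
        command_actuators();

        uint32_t elapsed = millis() - start;
        if (elapsed > CYCLE_MS)
            printf("cycle %d missed its deadline by %u ms\n",
                   cycle, elapsed - CYCLE_MS);
        pet_watchdog();

        /* Wait out the rest of the time slot; a real system would
         * sleep or run lower-priority work instead of spinning. */
        while (millis() - start < CYCLE_MS)
            ;
        printf("cycle %d complete\n", cycle);
    }
    return 0;
}
```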

For spacecraft that travel farther afield, the demands are even greater than for those that stick close to home. In an article covering some of the more modern systems NASA and other agencies are using, the head of ESA’s flight software systems section describes a new probe being sent to orbit the Sun. The new Solar Orbiter will only survive the mission thanks to its heat shield, and the shield can only protect the craft while it stays pointed at the Sun. If the spacecraft drifts beyond a 6-degree threshold, the orbiter will be destroyed by the immense heat in very short order.

“We’ve got extremely demanding requirements for this mission,” says Maria Hernek, head of flight software systems section at ESA. “Typically, rebooting the platform such as this takes roughly 40 seconds. Here, we’ve had 50 seconds total to find the issue, have it isolated, have the system operational again, and take recovery action.”
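Run the numbers and you can see how tight that budget is: if rebooting the platform alone takes roughly 40 of the 50 available seconds, then detecting the fault, isolating it, and taking recovery action all have to fit in the ten or so that remain. The 40 and 50 second figures below come from the quote; the per-phase costs are made-up placeholders just to show the arithmetic.

```c
#include <stdio.h>

#define TOTAL_BUDGET_S 50  /* total time available (from the quote)    */
#define REBOOT_COST_S  40  /* typical platform reboot (from the quote) */

int main(void) {
    /* Hypothetical worst-case costs for each fault-handling phase. */
    int detect_s  = 3;  /* notice something is wrong      */
    int isolate_s = 4;  /* pin down and contain the fault */
    int recover_s = 2;  /* switch over and resume control */

    int spent = REBOOT_COST_S + detect_s + isolate_s + recover_s;
    printf("worst case: %d s used of a %d s budget\n", spent, TOTAL_BUDGET_S);
    if (spent > TOTAL_BUDGET_S)
        printf("over budget: the shield points off-Sun too long\n");
    else
        printf("margin remaining: %d s\n", TOTAL_BUDGET_S - spent);
    return 0;
}
```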

Beyond requiring quick real-time responses to their sensors, spacecraft also need to be able to function autonomously. A lot of calculations and predictions are made years in advance of a launch and the deployment of a space station or probe. So what happens when you send a rover to another planet, as in our Mars missions? Well, a command sent to a rover takes a minimum of around 4 minutes to arrive and a maximum of around 24 minutes. So most of the system’s operation has to be self-sustaining, controllable with only a limited number of commands from Earth.
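Those delay figures are just geometry: radio signals travel at the speed of light, so the one-way delay is distance divided by c, and the Earth-Mars distance swings enormously as the two planets orbit. A quick sketch with approximate textbook distances (the exact numbers depend on where the planets are at the time):

```c
#include <stdio.h>

#define C_M_PER_S 299792458.0  /* speed of light */

/* One-way signal delay in minutes for a given distance in meters. */
static double one_way_minutes(double distance_m) {
    return distance_m / C_M_PER_S / 60.0;
}

int main(void) {
    double closest_m  = 54.6e9;   /* ~54.6 million km at closest approach */
    double farthest_m = 401.0e9;  /* ~401 million km on opposite sides of the Sun */

    printf("closest approach: %4.1f min one-way\n", one_way_minutes(closest_m));
    printf("farthest apart:   %4.1f min one-way\n", one_way_minutes(farthest_m));
    /* A command and its acknowledgment need a full round trip,
     * i.e. double these numbers. */
    return 0;
}
```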

There is also the issue of radiation: without the protection of Earth’s magnetic field, most objects in outer space are constantly bombarded by radiation from cosmic rays. This can affect biological processes as well as computational ones. Energetic particles can trigger transistor switches inside computer chips, causing errors and data corruption. Space-faring computer chips have to be made radiation-resistant through more robust circuitry: for example, they contain more transistors than their Earth-bound counterparts, and those transistors require more energy to switch, making accidental triggering less frequent. The trade-off is that processing can be up to 10 times slower than on home computer processors.
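One classic form that robustness takes, in circuitry or in software, is triple modular redundancy: keep three copies of every value and take a bitwise majority vote, so a single flipped bit in one copy is outvoted by the other two. Here’s a minimal software sketch of the idea; rad-hard chips implement the equivalent in hardware.

```c
#include <stdint.h>
#include <stdio.h>

/* Bitwise majority vote: each output bit is 1 iff at least two of
 * the three inputs have that bit set. */
static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c) {
    return (a & b) | (a & c) | (b & c);
}

int main(void) {
    uint32_t value = 0x5A5A5A5A;
    uint32_t copy1 = value, copy2 = value, copy3 = value;

    copy2 ^= (1u << 7);  /* simulate a cosmic-ray bit flip in one copy */

    uint32_t voted = tmr_vote(copy1, copy2, copy3);
    printf("voted result %s the original value\n",
           voted == value ? "matches" : "does NOT match");
    return 0;
}
```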

When it comes to space missions crewed by actual human astronauts, the computer equipment is a bit more conventional. The International Space Station is a giant laboratory maintaining many concurrent experiments, but it’s operated by a minuscule crew. Many of the life-support and other station-keeping tasks are entrusted to the custom hardware and software NASA has developed over the decades. For other tasks, like storing manuals and running software to monitor experiments, regular laptops are perfectly capable machines. The station keeps about 20% more laptops in its inventory than it needs, so that when hardware fails, parts can simply be swapped.
