Satellites are full of exploitable vulnerabilities. Attackers could use these flaws to gain a foothold in orbit, closer to more valuable targets, a satellite security researcher believes.
While it takes a rocket scientist to send a satellite into orbit, apparently, it doesn’t take one to hack it. Some of the man-made moons whizzing around our planet have security measures poorer than those of the device you’re reading this text on.
According to the latest paper by researchers from Ruhr University Bochum and the CISPA Helmholtz Center for Information Security in Saarbrücken, the communications orbiters that much of our modern lives rely on don’t even use basic cryptography and are vulnerable to cyberattacks.
The research team delved into a couple of smallsats and a single medium-sized device. Notably, one of them is a commercial craft that orbits the planet to monitor the Earth, and commercial companies rarely share details about their software.
However, the researchers managed to access these closely guarded details with the help of the European Space Agency (ESA), various universities involved in the construction of satellites, and a commercial enterprise.
One of the people heading the team behind the paper, Johannes Willbold, a PhD student from Bochum, told Cybernews that his team discovered several exploitable bugs in satellites.
He says that malicious hackers could easily hack them using off-the-shelf equipment. We sat down with Willbold to discuss satellite security and why on Earth a hacker would target a satellite.
Discussing the report, your colleagues said that “hardly any modern security concepts were implemented.” What do you mean by that?
Modern operating systems, such as Windows or macOS, ship with a whole set of defenses. Typically, everyday devices have protections that make it much harder to exploit the memory corruption vulnerabilities we found.
Moreover, exploiting a vulnerability on the PCs you and I are using right now is significantly harder. You’d need a second or third vulnerability to build an exploit chain before an attacker could actually do anything.
On the satellites, we didn’t find any of these defenses. Even defenses that mainstream operating systems adopted back in the early 2000s were absent. That is what we mean by “modern security measures are missing.”
There’s often no protection of telecommands via encryption or authentication – the kind of protection that runs in the background whenever we browse a website. This is another measure we take for granted that is not in place on satellites.
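To make the missing measure concrete, here is a minimal sketch of what authenticated telecommands could look like. It is not taken from the paper or any real satellite stack; the frame format, the pre-shared key, and the function names are all hypothetical. It uses an HMAC tag plus a message counter so the satellite can reject both forged and replayed commands:

```python
import hmac
import hashlib
from typing import Optional

# Hypothetical shared secret provisioned before launch (illustrative only).
GROUND_STATION_KEY = b"pre-shared-32-byte-key-material!"

def sign_telecommand(command: bytes, counter: int) -> bytes:
    """Build a frame: 4-byte anti-replay counter + command + HMAC-SHA256 tag."""
    msg = counter.to_bytes(4, "big") + command
    tag = hmac.new(GROUND_STATION_KEY, msg, hashlib.sha256).digest()
    return msg + tag

def verify_telecommand(frame: bytes, last_counter: int) -> Optional[bytes]:
    """Return the command only if the tag verifies and the counter is fresh."""
    msg, tag = frame[:-32], frame[-32:]
    expected = hmac.new(GROUND_STATION_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted frame
    counter = int.from_bytes(msg[:4], "big")
    if counter <= last_counter:
        return None  # replayed frame
    return msg[4:]

frame = sign_telecommand(b"SET_MODE SAFE", counter=42)
assert verify_telecommand(frame, last_counter=41) == b"SET_MODE SAFE"
assert verify_telecommand(frame, last_counter=42) is None  # replay rejected
```

Even a scheme this simple stops an attacker with a ground station from injecting commands without the key, which is precisely the baseline the researchers found missing. Real systems would also need the key recovery provisions Willbold discusses below.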
Cubesat deployment. Image by NASA.
Why do you think that satellite security is lagging behind? It’s probably safe to assume that engineers building satellites aren’t ignorant of basic cyber hygiene.
For one reason, everything you do in space is a lot harder. For example, cryptography in outer space is more complicated – if something goes wrong, you can never physically access your asset again. You can’t install a new driver or request a new key if you lose satellite access.
Another thing is radiation and the general conditions in outer space, which can degrade your memory and also destroy key material. Even something as simple as cryptography becomes a lot harder because you have to build in recovery measures.
Ultimately, satellite creators must decide when to bypass cryptography to recover their assets. They must figure out what to do when a radiation event destroys a key. But generally speaking, security is a lot harder to do in space.
The security measures that are in place seem to have worked, since we don’t know of any public incident where a lack of satellite security was exploited. There might be some cases kept under secrecy, but we don’t know about them.
What’s there to gain from intercepting a satellite or hacking it? For example, experts I’ve talked to in the past mentioned the Kessler effect several times.
In the paper, we also touch on satellites crashing into other satellites. But I think the main problem with taking over a satellite is that attackers could gain access to the orbit.
Launching a satellite is still very expensive and takes a lot of work and time. A hacked satellite might not be the primary target – attackers could use the craft to get closer to their real target. Suddenly, a threat actor is a lot closer to the target satellite.
With the right proximity, they can start intercepting communications like telecommands, which is difficult without direct access to the ground station of the satellite provider.
Attackers could also try to carry out denial-of-service attacks on satellites. And, ultimately, there’s the one you mentioned: manufacturing Kessler syndrome, crashing one satellite into another and setting off a chain reaction that denies space to everyone.
How likely is this? I don’t know. This is not my area of expertise, but it’s certainly something people should keep in mind. The real problem is that attackers can gain access to the orbital plane and take command of assets in space.
The asset itself might not be the actual target. It might be a different target that cannot be hacked directly, but attackers could get that information by moving closer. This is an interesting attack vector.
Do you think financially motivated attackers are capable of hacking satellites? For example, a ransomware gang.
Yes, absolutely. This is one of the main points of the paper. We call it the myth of inaccessibility. For a long time, people thought that only nation-states could afford the communications equipment to talk to satellites.
We have seen this in other domains, such as mobile or cell phone communities. It was long believed that base stations were too expensive for anyone but nation-states, so people assumed an attacker would never have one.
Now, we see a retelling of a similar story in satellite security. Nowadays, you can buy a complete new ground station for low Earth orbit for $10,000. While it’s not cheap, it’s certainly within the range of motivated hobbyists.
Take us, for example. We’re a small group of people who found exploitable vulnerabilities. Technically, nothing is preventing us from exploiting them. Only the fact that we are researchers and we told operators about our findings. But if, for example, somebody wants to ransom a satellite operator, that is certainly in the realm of possibilities.
Your paper gives the impression that you’re not fans of the “security through obscurity” concept. Why do you think it’s not the best way to think about security?
The argument against security by obscurity is that when you rely on it alone, your security is gone as soon as somebody finds out how the system works. And in these cases, it’s always a question of when, not if.
Satellites are getting cheaper, many more people are involved in making them, they have commercial off-the-shelf components, and people switch between teams. There’s a lot of knowledge out there. The obscurity people think of is slowly fading because it’s getting easier to get into the topic.
Of course, if we are talking about military satellites, assuming that they implement proper protection against telecommand access and other security measures, then having security by obscurity on top is certainly an option.
On the other hand, obscurity prevents researchers from gaining insights. If you were open about your system and showed researchers what it looks like, people would come forward and show how things could be done better.