The author admits to having zero experience with carrier-level infrastructure, but their suspicions are essentially correct.
I actually have done a fair bit of 4G and 5G specific pentesting and security research for a number of major carriers. While it varies between carriers and between product vendors, it's still an absolute horror show. Until very recently, the security was entirely achieved through obscurity. The 4G and 5G standards have started to address this, but there are still gaps big enough to be deeply concerning. I don't think it's overly hyperbolic to assume that any moderately sophisticated threat actor who wants a beachhead on a carrier can achieve it. I've demonstrated this multiple times, professionally.
IMHO the hardware vendors from a certain East Asian state have such poorly written software stacks that they could almost be classified as APTs - security is non-existent. There are valid reasons Western countries have banned them. Western hardware vendors have significantly more mature software, but are still many years behind what most of us would consider modern security best practices.
A few years back the U.K. tried a political experiment in which it purchased Huawei equipment and also set up a special government/Huawei lab where they could analyze the source code to ensure it was safe to use. GCHQ found that the code quality made it unreviewable, and that they could not even ensure that the source code provided actually ran on the equipment (because Huawei had direct update capability). I believe that equipment has been banned since 2020. https://www.washingtonpost.com/world/national-security/brita...
You're thinking of HCSEC. In the majority of their reports they repeatedly complained that they couldn't even get binary equivalence with Huawei's deployed builds (which clearly obviates the value of any code review) but in the most recent available report from 2021 [1], they do report that Huawei had finally achieved that binary equivalence for a core product set.
The government chose to stop publishing HCSEC reports after 2021. I'm unable to work out whether HCSEC itself is still operating or not.
[1] https://assets.publishing.service.gov.uk/media/60f6b6be8fa8f...
The media coverage at the time (which I followed closely because I worked in this space) indicated that the UK was under a great deal of pressure from the US to ban Huawei. The US was allegedly concerned that the use of Huawei equipment would allow US/UK shared intelligence to be eavesdropped on by the CCP. The US pressure was widely viewed in the UK as having an economic purpose disguised as security.
I've been personally involved in evaluating the security of a certain vendor starting with the letter H. Let us just say they are "less than honest". I had pcaps of their bullshit trying to reach out to random C2 shit on the internet, which garnered a response of "there must be a mistake, that is not our software".
Let China sell their telecom bullshit to all the poor people of the world - they will learn hard lessons.
> IMHO the hardware vendors from a certain East Asian state have such poorly written software stacks, that they could almost be classified as APTs - security is non-existent.
Thank god we have the hardware and software vendors from a certain North American state, who take security very seriously. Oh, wait ... /s
One thing I absolutely don't understand about telecom security is how, in 2025, we're still using pre-shared keys in our mobile phone standards.
RSA and Diffie Hellman[1] have existed for decades, so have CA systems, yet SIM cards are still provisioned with a pre-shared key that only the card and the operator knows, and all further authentication and encryption is based on that key[2]. If the operator is ever hacked and the keys are stolen, there's nothing you can do.
To make things even worse, those keys have to be sent to the operator by the SIM card manufacturer (often a company based in a different country and hence subject to demands of foreign governments), so there are certainly opportunities to hack these companies and/or steal the keys in transit.
To me, this absolutely feels like a NOBUS vulnerability: if the SIM manufacturers and/or core network equipment vendors are in cahoots with the NSA and let the NSA take those keys, they can potentially listen in on all mobile phone traffic in the world.
[1] I'm aware that those algorithms are not considered best practices any more and that elliptic curves would be a better idea, but better RSA than what we have now.
[2] https://nickvsnetworking.com/hss-usim-authentication-in-lte-...
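To make that concrete, here is a deliberately simplified sketch of the kind of challenge-response AKA boils down to. This is not the real MILENAGE/TUAK f1-f5 functions (no SQN, no AUTN), just an illustration of why everything, authentication and the session cipher key alike, falls out of the one pre-shared K:

```python
# Heavily simplified stand-in for LTE AKA: NOT the real MILENAGE/TUAK f1-f5
# functions, no sequence numbers or AUTN. It only shows that authentication
# and the session cipher key are both derived from the single pre-shared K
# held by the SIM and by the operator's AuC/HSS.
import hmac, hashlib, secrets

K = secrets.token_bytes(16)  # provisioned into the SIM and stored at the operator

def aka_like_response(k: bytes, rand: bytes) -> tuple[bytes, bytes]:
    """What both sides compute from the shared K and a RAND challenge."""
    res = hmac.new(k, b"auth" + rand, hashlib.sha256).digest()[:8]     # auth answer
    ck = hmac.new(k, b"cipher" + rand, hashlib.sha256).digest()[:16]   # session key
    return res, ck

# Network side: pick a challenge, compute the expected answer with its copy of K.
rand = secrets.token_bytes(16)
expected_res, network_ck = aka_like_response(K, rand)

# Card side: the same computation with its copy of K.
card_res, card_ck = aka_like_response(K, rand)

assert hmac.compare_digest(card_res, expected_res)  # subscriber "authenticated"
assert card_ck == network_ck                         # both ends share a cipher key
# Anyone who has stolen K can run aka_like_response() too: same keys, same access.
```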
> "AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden."
From what I remember from the early 2000s, only the air interface was encrypted. Since they have to provide lawful intercept capability anyway, there was not much benefit in providing end-to-end encryption. It's not like it was a top-of-mind feature for consumers.
In the late 90s/early 2000's, I would hear voice telephone conversations in central offices quite frequently. (Nobody was spying on purpose, or even paying much attention to what was being said. It was incidental to troubleshooting some problem report.)
SMS is encrypted in transit between device and tower, but after that it's plaintext. Same with carrier-spec RCS over HTTP (HTTPS is available, but then it's plaintext at the carrier's RCS server anyway).
Encryption between tower and device was particularly weak in the 2G and 3G days, but since 4G things have been a lot better. 2G's continuing existence remains a security risk, which is why Google Pixels have a toggle to turn it off, and iOS disables it when you enable lockdown mode.
Do not use SMS or non-E2EE RCS for anything you wouldn't shout at a random telecom engineer or passing police officer.
Cell towers (BTSs and eNodeBs) do indeed have unencrypted access to the data passing through them. They're owned by the operator, this is fine.
An attack on SIM card keys would let anybody listen to traffic going over-the-air or impersonate a cell tower. All you need is the keys and some radio equipment to receive the traffic.
Some of these algorithms have to run on the SIM card, and smart cards (at least in the past) don't support RSA or (non-elliptic-curve) DH without a coprocessor that makes them more expensive.
Also, symmetric algorithms are quantum safe :)
But yes, I also wish that in 2025 we'd at least support ECC, which most smart cards should support natively at this point.
> To make things even worse, those keys have to be sent to the operator by the SIM card manufacturer (often a company based in a different country and hence subject to demands of foreign governments), so there are certainly opportunities to hack these companies and/or steal the keys in transit.
If you can't trust your SIM card vendor, you're pretty much out of luck. The attack vector for an asymmetric scheme would look a bit different, but if you can't trust the software running on them, how would you know if they were exfiltrating plaintexts through their choice of randomness for all nondeterministic algorithms?
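As a toy illustration of that last point (purely hypothetical, not describing any real vendor): a subverted card could hide key material inside values that are supposed to be random, and only someone who knows the trick can tell:

```python
# Toy example of exfiltration through "randomness" (hypothetical, for
# illustration only). A subverted card emits key-bytes XOR pad where a random
# nonce is expected; without the pad it is indistinguishable from randomness.
import secrets

ATTACKER_PAD = bytes.fromhex("00112233445566778899aabbccddeeff")  # baked in at the factory

def honest_nonce() -> bytes:
    return secrets.token_bytes(16)

def subverted_nonce(long_term_key: bytes) -> bytes:
    # Looks random to everyone except whoever knows ATTACKER_PAD. (A real
    # backdoor would spread this over many messages so nonces never repeat.)
    return bytes(k ^ p for k, p in zip(long_term_key, ATTACKER_PAD))

K = secrets.token_bytes(16)
observed_on_the_wire = subverted_nonce(K)

# The party holding the pad recovers the key from passive observation:
recovered = bytes(n ^ p for n, p in zip(observed_on_the_wire, ATTACKER_PAD))
assert recovered == K
```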
There's a difference between asking / bribing / blackmailing / legally forcing the company to make a copy of some text files (or just figuring out a way to get those files yourself) versus forcing them to modify their software in deliberately insecure ways (which can also be discovered by others and used against you).
The former is a true NOBUS; the latter is not (though you're right that governments would probably treat this as one).
If you have the ability to distribute keys directly, asymmetric cryptography adds complexity without much payoff. Certainly the idea that introducing RSA to a symmetrical system makes it more sound isn't well supported; the opposite is true.
The "NOBUS vulnerability" thing is especially silly, since the root of trust of all these systems are telecom providers. You don't have to ask if your American telecom provider is "in cahoots" with the US intelligence community; they are.
Who said anything about American telecom providers specifically?
> If you have the ability to distribute keys directly, asymmetric cryptography adds complexity without much payoff.
It's hard for me to believe that a symmetric key, touched by at least three systems (SIM card, operator HSS, SIM manufacturer), the former two of which can't be completely airgapped for obvious reasons, with no ability to rekey, is more secure than a key that is physically unable to leave a single device.
With a TLS-like system, you'd have to somehow hack every single SIM card to get anywhere. Hacking the manufacturer wouldn't help unless you could make them flash custom software that would exfiltrate the keys, and the consequences of an operator hack could be contained by revoking an intermediate certificate and generating a new one, presumably from the operator's root cert, sitting somewhere safe in a completely airgapped HSM.
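For illustration, a sketch of that certificate hierarchy using the third-party Python `cryptography` package; every name and lifetime below is made up. An offline root signs an online intermediate, the intermediate signs per-SIM certificates, and a breach is handled by cutting the intermediate rather than touching the root or any card:

```python
# Sketch of the chain described above: an offline root signs an online
# intermediate, and the intermediate signs per-SIM certificates. If the
# operator side is breached, you retire the intermediate and issue a new one;
# the root (and every SIM's private key) is never exposed.
# Third-party `cryptography` package; all names and lifetimes are illustrative.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def make_key():
    return ec.generate_private_key(ec.SECP256R1())

def make_cert(subject_cn, issuer_cn, public_key, signing_key, days, is_ca):
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer_cn)]))
        .public_key(public_key)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days))
        .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None), critical=True)
        .sign(signing_key, hashes.SHA256())
    )

root_key = make_key()          # lives in the airgapped HSM, used rarely
root_cert = make_cert("Operator Root", "Operator Root", root_key.public_key(),
                      root_key, days=3650, is_ca=True)

intermediate_key = make_key()  # the only signing key exposed to online systems
intermediate_cert = make_cert("Operator Issuing CA 1", "Operator Root",
                              intermediate_key.public_key(), root_key,
                              days=365, is_ca=True)

sim_key = make_key()           # in reality generated on the card, never exported
sim_cert = make_cert("IMSI 001010000000001", "Operator Issuing CA 1",
                     sim_key.public_key(), intermediate_key, days=1825, is_ca=False)

# After a breach: retire "Operator Issuing CA 1", mint "Issuing CA 2" from the
# root, and re-issue SIM certs; the cards' private keys are unaffected.
```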
I worked for a major telco in technical support/customer service.
I saw numerous security issues, and when I brought them up with solutions to improve the service for customers, I was informed that the company would lose money.
Scammers are big customers for telcos, and when they get caught and banned, they come back, pay a new connection fee, and start the cycle again. Scammers also enable feature upselling, another way to profit from not solving the problem.
Are you suggesting end-to-end encryption? Telecom providers have to implement "lawful intercept" interfaces to comply with the law in many jurisdictions.
I think they're just suggesting improvements on device-to-network encryption. Requiring the SIM card secret to live on both the SIM card and the network means it needs to be transmitted from manufacturing to the network, which increases exposure.
If it were a public/private key pair, and you could generate it on the SIM card during manufacturing, the private key would never need to be anywhere but the SIM card. Maybe that's infeasible because of seeding, but even if the private key was generated on the manufacturing/programming equipment and stored on the SIM card, it wouldn't need to be stored or transmitted and could only be intercepted while the equipment was actively compromised.
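A sketch of that provisioning idea, again with the `cryptography` package (assumed installed) and purely illustrative names: the private key is created inside the "card" object and there is simply no code path that exports it; the operator enrolls the public key and later authenticates the card with a fresh challenge:

```python
# Sketch of provisioning where the private key is generated on the card and
# never exported; only a public key leaves the factory. Third-party
# `cryptography` package; class and variable names are illustrative only.
import secrets
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

class SimCard:
    """Stand-in for a secure element: the private key never leaves this object."""
    def __init__(self):
        self._private_key = ec.generate_private_key(ec.SECP256R1())

    def public_key_pem(self) -> bytes:
        # The only thing that ever leaves the factory / the card.
        return self._private_key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )

    def sign_challenge(self, challenge: bytes) -> bytes:
        return self._private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Manufacturing: the card generates its own key; the operator records only the public key.
card = SimCard()
enrolled_public_key = serialization.load_pem_public_key(card.public_key_pem())

# Later, at attach time, the network authenticates the card with a fresh challenge.
challenge = secrets.token_bytes(32)
signature = card.sign_challenge(challenge)
try:
    enrolled_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("subscriber authenticated; no shared secret ever crossed the factory link")
except InvalidSignature:
    print("authentication failed")
```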
You appear to be neglecting the need for symmetric stream ciphers to achieve realtime communications (needed for performance reasons). No matter what you do, you are going to have a symmetric key in there somewhere for adversaries to extract. Once the adversary owns the telco, it is over (i.e., calls can be decrypted), no matter how strong the cryptography is. Your strongest cryptography cannot withstand a key leak.
Do you know how TLS works? The asymmetric keys are used to negotiate temporary symmetric keys, which are used for the actual data. That's exactly what the mentioned Diffie-Hellman algorithm does. Also check out "perfect forward secrecy".
Yes, but that "somewhere" could very well be only the two phones involved in a call, with key establishment happening via Diffie-Hellman. Doesn't protect against an active attack, but there's no key to leak inside the network.
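A minimal sketch of that key establishment with the `cryptography` package: both ends contribute an ephemeral EC key, derive the same call key via ECDH plus HKDF, and throw the ephemerals away afterwards, which is what gives forward secrecy. (Authenticating the exchanged public keys is still needed to stop an active MITM.)

```python
# Minimal ephemeral ECDH sketch: two endpoints derive the same symmetric key
# for a call without that key ever existing anywhere else. Discarding the
# ephemeral private keys afterwards is what gives forward secrecy.
# Uses the third-party `cryptography` package.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def new_ephemeral():
    return ec.generate_private_key(ec.SECP256R1())

def derive_call_key(my_priv, peer_pub) -> bytes:
    shared = my_priv.exchange(ec.ECDH(), peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"call stream key").derive(shared)

alice = new_ephemeral()
bob = new_ephemeral()

# Only the *public* halves cross the network (an active MITM still has to be
# prevented, e.g. by signing these with long-term device/SIM keys).
alice_key = derive_call_key(alice, bob.public_key())
bob_key = derive_call_key(bob, alice.public_key())
assert alice_key == bob_key  # same 256-bit stream-cipher key on both phones
```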
Nortel DMS100 switches are still running code written for Motorola 680x0 CPUs. There is not enough CPU power to validate an RSA key in a timely manner for any control message sent over a T1 or ISDN PRI.
> To me, this absolutely feels like a NOBUS vulnerability, if the SIM manufacturers and/or core network equipment vendors are in cahoots with the NSA and let the NSA take those keys, they can potentially listen in on all mobile phone traffic in the world.
This feels like the obligatory XKCD comic[1] when in reality there isn't any secretive key extraction or cracking... things are just sent unencrypted from deeper into the network to the three-letter agencies. Telcos are well known to have interconnect rooms with agencies.
[1] https://xkcd.com/538/
To be honest, the conclusion of the blog post that Freeswitch are not budging from their community release schedule does not surprise me one iota.
Freeswitch used to have a strong community spirit.
Things all changed when they took a more aggressive commercial turn, a couple of years ago IIRC.
Since that point you now have to jump through the "register" hoop to gain access to stuff that should be open (I can't remember exactly what; IIRC it's something like the APT repos being hidden behind a "register" wall).
I don't want to "register" with a commercial company to gain access to the FOSS community content. Because we all know what happens in the tech world if you give your details to a commercial company: the salesdroids start pestering you with upsells/cross-sells, you get put on mailing lists you never asked to be put on, etc.
In SignalWire's defence, reading through the old mailing list, I got the feeling they drove the development of Freeswitch for years without being properly compensated by downstream projects. Sadly I've also seen other parts of the VoIP community recalibrate their generosity when it comes to open source, and I honestly can't blame them.
The team behind Matrix.org talked about a similar problem in one of their FOSDEM '25 talks: commercial vendors freeloading on development.
Correct me if I'm wrong, but isn't the patched source code available? It'd be awfully nice of them to update their APT repos for free, but that's nothing more than that: awfully nice. If you rely on their code and don't wish to pay for their support, you could always build the APT packages yourself.
I don't know what kind of community spirit you should expect for a project maintained by a single company like this.
> It'd be awfully nice of them to update their APT repos for free, but that's nothing more than that: awfully nice.
It doesn't cost them anything, metaphorically or physically.
Updating the APT repo is a CI/CD script away. No doubt they do it for the commercial side anyway, so the tweak in the CI/CD is probably no more than a couple of variables.
And open-source projects can usually get freebies for everything they need. So there's unlikely to be much, if anything, in terms of actual hard-dollar expense for running the repos.
I think it's fair to assume that between foreign threat actors, the Five Eyes/other Western pacts, and the demand to make the line go up, there's no real anonymity online. If they want you, they've got the means to get you.
In reality that's really no different than the pre-internet age. If you don't want your stuff intercepted, you need to encrypt it by means that aren't trivial to access electronically for a major security apparatus. Physical notes, word-of-mouth, hand signals, etc.
Also, you need to be ready for the consequences of what you say and do online should a state actor decide to allocate the resources to actually act upon the data they have.
From the article I am not totally convinced that "Telecom security sucks today", given they just randomly picked Freeswitch to find a buffer overflow. "Telecom stacks" might or might not be insecure, but what's done here is very weak evidence. The Salt Typhoon attacks allegedly exploited a Cisco vulnerability, although the analysts suggest the attackers had been using proper credentials (https://cyberscoop.com/cisco-talos-salt-typhoon-initial-acce...). So nothing to do with Freeswitch or anything.
Cisco Unified Call Manager almost certainly has vulnerabilities, as does Metaswitch which has shambled along in network cores after Microsoft publicly murdered it, Oracle SBC is often wonky just doing the basics, whatever shambling mess Teams is shipping this week for their TRouter implementation definitely has Denial of Service bugs that I can't properly isolate.
Let's not even talk about the mess of MF Tandems or almost every carrier barebacking the web by slinging raw unencrypted UDP SIP traffic over the internet...
It is possible to build secure systems in this space, but instead we have almost every major telecom carrier running proprietary unmodifiable platforms from long-dead companies or projects (Nortel, Metaswitch, etc.) and piles of technical debt that are generally worse than the horribly dated and unpatched equipment that comprises their networks.
I find it absolutely insane that the industry standard for SIP trunks is unencrypted UDP, usually using IP-based authentication.
When I asked a popular VoIP carrier about this a while back, they argued that unencrypted connections were fine because the PSTN doesn't offer any encryption and they didn't want to give their customers a false sense of security. While technically true, this doesn't mean we shouldn't at least try to implement basic security where we can - especially for traffic sent over the public Internet.
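For what it's worth, SIP has had a TLS transport (conventionally port 5061, the sips: scheme) since RFC 3261, and SRTP covers the media, so encrypted trunks are not exotic. A minimal client-side sketch with Python's standard library; the trunk hostname and the OPTIONS ping are placeholders, not a real provider's details:

```python
# Hedged sketch: opening a SIP-over-TLS connection (RFC 3261 TLS transport,
# conventionally port 5061) instead of raw UDP on 5060. The trunk hostname
# and the OPTIONS ping below are illustrative placeholders.
import socket, ssl

TRUNK_HOST = "sip.example-trunk.invalid"   # placeholder, not a real provider
TRUNK_PORT = 5061

context = ssl.create_default_context()      # verifies the server certificate
with socket.create_connection((TRUNK_HOST, TRUNK_PORT), timeout=5) as raw:
    with context.wrap_socket(raw, server_hostname=TRUNK_HOST) as tls:
        options = (
            "OPTIONS sip:{h} SIP/2.0\r\n"
            "Via: SIP/2.0/TLS client.invalid;branch=z9hG4bK-demo\r\n"
            "Max-Forwards: 70\r\n"
            "From: <sip:monitor@client.invalid>;tag=demo\r\n"
            "To: <sip:{h}>\r\n"
            "Call-ID: demo@client.invalid\r\n"
            "CSeq: 1 OPTIONS\r\n"
            "Content-Length: 0\r\n\r\n"
        ).format(h=TRUNK_HOST)
        tls.sendall(options.encode())
        print(tls.recv(4096).decode(errors="replace"))
```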
It'd be lovely to see some nations of the world pour some serious money into the various Linux Foundation (or other open source) telco & cellular projects.
I've had a few conversations with [security nerds more familiar with telecom] since SignalWire broke embargo.
The "everything sucks and there's no motive to fix it" was a synopsis because, frankly, those conversations get really hard to follow if you don't know the jargon. And I didn't feel like trying to explain it poorly (since I don't understand the space that well, myself), so I left it at what I wrote.
(I didn't expect Hacker News to notice my blog at all.)
As a security nerd working within telecom: agreed. Nobody really cares about security issues. And when people already struggle to care about the issues, it gets even worse when fixing some of them (such as SS7 vulns) requires coordination with telcos around the world. cape[1] at least seems like it's a breath of fresh air within the space.
[1] cape.co
> (I didn't expect Hacker News to notice my blog at all.)
Your blog actually gets posted somewhat regularly [0]. I actually remembered it, because it's one of the rare cases where I like the "cute" illustrations.
[0]: https://hn.algolia.com/?q=https%3A%2F%2Fsoatok.blog%2F
I worked with telecom code. It's code that parses complicated network protocols with several generations of legacy, often written in secrecy (security by obscurity), and often in C/C++.
I don't doubt that this cruft is insecure. It's just a bit of a stretch to get to that conclusion from finding a potential buffer overflow in Freeswitch. Maybe it's not a stretch but just a conclusion by analogy, but then you might as well say "all software is insecure".
Telecom stacks are full of questionable security decisions, some stemming from the protocol level. Protocols like SS7 were designed without any concept of hackers or malicious actors. You can slap firewalls on top of them, but those interfere with desired functionality and even if they don't they require extra investment. DIAMETER is better but still full of holes.
Security in the telecom space has been a point of priority for the past couple of years and things are improving, but there are decades of legacy hardware, software, firmware, and network designs to cover for.
I doubt anyone is running standard Freeswitch in their telco networks, but the problems Freeswitch has, as an aging telecom product with a history of not looking at security as much as one might expect, are all over the place.
This blog article is a combination of "I did a thing I do for a living" plus "recipient of my report does not share my (and the rest of the security industry's) values", and concludes that Telecom security sucks.
It's very nice that the author spent their free time looking at code, found a bug and reported it -- I don't want to discourage that at all, that's great. But the fact that one maintainer of one piece of software didn't bow and scrape to the author and follow Security Industry Best Practices is not a strong basis for opining that "Telecom security sucks today" (even if it does).
If someone came to you with a bug in your code, and they didn't claim it was being actively exploited, and they didn't offer a PoC to confirm it could be exploited... why shouldn't you just treat it as a regular bug? Fix it now, and it'll be in the next release. What's that? People can see the changes? Well yes, they can see all the other changes too. Good luck to them finding an exploit, you didn't.
The same thing happens in Linux distros. A security bug gets reported. Sometimes, the upstream author is literally dead, not just intransigent. If you want change on your own timeline, make your own releases.
One area where freeswitch is probably used quite often (and without a support contract) is BigBlueButton installations (virtual classroom system) in schools and universities. I am more worried about them than about telcos.
2G GSM piggybacked its wired backend on the ISDN telecom standard (which is why your phone number is called an MSISDN).
Today's CAMEL (MAP and CAP) signalling is an evolution of the ISDN signalling, which traces its roots back to (amongst others) the SINAP signalling protocol and the SS7 network stack from even before that.
SS7 is early 1970s stuff. From a more innocent time.
> I'm unable to work out whether HCSEC itself is still operating or not.
A quick search found: https://www.euractiv.com/section/politics/short_news/uk-bann...
But I guess it's like you said: a political experiment.
Is this a good starting point?
https://vulners.com/search/types/huawei
> "AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden."
https://theintercept.com/2015/02/19/great-sim-heist/
Dunno if that is still the case though. However, cell phones as secure communication did not use to be the case.
You would probably want to communicate with encrypted data traffic device to device.
> Telcos are well known to have interconnect rooms with agencies.
Maybe these connections are a requirement for their permits in the first place. Who knows?
> It'd be lovely to see some nations of the world pour some serious money into the various Linux Foundation (or other open source) telco & cellular projects.
Is that not a kind of business/enterprise thing?
"Telecom" to me means network core equipment and radio towers - https://www.cisco.com/c/en/us/products/wireless/pgw-packet-d...
The "everything sucks and there's no motive to fix it" was a synopsis because, frankly, those conversations get really hard to follow if you don't know the jargon. And I didn't feel like trying to explain it poorly (since I don't understand the space that well, myself), so I left it at what I wrote.
(I didn't expect Hacker News to notice my blog at all.)
[1] - cape.co
> It's code that parses complicated network protocols with several generations of legacy, often written in secrecy (security by obscurity), and often in C/C++.
There's just no way it can be insecure. Right.
Edit: 468 according to Shodan. I'm wondering if senddirectorydocument gets used at all by the XML RPC module.
Many a "bulk SMS" provider in places like the richer carribean islands, and Indonesia that do a lot more than send spam.
> SS7 is early 1970s stuff. From a more innocent time.
So, it is old. 2G was designed in the 90s.
I don't really know what people expect? I'm just happy it works at all, lol.