What a great article. Very easy to follow. The best part was that instead of attacking the messenger and denying any problem, Cox seem to have acted like the very model of responsible security response in this kind of situation. I'd love to read a follow up on what the bug was that intermittently permitted unauthorised access to the APIs. It's the kind of error that could easily be missed by superficial testing or depending on the reason behind the bug, perhaps not even experienced in the test environment.
> Cox seem to have acted like the very model of responsible security response in this kind of situation
It's hard to imagine, but I wish they would have taken advantage of him walking in with the compromised device in the first place.
I once stumbled upon a really bad vulnerability in a traditional telco provider, and the amount of work it took to get them to pay attention when only having the front door available was staggering. Took dedicated attempts over about a week to get in touch with the right people - their support org was completely ineffective at escalating the issue.
Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
>Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
I can't really blame them. The number of customers able to qualify that a device has actually been hacked is nearly zero. But do you know how many naive users out there will call or visit because they think they've been hacked? It's unfortunately larger than the former, and that costs the business money, when in 99.9% of those cases the user is wrong: they have not been hacked. I say this as someone who supported home users in the 2000s - home users who often thought they'd been "hacked".
> Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
He probably should have gone the responsible disclosure route with the modem too. Do you really expect a minimum wage front desk worker to be able to determine what’s a potential major security flaw, and what’s a random idiot who thinks his modem is broken because “modern warfare is slow”?
> the amount of work it took to get them to pay attention when only having the front door available was staggering.
I've seen this across most companies I've tried reporting stuff to; two examples:
Sniffies (NSFW - a gay hookup site) was at one point blasting its internal models out over a websocket. This included IP, private photos, salt + password [not plaintext], reports (who reported you, their message, etc.), and internal data such as your ISP and the push notification certs for sending browser notifications. First-line support dismissed it. Emails to higher-ups got it taken care of in < 24 hours.
Funimation back in ~2019(?) was using Demandware for their shop and left its API basically wide open, allowing you to query orders (with no info required) and get the last 4 of the CC, address, email, etc. for every order. Again, frontline support dismissed it. This one took messaging the CTO over LinkedIn to get it resolved in under a week (Thanksgiving week, at that).
> Took dedicated attempts over about a week to get in touch with the right people - their support org was completely ineffective at escalating the issue.
Sounds to me like their support org was reasonably effective at their real job, which is keeping the crazies away from the engineers.
It's even harder for me to imagine them saying "Oh, gee, thanks for discovering that! Please walk right into the office, our firmware developer Greg is hard at work on the next-gen router but you can interrupt him."
“I’m a three star infosec General, if I’m contacting you it’s not to waste your time.”
> Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
They were presented with some random person who wanted to get a new modem on their rental but also keep the old one, for free. They had no way of knowing whether he was an actual security professional.
Totally agree, an easy read and a great reaction by Cox. I also like that the discovery and the bug itself were not communicated in a negative or condescending way, which is sometimes the case.
They have a pretty good looking responsible disclosure program which I’m assuming he checked first - it’d be surprising for someone who works in the field not to have that same concern:
I assumed they offered a bounty for the bug disclosure? You mean to tell me that an internet provider with 11 billion in revenue can't pay someone who found a bug impacting all their clients?
Frankly he could have just sold the vulnerability to the highest bidder
It's stuff like this that companies should REWARD people for finding.
https://www.cox.com/aboutus/policies/cox-security-responsibl...
There were four occurrences of the word "super" in an article of more than four thousand words. There is no need for the "etc." - you quoted all the occurrences, since "super curious" was used twice.
What sucks about this situation is when your ISP forces you to use their modem or router. For example, I have AT&T fiber and it does some kind of 802.1X authentication with certificates to connect to their network. If they didn't do this, I could just plug any arbitrary device into the ONT. There are/were workarounds to this but I don't want to go through all those hoops to get online. Instead, I ended up disabling everything on the AT&T router and have my own router that I keep up to date plugged into that. For all I know, the AT&T router could be hacked and I would never notice unless it adversely affected my service.
If you have the AT&T fiber with the ONT separate from the modem, it's really easy to bypass 802.1X. Plug an unmanaged switch in between the modem and the ONT; let the modem auth; disconnect the modem. You'll likely need to do that again if the ONT reboots, but at least for me, AT&T supplied a UPS for the ONT, so reboot frequency should be low.
Personally, I built up a Rube Goldberg setup of software and hardware with bypass NICs, so that if my firewall was off (or rebooting), traffic would flow through their modem, and when it was on, my firewall would take the traffic and selectively forward it through from the modem. But there's really no need for that when you can just use an unmanaged switch. I can find the code if you're interested (requires FreeBSD), but you sound more sensible than that ;)
That's a good idea, I do have an extra UPS/switch I can use for this. In the past when I was a bachelor and had more free time, I used to run my own FreeBSD server with pf and other services running in jails. Now that I am settled down, I just want to make things as idiot proof as possible in case there is an Internet issue at home and another family member needs to fix it.
The XGS-PON workaround that DannyBee mentioned looks promising though:
https://pon.wiki/guides/masquerade-as-the-att-inc-bgw320-500...
I probably could pay to upgrade my speed to 2Gbps and then downgrade it back to 1Gbps and keep the XGS-PON.
If you have a router running PfSense Plus* and at least 3 ports, Netgate actually has pretty detailed instructions for how to do the bypass with their layer 2 routing feature. It sounds a bit complicated, but I followed along exactly as it says and it just worked for me. Has been 100% reliable for almost 2 years, and I get significantly better speed (something like 10-20% vs the built-in "passthrough" mode on the gateway, iirc). Plus I managed to cut the suspicious DNS server the gateway tries to interject out of my network.
* Plus is the paid version - yeah, I know, I agree, I don't like what they did with the licensing changes, but that's a different story.
> https://docs.netgate.com/pfsense/en/latest/recipes/authbridg...
There's another method that doesn't require Plus called pfatt, but I'm not sure what the state of it is.
How does that bypass 802.1X? Are the 802.1X packets still responded to by the official modem? I was under the impression all packets were encrypted or signed with 802.1X, but I've never had to implement or test it, so I could be wrong.
The CPE AT&T router potentially getting hacked doesn't make much difference if you have your own router between your network and the AT&T network. Even if we removed the AT&T CPE router, you'd still be connecting to a black box you don't control that could be hacked or doing any number of inspections on your traffic.
It does matter, since it lets an attacker sit between your network and the internet. If that black box is a modem - yes, it could be hacked, but (maybe luckily for me) the providers I've used don't expose many services from the modem on the public interface, so it's much more difficult to compromise. You'd either have to come from the DOCSIS network or the client network.
Thank god most things use HTTPS these days.
If you really care you can configure a VPN directly on the router, so nothing leaves the network unencrypted.
Fortunately, Cox isn't one of these. Any sufficiently modern DOCSIS modem, appropriate to the speed of service you subscribe to, is accepted.
Unfortunately, my praise of Cox ends there. I've been having intermittent packet loss issues for 2 years, and there doesn't appear to be a support escalation path available to me, so I can't reach anyone that will understand the data I've captured indicating certain nodes are (probably) oversubscribed.
Fwiw: the hoops are automated these days if you are on xgspon.
It's "plug in sfp+, upload firmware using web interface, enter equipment serial number"
You can even skip step 2 depending on the sfp stick you use.
The 802.1x state is not actually verified server side. The standard says modems should not pass traffic when 802.1x is required but not done. Most do anyway or can be changed to do so. AT&T side does not verify, and always passes traffic. That is what is happening under the covers.
It was mentioned by a sibling, but there are ways to connect without using one of AT&T's gateway devices. Different methods are catalogued on https://pon.wiki/
Like someone else mentioned, at some level you need to rely on your ISP and it is also a good idea to have a router in between anyway.
I would like to bypass the BGW320 because not only is it a large, power-hungry box, it also requires me to jump through hoops to get IPv6 working with VLANs. I need to either use multiple physical links (simulating multiple devices) or simulate that using a VRRP hack; otherwise AT&T will not give out multiple ranges at all (and will not care about what I request). Under Comcast I didn't have to do any of that - I'd just carve out smaller IPv6 ranges, as many as needed.
That's why I'm not an AT&T customer. Spectrum lets me bring my own hardware, and they're the only other option in my area, so Spectrum gets my business. Plain and simple. Unfortunately, not everyone has the palatable solution that I have.
Spectrum remote manages your hardware even if you bring your own modem. This nearly entirely consists of deploying firmware updates once a decade, but they can also command other things like modem reboots.
Great read, and fantastic investigation. Also nice to see a story of some big corp not going nuclear on a security researcher.
I can't say for certain - and OP, if you're here, I'd love for you to validate this - but I'm not convinced requests to the local admin interface on these Nokia routers are properly authenticated. I say this because I was recently provisioned with one and found there were certain settings I could not change as a regular admin, and I was refused the super admin account by the ISP. It turns out you could just use the browser inspector to un-disable the fields and change the values yourself, and the API would happily accept them.
If this is the case, and an application is running inside your network, it wouldn't be hard to compromise the router that way - but it seems awfully specific!
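To illustrate the pattern being described, here's a rough sketch - the endpoint path, field name, and token handling are hypothetical placeholders, not the actual Nokia firmware API:

    // The web UI greys this setting out for the regular admin account, but the
    // change request itself is only "protected" by the client-side disabled flag.
    const adminSessionToken = "...session token from the ordinary admin login...";

    const resp = await fetch("http://192.168.1.1/api/config/management", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${adminSessionToken}`,
      },
      // A field the UI only lets the ISP's "super admin" touch.
      body: JSON.stringify({ remoteManagementEnabled: false }),
    });

    console.log(resp.status); // accepted anyway if nothing re-checks the role server-side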
> Cox is the largest private broadband provider in the United States, the third-largest cable television provider, and the seventh largest telephone carrier in the country. They have millions of customers and are the most popular ISP in 10 states.
That suggested to me that we shouldn't have ISPs that are this big. Cox is clearly a juicy target, and a single vulnerability compromises, as the article's example shows, even FBI field offices.
> After reporting the vulnerability to Cox, they investigated if the specific vector had ever been maliciously exploited in the past and found no history of abuse
Feel like author should have written "...they claimed to have investigated...".
I think the author wrote it up factually. Readers can make their own inferences, but Cox did share with him that the service he exploited was only introduced in 2023. Which suggests the security team did do some investigating.
I'm sure* they don't keep raw request logs around for 3+ years. I know what next steps I'd recommend, but even if they undertook those, they're not sharing that in the ticket.
(just based on industry experience; no insider knowledge.)
> After reporting the vulnerability to Cox, they investigated if the specific vector had ever been maliciously exploited in the past and found no history of abuse
Would you trust a thing they say? It seems their whole network is swiss cheese.
Because they didn't have enough logging or auditing to start with, or no logs or audit data left since the hack.
This is why everything gets logged to an S3 bucket under an AWS account that has only write permissions, and three people are required to break into the account that can do anything else with that bucket. I don't know if that's what Cox has, but that's how I'd architect it to be able to claim there's no history of abuse.
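As a rough sketch of that kind of append-only log bucket (AWS CDK in TypeScript; the account ID and role name are placeholders, and this is one way to approximate the idea, not a claim about Cox's setup):

    import * as cdk from "aws-cdk-lib";
    import * as s3 from "aws-cdk-lib/aws-s3";
    import * as iam from "aws-cdk-lib/aws-iam";

    const app = new cdk.App();
    const stack = new cdk.Stack(app, "AuditLogStack");

    // Versioned bucket so overwrites can't silently destroy earlier log objects.
    const logs = new s3.Bucket(stack, "AuditLogs", {
      versioned: true,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
    });

    // The log-shipping role (placeholder ARN) may only append new objects -
    // no read, list, or delete.
    logs.addToResourcePolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        principals: [new iam.ArnPrincipal("arn:aws:iam::111111111111:role/log-writer")],
        actions: ["s3:PutObject"],
        resources: [logs.arnForObjects("*")],
      })
    );

The "three people to break in" part would live in the account's IAM and organization controls rather than in the bucket policy itself.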
That's how it should be architected, but the article shows that Cox's network gives no thought to security, so that's unlikely to be how it is actually architected. Even if Cox's answer is correct to the best of their knowledge, we can't rule out that attackers are inside the network wiping out their logs.
You can't just refuse to participate, especially if you're the one who started the whole conversation. At some point you say "this is what I have and it's better than before."
That was my first thought as well - that they didn't even find the original attack vector. But comments above this suggest something even worse: that attackers are inside Cox's network actively wiping out the logs.
Many routers require manual firmware updates. GL.iNet routers had several RCE (remote code execution) vulnerabilities within the last 6 months. I advise you to have a quick look at your own router to ensure it's not hacked, and possibly upgrade the firmware.
As a typical user the noticeable symptoms for me were:
- internet speed noticeably slows down
- WiFi signal drops and personal devices either don't see it, or struggle to connect. At the same time the router is still connected to the internet
- router's internal admin page (192.168.8.1) stopped responding
I imagine many users haven't updated their routers and thus may be hacked. In my case the hacker installed the Pawns app from IPRoyal, which turns the router into a proxy server and lets the hacker and IPRoyal make money. The hacker also stole system logs containing information about who uses the device and when, and whether any NAS is attached. They also had a reverse shell.
Solution:
1. Upgrade firmware to ensure these vulnerabilities are patched.
2. Then wipe the router to remove the actual malware.
3. Then disable SSH access, e.g. for GL.iNet routers that's possible within the Luci dashboard.
4. Afterwards disable remote access to the router, e.g. by turning Dynamic DNS off in GL.iNet. If remote access is needed, consider Cloudflare Tunnel or Zero Trust or similar. There is also GoodCloud, ZeroTier, Tailscale, etc. I am not too sure what they all do and which one would be suitable for protected access remotely. If anyone has advice, I would appreciate a comment.
Consider avoiding GL.iNet routers. They do not follow the principle of least privilege (PoLP) - the router runs processes as the root user by default. SSH is also enabled by default (with root access), so anyone can try to brute-force their way in (a 10-character password consisting of [0-9A-Z], and possibly more predictable than that). I set mine to only allow SSH keys rather than a password to prevent that. They are actually running their own flavor of OpenWrt, so upgrading from OpenWrt 21.02 to 23.05 is not possible at the moment.
Could also be the neighbours and their big microwave oven :)
From what I can gather from the post, the specific attack vector - "retry unauthorized requests until they succeed" - is very easy to spot in logs. So even the most basic log policy that records the path, IP, and status code is enough (i.e. the default in most web servers and frameworks).
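As a rough illustration (the log format, field positions, and threshold below are made up for the example, not taken from any particular server), detecting that pattern can be as simple as:

    import { readFileSync } from "node:fs";

    // Assume each access-log line looks like: "<ip> <path> <status>"
    const lines = readFileSync("access.log", "utf8").trim().split("\n");

    const priorDenials = new Map<string, number>(); // "ip path" -> count of 401/403s seen
    for (const line of lines) {
      const [ip, path, status] = line.split(" ");
      const key = `${ip} ${path}`;
      if (status === "401" || status === "403") {
        priorDenials.set(key, (priorDenials.get(key) ?? 0) + 1);
      } else if (status === "200" && (priorDenials.get(key) ?? 0) >= 5) {
        // Same client was denied on this path repeatedly, then suddenly got through.
        console.log(`suspicious: ${ip} succeeded on ${path} after ${priorDenials.get(key)} denials`);
      }
    }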
I mean, if you think about it from Cox's point of view — why would you disclose to someone outside the company if there had been history of abuse? Why would you disclose anything at all in fact?
Discovered this in a vendor's API. They registered the current-user provider as a singleton rather than per-request, so periodically you could ride on the coat-tails of an authenticated user.
This is ridiculously easy to do in scripting languages like JavaScript:
    // Each handler needs the caller's token, passed explicitly:
    function foo(token: string) {}
    function bar(token: string) {}
    function baz(token: string) {}

    // hmm, threading this through every call is annoying...
    // ...so the token gets hoisted into module-level state,
    // written by whichever request happened to arrive last:
    let token: string | undefined;
    app.get("/", (req) => { token = req.headers["token"] as string; });

    // ...and the parameter quietly disappears:
    function foo() {}
It is even possible to do it by "accident" with only subtly more complicated code! I constantly see secrets leak to the frontend because a company is bundling their backend and frontend together and using their frontend as a proxy. This lack of separation of concerns leads to a very easy exploit:
If I'm using, say, Next.js, and I want access to the request throughout the frontend, I should use context. Next even provides this context for you (though honestly even this is really really scary), but before my code was isomorphic I could just assign it to a variable and access that.
As for the scariness of the Next-provided global context: at least Node now has AsyncLocalStorage, which properly manages scoping, but plenty of legacy...
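A minimal sketch of the difference, assuming a plain Node process (names are illustrative):

    import { AsyncLocalStorage } from "node:async_hooks";

    // The dangerous pattern: one module-level slot shared by every in-flight request.
    let currentUser: string | undefined;

    // The scoped pattern: each request's async call chain sees only its own store.
    const requestCtx = new AsyncLocalStorage<{ user: string }>();

    async function handle(user: string) {
      currentUser = user; // last writer wins, across all concurrent users
      await requestCtx.run({ user }, async () => {
        await new Promise((r) => setTimeout(r, Math.random() * 10)); // simulate I/O
        // requestCtx.getStore() is always *this* request's user; currentUser may not be.
        console.log(`ctx=${requestCtx.getStore()?.user} module=${currentUser}`);
      });
    }

    // Two concurrent "requests" make the mixup visible.
    void Promise.all([handle("alice"), handle("bob")]);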
The entire ecosystem is awful.
Out of my distrust of bundlers, I'm now fuzzing for auth issues in CI: hitting the server 10k times as fast as possible as two different users and ensuring that there is no mixup. I'm also scanning the client bundle for secrets. I haven't had an issue yet, but I've watched these things happen regularly, and I know that this sort of test is not common.
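A stripped-down sketch of that kind of CI check (the /whoami endpoint, port, and token env vars are placeholders for whatever your app exposes):

    // Hammer the API concurrently as two users and assert nobody ever sees
    // the other's identity. Assumes an endpoint that echoes back the caller.
    const users = [
      { name: "alice", token: process.env.ALICE_TOKEN! },
      { name: "bob", token: process.env.BOB_TOKEN! },
    ];

    async function check(user: { name: string; token: string }) {
      const res = await fetch("http://localhost:3000/whoami", {
        headers: { Authorization: `Bearer ${user.token}` },
      });
      const body = (await res.json()) as { name?: string };
      if (body.name !== user.name) {
        throw new Error(`auth mixup: asked as ${user.name}, got ${body.name}`);
      }
    }

    // 10k requests total, interleaved across both users, as fast as possible.
    const tasks = Array.from({ length: 10_000 }, (_, i) => check(users[i % 2]));
    await Promise.all(tasks);
    console.log("no mixups detected");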
I once saw a bug in a Django app which caused similar issues. Basically, the app often returned an HTTP No Content response for successful AJAX calls, so someone had DRYed that up by having a global NoContentResponse object in the file. The problem was that at some point in the Django middleware the user's session token got affixed to the response - effectively logging anyone from that point on in as another user.
Found a similar bug once. The API would correctly reject unauthenticated requests for exactly 10 minutes. Then, for exactly 1 minute, unauthenticated requests were allowed. The cycle repeated indefinitely. Would love to know what was going on on the backend...
I would, too. Not sure we will ever learn. Maybe a load balancer config that inadvertently included "test" backends which didn't check authorization?
In my experience this can be caused by a load balancer, for example one not being able to route (properly) to servers in the pool, or a difference in configuration/patch level between them.
That was my thought as well - it's easy to imagine this calling some internal APIs which aren't great, leading someone to toss a cache in front of a few calls but botching the logic so the cache is always populated even on errors. I've seen a few cases where people tried to bolt something like that onto Java/Python APIs designed to raise exceptions when validation failed, so it was especially easy to overlook the error path because the code was written to focus on the successful flow.
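To make that failure mode concrete, here is a minimal sketch of the kind of botched cache-plus-error-path logic being described (the upstream call, timings, and names are invented for illustration, not taken from the article):

    // Stand-in for an internal validation service call that throws on timeout.
    async function upstreamAuthCheck(token: string): Promise<boolean> {
      if (Math.random() < 0.01) throw new Error("validation service timeout");
      return token === "a-valid-token";
    }

    const cache = new Map<string, { allowed: boolean; expires: number }>();

    async function isAuthorized(token: string): Promise<boolean> {
      const hit = cache.get(token);
      if (hit && hit.expires > Date.now()) return hit.allowed;

      let allowed = false;
      try {
        allowed = await upstreamAuthCheck(token);
      } catch {
        allowed = true; // BUG: an upstream failure is read as "no objection"
      }
      // ...and the bogus result is cached for the next minute - exactly the kind
      // of intermittent window a "just retry until it works" attacker can hit.
      cache.set(token, { allowed, expires: Date.now() + 60_000 });
      return allowed;
    }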
It's happened with both of the G.hn powerline devices I've used; presumably they are all reskinned versions of the silicon vendor's firmware. You can send commands (including changing encryption keys and updating firmware) and sometimes they just go through.
Did they *pay* him? He kind of saved them, tipped them off to a complete compromise of their security infrastructure which was not trivial to discover. Looks like he got nothing in return for "doing the right thing". How insulting is that? What is their perception of someone walking in to their offices with this essential information? I guarantee his self image and their perception are very different. They see an overly caffeinated attention seeking "nerd" just handed them a 300k exploit in exchange for a gold star and then they ran like smeg to cover their asses and take all the credit internally. He feels like superman, goes home to his basement apt, microwaves some noodles and writes a blogpost. This is a perfect example why you never, never report a 0day.
Sam is a very famous security researcher, so I would be shocked if he wasn’t making upwards of $350,000 a year. These articles he writes make him a significant amount of money via reputation boost.