It took them 67 days to disclose that their premier product, which is used heavily in the industry, had been compromised. Does anyone know why disclosures like this seem to take longer and longer? I would think the adage "Bad news travels fast" would apply more often in these cases, if only to limit the scope of the damage.
I can't help thinking that part of it is that the Supreme Court has proactively and progressively been watering down the threat of class actions (in general, not specific to tech) since the early 2010s.
Sony & many others have proved pretty comprehensively that brand reputation isn't really impacted by breaches, even in high-profile, consumer-facing businesses. That trickles down to B2B: if your clients don't care, why should you?
That leaves legal risk as the only other motivating factor. If that's been effectively neutered, it doesn't make economic sense for companies to do due diligence with breaches.
As far as I'm aware, Yahoo were the last company to suffer any significant impact from the US legal system due to a breach.
Their customer base is enterprise, so the issue can be addressed in private channels. There's little to be gained from making this particular breach public, from their point of view. If anything, it's F5's customers who should advise their own downstream customers about the risks, where risks apply. Disclosure: I'm affected by this breach downstream at several sites. We have not been informed of risks by anyone, but we have been fighting fires where F5 was involved, though not necessarily blamed for anything.
But you are right: at F5's size and with its money, the incentives for public disclosure are not aligned in the public's favor. Damage control, in all its meanings, has lately taken priority over transparency.
Just to be clear, the attackers had access to the systems well before this date.
Sometimes when a company engages law enforcement, law enforcement can request that they not divulge that the company knows about the problem so that forensics can begin tracking the problem.
I won't speak to how often that happens or how competent law enforcement is, but it can happen.
My understanding is that the hackers had a copy of the source code for their app, so F5 had to patch all the outstanding CVEs they were sitting on, and the DOJ let them hold back disclosure until that was ready. It's not ideal, but at least there's something people can do right now. Feels like they could have been a bit quicker with some of the information, though.
In October 2025, F5 rotated its signing certificates and keys used to cryptographically sign F5-produced digital objects.
As a result:
BIG-IP and BIG-IQ TMOS product versions released in October 2025 and later are signed with new certificates and keys
BIG-IP and BIG-IQ TMOS product versions released in October 2025 and later contain new public keys used to verify certain F5-produced objects released in October 2025 and later
BIG-IP and BIG-IQ TMOS product versions released in October 2025 and later may not be able to verify certain F5-produced objects released prior to October 2025
BIG-IP and BIG-IQ TMOS product versions released prior to October 2025 may not be able to verify certain F5-produced objects released in October 2025 and later
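The compatibility break described above is exactly what you'd expect from a key rotation: a verifier only accepts signatures made with a key it trusts. A minimal sketch of that effect, using HMAC purely as a stand-in for F5's actual (asymmetric) code-signing scheme; the key names and artifact are made up:

```python
import hashlib
import hmac

def sign(artifact: bytes, key: bytes) -> bytes:
    """Produce a signature over the artifact with the given key."""
    return hmac.new(key, artifact, hashlib.sha256).digest()

def verify(artifact: bytes, sig: bytes, trusted_key: bytes) -> bool:
    """Check the signature against the single key this product version trusts."""
    return hmac.compare_digest(sign(artifact, trusted_key), sig)

old_key, new_key = b"pre-oct-2025-key", b"post-oct-2025-key"  # hypothetical
image = b"F5-produced object"

old_sig = sign(image, old_key)  # object released before the rotation
new_sig = sign(image, new_key)  # object released after the rotation

# A product version released before October 2025 only trusts the old key:
assert verify(image, old_sig, old_key)      # pre-rotation object verifies
assert not verify(image, new_sig, old_key)  # post-rotation object fails

# A product version released in October 2025 or later trusts the new key:
assert verify(image, new_sig, new_key)      # post-rotation object verifies
assert not verify(image, old_sig, new_key)  # pre-rotation object fails
```

In other words, each product version carries one trust anchor, so objects signed under the other era's key simply fail verification, which is why F5 lists both directions of incompatibility.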
I wonder if there's a bet to be made on future 8-K disclosures following quietly updated signing keys. A bet against F5 placed this morning would've only made 3.6%.
Not so much irony as it's a great vector to get inside an org. Security / monitoring agents that you deploy everywhere and don't suspect when you see they exfiltrate data, since you're expecting the telemetry anyway.
Every time some security compliance goon comes by telling me to install an agent on all of our servers to meet some security compliance requirement, I remind them that they are asking me to install a backdoor on our servers and handing the keys to a 3rd party.
F5 claims that the threat actors' access to the BIG-IP environment did not compromise its software supply chain or result in any suspicious code modifications.
Why would anyone have confidence in F5’s analysis?
I think it is more valuable for the attackers to have exfiltrated the code and analyzed it for vulnerabilities.
Adding malicious code to the BIG-IP software would require the attackers to persist in F5's systems undetected for a long time while they came to understand the current code. Not a zero percent chance, but pretty unlikely.
I mean, it depends where the attack happened. Working with large companies like this in CI/CD, there are a number of tools the source code gets checked through but that never feed back into the build system, and any of those could have been the source of the attack.
Not sure why I'm downvoted. Literally quoted from their incident page.
> We have confirmed that the threat actor exfiltrated files from our BIG-IP product development environment and engineering knowledge management platforms. These files contained some of our BIG-IP source code and information about undisclosed vulnerabilities we were working on in BIG-IP.
> We have no knowledge of undisclosed critical or remote code vulnerabilities, and we are not aware of active exploitation of any undisclosed F5 vulnerabilities.
I'm not sure whether item #2 in the linked advisory ("identify if the networked management interface is accessible directly from the public internet") means compromise is only likely in that situation, but either way, lots of remote workers are going to have some time for offline reflection in the next week, it seems.
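For anyone working through that advisory item, the basic reachability test is easy to script. A minimal sketch, assuming you run it from a vantage point outside your network; the hostname below is a placeholder, and BIG-IP's management GUI commonly listens on TCP 443, but check your own deployment:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures alike.
        return False

# Example (hypothetical management address, run from OUTSIDE your network):
#   port_reachable("mgmt.example.com", 443)
```

A refused or filtered port from one vantage point doesn't prove the interface is safe everywhere, of course; it's a first-pass check, not a substitute for reviewing the actual firewall and ACL configuration.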
completely missed your point
https://my.f5.com/manage/s/article/K000157005
https://www.cisa.gov/news-events/directives/ed-26-01-mitigat...
Is it just me?
It seems more likely that we do not KNOW how the access was used.
They claim the vulnerabilities discovered through the exfiltration were not used though.
https://my.f5.com/manage/s/article/K000154696
I don’t know why, but this sounds a bit like backdoors.