So: 1) a public service, 2) with no authentication, 3) and no encryption (HTTP only??), 4) that sent a token in every single response, 5) granting full admin access to every client's legal documents. This is like a law firm with an open back door, an open back window, and all the confidential legal papers strewn across the floor.
Imagine the potential impact. You're a single mother, fighting for custody of your kids. Your lawyer has some documentation of something that happened to you, that wasn't your fault, but would look bad if brought up in court. Suddenly you receive a phone call: a mysterious voice demanding $10,000, or they will send the documents to the opposition. The two of you have never met; someone just found a trove of documents behind an open back door and wanted to make a quick buck.
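For contrast, the boring, safe shape of that endpoint keeps the upstream credential on the server and never echoes it to the browser. Here is a rough Flask-style sketch, with every route, URL, and environment variable invented for illustration (this is not Filevine's actual stack):

```python
import os

import requests
from flask import Flask, abort, jsonify, request, session

app = Flask(__name__)
app.secret_key = os.environ.get("SESSION_SECRET", "dev-only")  # signs session cookies

# The upstream credential lives only in the server environment.
# It is never serialized into a response body.
UPSTREAM_TOKEN = os.environ.get("DOC_STORE_TOKEN", "")   # hypothetical credential
UPSTREAM_SEARCH = "https://docstore.example.com/search"  # hypothetical upstream API

@app.route("/api/search")
def search():
    # 1) Authenticate the caller; the reported endpoint had no auth at all.
    user_id = session.get("user_id")
    if user_id is None:
        abort(401)

    # 2) Query the upstream store server-side, scoped to this user's matters.
    upstream = requests.get(
        UPSTREAM_SEARCH,
        params={"q": request.args.get("q", ""), "owner": user_id},
        headers={"Authorization": f"Bearer {UPSTREAM_TOKEN}"},
        timeout=10,
    )
    upstream.raise_for_status()

    # 3) Return only the fields the UI needs. Never echo the upstream payload
    #    wholesale; that is how a bearer token ends up in every response.
    hits = [{"id": h["id"], "title": h["title"]} for h in upstream.json().get("hits", [])]
    return jsonify({"hits": hits})
```

The exact stack doesn't matter; the point is that the upstream token is an input to the server, never part of its output.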
This is exactly what a software building code would address (if we had one!). Just like you can't open a new storefront in a new building without it being inspected, you should not be able to process millions of sensitive files without having your software's building inspected. The safety and privacy of all of us shouldn't be optional.
but google told me everyone can vibe code apps now and software engineers should count their days... it's almost as if there's more stuff we do than just write code...
I've seen a lot of job ads lately (Canva, for one) that mandate AI use or AI experience, and as an AI company, if they wanted that, I think they would have put it in the ad.
For the record I think I may be fine with the insincerity of selling AI but not using it!
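Yes, but adding these common sense considerations is actually something LLMs can already do reasonably well.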
Basically what happened in the Vastaamo case in Finland [1]. Except of course it wasn't individual phone calls – it was mass extortion of 30,000 people at once via email.
[1] https://en.wikipedia.org/wiki/Vastaamo_data_breach
If I remember correctly, the attacker got caught in such a silly way.
He wanted to demonstrate that he indeed had the private data, but he fucked up the tar command and the archive ended up having his username in the directory names, a username he used in other places on the internet.
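For anyone who hasn't been bitten by this: archiving a directory by its full path records the whole prefix, account name included, in every member name. A tiny illustration with Python's tarfile standing in for the tar CLI (all paths here are made up):

```python
import os
import tarfile
import tempfile

# Fake home directory standing in for /home/<telltale-username>/
home = tempfile.mkdtemp(prefix="attacker123-")
dump = os.path.join(home, "patient_dump")
os.makedirs(dump)
open(os.path.join(dump, "records.txt"), "w").close()

# The slip: add the directory by its full path, and the prefix travels along.
with tarfile.open(os.path.join(home, "leak.tar.gz"), "w:gz") as tar:
    tar.add(dump)

with tarfile.open(os.path.join(home, "leak.tar.gz")) as tar:
    print(tar.getnames())   # member names still contain the "attacker123-..." prefix

# The non-leaky version: give the member an explicit, anonymous name.
with tarfile.open(os.path.join(home, "clean.tar.gz"), "w:gz") as tar:
    tar.add(dump, arcname="patient_dump")
```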
Plain HTTP also makes it very easy for law enforcement to sniff, if they decide to. That allows them to gain knowledge about cases. Like, they could be scanning it with their own AI tool for all we know. In a free country with proper law enforcement, this would neither be legal nor happening. But I am not sure the USA remains one, given the leader is a convicted felon with very dubious moral standards.
The problem here, however, is that they get away with their sloppiness as long as the security researcher who found it is a whitehat and the regular news doesn't pick it up. Once regular media pick this news up (and the local ones should), their name is tarnished and they may regret their sloppiness. Which is a good way to ensure they won't make the same mistake. After all, money talks.
All the big tech companies are in the news every week. Everybody knows how bad they are. Their names are tarnished, and yet everyone is still using their junk and they face zero repercussions when they fuck up. I don't think stories in the media would do them any harm.
This is HN. We understood exactly what “exposed … confidential files” meant before reading your overly dramatic scenario. As overdone as it is, it’s not even realistic: a single mother is likely small potatoes in comparison to deep-pocketed legal firms or large corporations.
The story is an example of the market self-correcting, but out comes this “building code” hobby horse anyway. All a software “building code” will do is ossify certain current practices, not even necessarily the best ones. It will tilt the playing field in favor of large existing players and to the disadvantage of innovative startups.
The model fails to apply in multiple ways. Building physical buildings is a much simpler, much less complex process with many fewer degrees of freedom than building software. Local city workers inspecting against the local municipality’s code at least have clear jurisdiction, because the building has a fixed physical location. Who will write the “building code”? Who will be the inspectors?
This is HN. Of all places, I’d expect to see this presented as an opportunity for new startups, not calls for slovenly bureaucracy and more coercion. The private market is perfectly capable of performing this function. E&O and professional liability insurers, if they don’t already, will soon be motivated by lawsuits to demand regular pentests.
The reported incident is a great reminder of caveat emptor.
> Building physical buildings is a much simpler, much less complex process with many fewer degrees of freedom than building software.
I don't... think this is true? Google has no problem shipping complex software projects, yet their London HQ is years behind schedule and vastly over budget.
Construction is really complex. These can be mega-projects with tens of thousands of people involved, where the consequences of failure are injury or even death. When software failure does have those consequences - things like aviation control software, or medical device firmware - engineers are held to a considerably higher standard.
> The private market is perfectly capable of performing this function
But it's totally not! There are so many examples in the construction space of private markets being wholly unable to perform quality control because there are financial incentives not to.
The reason building codes exist and are enforced by municipalities is because the private market is incapable of doing so.
The bigwigs at my company want to build out a document management suite. After talking to the VP of technology about requirements, I asked about security as well as what the regulatory requirements are, and all I got was a blank stare.
I used to think developers had to be supremely incompetent to end up with vulnerabilities like this.
But now I understand it’s not the developers who are incompetent…
Not only does the Peter principle generally show more incompetence the higher up a structure you move, but the outsized influence those positions have makes for a very noticeably higher level of “things fucked up by incompetence” coming from the C-suite compared to the rest of the structure.
There’s definitely plenty of incompetence regardless. But I’ve never seen a company where the incompetence was more noteworthy in the cog positions than “leadership”.
I've had the same. Ask them to come up with a ToS and they're like "we'll talk about that in an upcoming meeting". It's been a few years now, with nothing.
I'm always a bit surprised by how long it can take to triage and fix these pretty glaring security vulnerabilities. An October 27, 2025 disclosure and a November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed. Sure, the actual bug ended up being (what I imagine to be) a <1hr fix, plus the time for QA testing to make sure it didn't break anything.
Is the issue that people aren't checking their security@ email addresses? People are on holiday? These emails get so much spam it's really hard to separate the noise from the legit signal? I'm genuinely curious.
In my experience, it comes down to project management and organizational structure problems.
Companies hire a "security team" and put them behind the security@ email, then decide they'll figure out how to handle issues later.
When an issue comes in, the security team tries to forward the security issue to the team that owns the project so it can be fixed. This is where complicated org charts and difficult incentive structures can get in the way.
Determining which team actually owns the code containing the bug can be very hard, depending on the company. Many security team people I've worked with were smart, but not software developers by trade. So they start trying to navigate the org chart to figure out who can even fix the issue. This can take weeks of dead-ends and "I'm busy until Tuesday next week at 3:30PM, let's schedule a meeting then" delays.
Even when you find the right team, it can be difficult to get them to schedule the fix. In companies where roadmaps are planned 3 quarters in advance, everyone is focused on their KPIs and other acronyms, and bonuses are paid out according to your ticket velocity and on-time delivery stats (despite PMs telling you they're not), getting a team to pick up the bug and work on it is hard. Again, it can become a wall of "Our next 3 sprints are already full with urgent work from VP so-and-so, but we'll see if we can fit it in after that"
Then legal wants to be involved, too. So before you even respond to reports you have to flag the corporate counsel, who is already busy and doesn't want to hear it right now.
So half or more of the job of the security team becomes navigating corporate bureaucracy and slicing through all of the incentive structures to inject this urgent priority somewhere.
Smart companies recognize this problem and will empower security teams to prioritize urgent things. This can cause another problem where less-than-great security teams start wielding their power to force everyone to work on not-urgent issues that get spammed to the security@ email all day long demanding bug bounties, which burns everyone out. Good security teams will use good judgment, though.
Oh man this is so true. In this sort of org, getting something fixed out-of-band takes a huge political effort (even a critical issue like having your client database exposed to the world).
> Many security team people I've worked with were smart, but not software developers by trade.
A lot are people who cannot code at all and cannot administer anything; they just fill in tables and check boxes, maybe from some automated suite.
They don't know what HTTP and HTTPS are, because they are just paper pushers. That is far from real security, more like security in name only.
And they joined the field because it pays well.
A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.” Security fixes are often a one-hour patch wrapped in two weeks of internal routing, approvals, and “who even owns this code?” archaeology. Holiday schedules and spam filters don’t help, but organizational entropy is usually the real culprit.
> A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.”
At my past employers it was "The VP of such-and-such said we need to ship this feature as our top priority, no exceptions"
I once had a whole sector of a fintech go down because one DevOps person ignored daily warning emails, for three months, that an API key was about to expire and needed to be reset.
And of course nobody remembered the setup, and the logging was only accessible by that same person, so figuring it out also took weeks.
It could also be someone "practicing good time management."
They have a specific time of day when they check their email, they give only 30 minutes to it, and they check emails from most recent, down.
The email comes in two hours earlier, and by the time they check their inbox it's been buried under 50 spams and near-spams, each of which needs to be checked, so they run out of their 30 minutes before they get to it. The next day, by email-check time, another 400 spams have been thrown on top.
Think I'm kidding?
Many folks that have worked for large companies (or bureaucracies) have seen exactly this.
security@ emails do get a lot of spam. It doesn't get talked about very much unless you're monitoring one yourself, but there's a fairly constant stream of people begging for bug bounty money for things like the Secure flag not being set on a cookie.
That said, in my experience this spam is still a few emails a day at the most, I don't think there's any excuse for not immediately patching something like that. I guess maybe someone's on holiday like you said.
There is so much spam from random people about meaningless issues in our docs. AI has made the problem worse. Determining the meaningful from the meaningless is a full time job.
My favorite one is the "We've identified a security hole in your website"... and I always respond quickly that my website is statically generated, with nothing dynamic, and hosted immutably on Cloudflare Pages. For some odd reason, I never hear back from them.
Well, we have 600 people in the global response center I work at, and the priority issue count is currently 26,000. That means each of those is serious enough that it's been assigned to someone. There are tens of thousands of unassigned issues because the triage teams are swamped. People don't realize that as systems get more complex, issues only increase; they never decrease. And the chimp troupe's response has always been a story: we can handle it.
The security@ inbox has so much junk these days, with people reporting that if you paste alert('hacked') into devtools then the website is "hacked"!
I reckon only 1% of reports are valid.
LLMs can now make a plausible-looking exploit report ('there is a use-after-free bug in your server-side implementation of X library which allows shell access to your server if you time these two API calls correctly'), but the LLM has made the whole thing up. That can easily waste hours of an expert's time on a total falsehood.
I can completely see why some companies decide it'll be an office-hours-only task to go through all the reports every day.
My favorite was "we can trigger your website to initiate a connection to the server we control". They were running their own mail servers and were creating a new accounts on our website. Of course someone needs to initiate a TCP connection to deliver an email message!
Of course, this could be a real vulnerability if it disclosed the real server IP behind Cloudflare. That was not the case; we were sending via an AWS email gateway.
Not every organization prioritizes being able to ship a code change at the drop of a hat. This often requires organizational dedication to heavy automated testing and CI, which small companies often aren't set up for.
I can't believe that any company takes a month to ship something. Even if they don't have CI, surely they'd rather break the app (maybe even completely) than risk having all their legal documents exfiltrated.
> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed
I have unfortunately seen way worse. If it will take more than an hour and the wrong people are in charge of the money, you can go a pretty long time with glaring vulnerabilities.
I call that one of the worrisome outcomes of "Marketing-Driven Development", where the business people don't let you do technical-debt "Stories" because you REALLY need to do work that justifies their existence in the project.
Another aspect to consider: when you reduce the permissions anything has (like the returned token here), you risk breaking something.
In a complex system it can be very hard to understand what will break, if anything. In a less complex system, it can still be hard to understand if the person who knows the security model very well isn't available.
> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed
There is always the simple answer: these are lawyers, so they are probably scrambling internally to write a response that covers themselves legally, while also trying to figure out how fucked they are.
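1 week is surprisingly not that slow.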
I'm a bit conflicted about what responsible disclosure should be, but in many cases it seems like these conditions hold:
1) the hack is straightforward to do;
2) it can do a lot of damage (get PII or other confidential info in most cases);
3) downtime of the service wouldn't hurt anyone, especially if we compare it to the risk of the damage.
But instead of insisting on immediately shutting down the affected service, we give companies weeks or months to fix the issue, notifying no one in the process, while they continue with business as usual.
I've submitted 3 very easy exploits to 3 different companies in the past year and, thankfully, they fixed them in about a week every time. Yet, the exploits were trivial (as I'm not good enough to find the hard ones, I admit). Mostly IDORs, like changing id=123456 to id=1 all the way up to id=123455 and seeing a lot of medical data that doesn't belong to me. All 3 cases were medical labs because I had to have some tests done and wanted to see how secure my data was.
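The missing piece in every one of those was the same: a server-side ownership check. A minimal Flask-style sketch (route, store, and field names are made up) of the check whose absence turns sequential IDs into a public index:

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"   # illustration only

# Stand-in for a database of lab results keyed by sequential id.
RESULTS = {
    1: {"patient_id": 42, "value": "..."},
    2: {"patient_id": 43, "value": "..."},
}

@app.route("/results/<int:result_id>")
def get_result(result_id):
    row = RESULTS.get(result_id)
    if row is None:
        abort(404)
    # The step the vulnerable labs skipped: does this record belong to the
    # logged-in patient? Without it, id=1..123455 is a free-for-all.
    if row["patient_id"] != session.get("patient_id"):
        abort(403)
    return jsonify(row)
```

Non-guessable IDs (UUIDs) make enumeration harder, but they are no substitute for the check itself.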
Sadly, in all 3 cases I had to send a follow-up e-mail after ~1 week, saying that I'll make the exploit public if they don't fix it ASAP. What happened was, again, in all 3 cases, the exploit was fixed within 1-2 days.
If I'd given them a month, I feel they would've fixed the issue after a month. If I'd given them a year, after a year.
And it's not like there aren't 10 different labs in my city. It's not like online access to results is critical, either. You can get a printed result or call them to write them down. Yes, it would be tedious, but more secure.
So I should've said from the beginning something like:
> I found this trivial exploit that gives me access to medical data of thousands of people. If you don't want it public, shut down your online service until you fix it, because it's highly likely someone else figured it out before me. If you don't, I'll make it public and ruin your reputation.
Now, would I make it public if they don't fix it within a few days? Probably not, but I'm not sure. But shutting down their service until the fix is in seems important. If it was some hard-to-do hack chaining several exploits, including a 0-day, it would be likely that I'd be the first one to find it and it wouldn't be found for a while by someone else afterwards. But ID enumerations? Come on.
So does the standard "responsible disclosure", at least in the scenario I've given (easy to do; not critical if the service is shut down), help the affected parties (the customers) or the businesses? Why should I care about a company worth $X losing $Y if it's their fault?
I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.
As to the other comments talking about how spammed their security@ mail is - that's the cost of doing business. It doesn't seem like a valid excuse to me. Security isn't one of hundreds of random things a business should care about. It's one of the most important ones. So just assign more people to review your mail. If you can't, why are you handling people's PII?
I understand you think you are doing the right thing, but be aware that by shutting down a medical communication service there's a non-trivial chance someone will die because of slower test results.
Your responsibility is responsible disclosure.
Their responsibility is how to handle it. Don't try to decide that for them.
> I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.
What you're describing is likely a crime. The sad reality is most businesses don't view protection of customers' data as a sacred duty, but simply another of the innumerable risks to be managed in the course of doing business. If they can say "we were working on fixing it!" their asses are likely covered even if someone does leverage the exploit first—and worst-case, they'll just pay a fine and move on.
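Also … shows you what a SOC 2 audit is worth: https://www.filevine.com/news/filevine-proves-industry-leade...
Even the most basic pentest would have caught this.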
SOC2 is mainly to check boxes, and forces you to think about a few things. There’s no real / actual audit, and in my experience the pen tests are very much a money grab. You’re paying way too much money for some “pentesting” automated suite to run.
The auditors themselves pretty much only care that you answered all the questions; they don't really care what the answers are and absolutely aren't going to dig any deeper.
(I'm responsible for the SOC2 audits at our firm)
When I worked for a consulting firm some years back I randomly got put on a project that dealt with payment information. I had never had to deal with payment information before so I was a bit nervous about being compliant. I was pointed to SOC2 compliance which sounded scary. Much to my relief (and surprise), the SOC2 questionnaire was literally just what amounted to a survey monkey form. I answered as truthfully as I could and at the end it just said "congrats you're compliant!" or something to that effect.
I asked my manager if that's all that was required and he said yes, just make sure you do it again next year. I spent the rest of my time worrying that we missed something. I genuinely didn't believe him until your comment.
Edit: missing sentence.
SOC2 and most other certifications are akin to the TSA: security theater. After seeing the infosec space from the inside, I can only say that it blows my mind how abhorrent the security space is. Prod DB creds in code? A-OK. Not using some stupid vendor's "pen testing" software on each MR? Blasphemy.
Unless I'm missing something, they replied stating they would look into it, and then it's totally vague when they patched, with Alex apparently randomly testing later and telling them in a "follow-up" that it was fixed.
I don't at all get why there is a paragraph thanking their communication if that is the case.
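I wouldn't expect them to find any computer problems either to be honest.
They should have given you some money.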
The time to fix isn't really important, assuming that they took the system offline in the meantime... but we all know they didn't, because that would cost too much.
According to the timeline it took more than a week just for Filevine to respond saying they would review and fix the vulnerability. It was 24 days after initial disclosure when he confirmed the fix was in place.
If they have a billion dollar valuation, this fairly basic (and irresponsible) vulnerability could have cost them a billion dollars. If someone with malice had been in your shoes, in that industry, this probably wouldn't have been recoverable. Imagine a firm's entire client communications and discovery posted online.
They could have sold this to a ransomware group or affiliate for 5-6 figures, and then the ransomware group could have exfil'd the data and attempted to extort the company for millions.
Then if they didn't pay and the ransomware group leaked the info to the public, they'd likely have to spend millions on lawsuits and fines anyways.
They should have paid this dude 5-6 figures for this find. It's scenarios like this that lead people to sell these vulns on the gray/black market instead of traditional bug bounty whitehat routes.
I work for a finance firm and everyone is wondering why we can store reams of client data with SaaS Company X, but not upload a trust document or tax return to AI SaaS Company Y.
My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
This article demonstrates that, but it does sort of raise the question of why we should trust one but not the other when they both promise the same safeguards.
While the FileVine service is indeed a Legal AI tool, I don't see the connection between this particular blunder and AI itself. It sure seems like any company with an inexperienced development team and thoughtless security posture could build a system with the same issues.
Specifically, it does not appear that AI is invoked in any way at the search endpoint - it is clearly piping results from some Box API.
There is none. Filevine is not even an "AI" company. They are a pretty standard SaaS that has some AI features nowadays. But the hive mind needs its food, and AI bad as we all know.
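Point out one (1) "AI product" company that isn't described accurately by that sentence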
Because it's the Cloud and we're told the cloud is better and more secure.
In truth the company forced our hand by pricing us out of the on-premise solution and will do that again with the other on-premise we use, which is set to sunset in five years or so.
SaaS is now a "solved problem"; almost all vendors will try to get SOX/SOC2 compliance (and more for sensitive workloads). Although... its hard to see how these certifications would have prevented something like this :melting_face:.
> My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
The funny thing is that this exploit (from the OP) has nothing to do with AI and could be <insert any SaaS company> that integrates into another service.
Does SaaS X/Cloud offer IAM capabilities? Or going further, do they dogfood their own access via the identity and access policies? If so, and you construct your own access policy, you have relative peace of mind.
If SaaS Y just says "Give me your data and it will be secure", that's where it gets suspect.
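To make "construct your own access policy" concrete, here is a rough sketch of an AWS-style IAM policy that lets a vendor integration read one prefix of one bucket and nothing else (the bucket, prefix, and Sid are invented for illustration):

```python
import json

# Least-privilege grant for a hypothetical document-ingestion integration:
# it may read objects under one prefix of one bucket, and that is all.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadLegalMattersOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-firm-docs/legal-matters/*",
        }
    ],
}

print(json.dumps(policy, indent=2))  # attach via whatever IAM tooling you use
```

If that grant is all the vendor ever holds, a blunder on their side is bounded by what the policy allows; "give me your data and it will be secure" gives you no such ceiling.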
And nobody seems to pay attention to the fact that modern copiers cache copies on a local disk, and if the machines are leased and swapped out, the next party that takes possession has access to those copies if nobody bothered to address it.
The first thing that comes to my mind is SOC2, HIPAA, and the whole security theater.
I am one of the engineers who had to suffer through countless screenshots and forms to get these, because they show that you are compliant and safe, while the real, impactful things are ignored.
SemiAnalysis made this a base requirement for being ranked appropriately on their ClusterMAX report, telling me it is akin to FAA certifications, and then they got hacked themselves for not enforcing simple security controls.
https://jon4hotaisle.substack.com/i/180360455/anatomy-of-the...
It is crazy how this gets perpetuated in the industry as actually having security value, when in reality it is just a pay-to-play checkbox.
You have to start somewhere, though. Security theater sucks, and it's not like compliance is a silver bullet, but at least it's something. Having been through implementing standards compliance, I can say it did help the company in some areas. Was it perfect? Definitely not. Was it driven by financial goals? Absolutely. It did tighten up some weak spots, though.
If the options mainly consist of "trust me bro" vs "we can demonstrate that we put in some effort", the latter seems preferable, even if it's not perfect.
I'm less and less sure that when a billion-dollar company screws up this bad, the right thing to do is privately disclose it and let them fix it. This kind of thing just allows companies to go on taking people's money without facing the consequences of their mistakes.
It does. Most privacy laws are based on time-from-discovery. If they immediately sprung into action at the moment they were informed and remediated the issue, they're in compliance.
What would you suggest the right thing to do would be?
Edit: I agree with you that we shouldn't let companies like this get away with what amounts to a slap on the wrist. But everything else seems irresponsible as well.
I guess if I imagine the ideal world, it would be that you report it to the authorities and they impose penalties on the offender that are large enough that the company winds up significantly worse off than if they had just grown more slowly. In other words the punishment for moving fast and breaking things needs to be bad enough to outweigh the gains of doing so.
In the current world, I dunno. I guess it depends on what the company is. If it's something like a hedge fund or a fossil fuel company I think I'd be fine with some kind of wikileaks-like avenue for exposing it in such a way that it results in the company being totally destroyed.