Readit News
tuetuopay · 2 years ago
French here, working for another French CSP. We lived through the OVH incident and saw the whole aftermath.

OVH was held liable because of the data loss, not the service interruption. Data loss is irremediable, permanent, definitive. Some businesses were basically ruined by this incident because they had no data left to operate with. To add insult to injury, OVH sold offsite backups stored in a datacenter literally meters away. A service interruption? Well, shit happens, and it's handled by SLA contracts that both parties agree to. You don't ruin a business (read: close a company) over a few days of outage.

I doubt CrowdStrike will be held liable for much, at least by corporations: they cannot repay the damage done without closing their doors. The healthcare sector is another beast, but I think it will lead to more regulation for critical entities.

dathinab · 2 years ago
IMHO it would send really wrong signals if this doesn't end up with CrowdStrike closing their doors...

Like, if the largest outage in history was caused by you because a config parser failed, and as far as I can tell they didn't follow industry best practices for config handling/parsing and probably also didn't follow some best practices for kernel module programming, then honestly it would be really strange if you didn't have to declare bankruptcy from damage payments (which doesn't mean the software would be gone/unmaintained; there are a lot of ways to make sure that doesn't happen, e.g. MS would have an interest in buying Falcon anyway).
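To make "best practices for config parsing" concrete: the usual pattern is to validate before applying and fall back to a last-known-good config instead of crashing. A minimal sketch in Python (all names hypothetical, nothing to do with CrowdStrike's actual code):

```python
import json

def load_channel_file(raw: bytes, last_known_good: dict) -> dict:
    """Parse an update file defensively: any malformed input falls back to
    the previous good config instead of taking the host down."""
    try:
        config = json.loads(raw)
        # Validate the shape before applying; reject anything unexpected.
        if not isinstance(config, dict):
            raise ValueError("config root must be an object")
        rules = config.get("rules")
        if not isinstance(rules, list) or not all(isinstance(r, dict) for r in rules):
            raise ValueError("'rules' must be a list of objects")
        return config
    except (ValueError, UnicodeDecodeError):
        # Fail safe: keep running with the last config that worked.
        return last_known_good

good = {"rules": []}
load_channel_file(b"\x00\x00garbage", good)          # falls back to `good`
load_channel_file(b'{"rules": [{"id": 1}]}', good)   # parsed new config
```

The point isn't this particular schema; it's that a security agent in the boot path should treat its own update feed as untrusted input.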

627467 · 2 years ago
I understand that CS didn't draw much sympathy even before this happened (myself included), and it is easy to pinpoint systemic issues to a single failure point and hold it liable and financially responsible for all downstream failures.

But this only creates excuses for all the other responsible players in this systemic issue, or society at large. Just to pick one example: I keep reading comments on how profoundly healthcare providers were affected and that it led to loss of human life.

I understand that having "tech" involved in health or any other sector is important, but do we really want to build critical services that grind to a halt or take a huge efficiency hit when a single vendor fails? Aren't these service providers responsible for thinking about failure modes?

jb3689 · 2 years ago
I don't understand why there is so much attention on the deployment and testing side of the coin. Yes, better testing and rollout strategy would have prevented this specific failure. But these strategies aren't bulletproof and things go wrong. You need defense in depth, and some responsibility has to lie on the consumer side for that to happen, particularly for fundamentally human industries like transportation and healthcare. These industries should not be allowed to run any software like this: privileged and without controlled rollouts. I'm all for shaming CrowdStrike's lack of focus on reliability, which they deserve; however, there's a bigger issue here of trying to avoid or mitigate risky dependencies in the first place, which I hope we also get to explore.
JumpCrisscross · 2 years ago
> it would send really wrong signals if this doesn't end up with CrowdStrike closing their doors

I thought the same until I saw the damage estimates. They’re in the single-digit billions. That’s well below CrowdStrike’s market cap. Unless we’re going hard for retributive justice, liability should be enough.

saati · 2 years ago
They didn't follow testing or deployment best practices either.
fennecbutt · 2 years ago
So no responsibility falls on their clients for having a uniform architecture where all key systems use CS software for protection, updates are auto-applied, and there are no backup systems, etc.?

If AWS goes down in a region, yeah, it sucks, but we fail over. If AWS goes down worldwide, then it's like, well... sometimes that happens. But if I've built critical infra like electric utilities, airports, etc. with swathes of vulnerable points, then that's the real problem.

sam_goody · 2 years ago
As I understand it, anyone with BitLocker irretrievably lost all their data.

Logic dictates that the more critical the data, the more likely it is gone. An informal anecdotal survey says that a lot of users use BitLocker, which means a lot of data loss.

EDIT: I see that in many cases one can recover Bitlocker encrypted drives. I wonder how much real data loss there is.

altdataseller · 2 years ago
If a business closed down because of the OVH incident, how were the damages calculated? 1x annual revenue? Profit? 5x?
itunpredictable · 2 years ago
This headline is kind of misleading. It's actually someone's personal (educated) opinion on a blog, not a statement of fact. Should be something more like "I think CrowdStrike will be liable" or "CrowdStrike should be liable"
627467 · 2 years ago
the full headline (at this time at least) is more nuanced than seen here in hn: CrowdStrike will be liable for damages in France, based on the OVH precedent.
crtasm · 2 years ago
It's also in the URL. Submitter, please don't remove important parts of headlines.
jncfhnb · 2 years ago
Doesn’t really make it any less misleading. It is still just an opinion.
siva7 · 2 years ago
It's good to remind people that general liability waivers you often find with license agreements have no meaning outside of US jurisdiction if you're doing business in another jurisdiction.
GJim · 2 years ago
The number of US tech businesses that are surprised they need to obey employment and data protection laws when working in other jurisdictions, or think they can ignore them, is simply bonkers.
xxs · 2 years ago
Well, it'd be a lot easier if most US entities understood that the M/d/yy(yy) date format is rare elsewhere, or that defaulting to Frankenstein (Fahrenheit) degrees is just as awkward (even Microsoft resets their weather widget to F on a regular basis).

The root of the issue, not understanding local laws/culture, is very similar: being surrounded by a vast market/culture (US + Canada) dulls your senses to the rest of the globe.

KingOfCoders · 2 years ago
When I worked for a large US company, they insisted we not accept returns, and they were always astonished that Germany has a law mandating 14-day returns, no questions asked. They could not understand that this is the law in Germany.
classified · 2 years ago
It's not unusual in the US to assume the US is the only planet in the universe.
1970-01-01 · 2 years ago
Yes however it goes both ways. TPB was excellent in telling the US lawyers to f-off: https://web.archive.org/web/20110623123349/http://thepirateb...
mtkd · 2 years ago
It appears to be an increasingly clear risk to LLM vendors that the EU cares about where personal data came from, how and where it is stored, how accurate it is, and whether there is a mechanism for it to be removed.
giantg2 · 2 years ago
I wonder if there is a site that covers common software licenses and has liability maps by country, showing how much liability can be waived under local law.
lordnacho · 2 years ago
Surely there must be a gigantic number of claimants already talking to their lawyers about how to get compensation? Not just in France but across the planet?

I wonder how this kind of thing is organised, since there's all these jurisdictions.

lukan · 2 years ago
"I wonder how this kind of thing is organised, since there's all these jurisdictions."

In theory it's simple: CrowdStrike is doing business in state X, so compensation claims will be settled in court in state X. So lots of courts and lawyers all around the world will be quite busy with this case for some time.

sjamaan · 2 years ago
It's a B2B tool, which means it's quite likely the contract/license states that all disputes are to be settled in a court appointed by them. This is not valid for consumer disputes, but businesses are free to do what they want. Perhaps this will let them off the hook?

OVH is different in that it's actually a French company.

anonzzzies · 2 years ago
I cannot see how they will get past this... It's CIO snake oil to begin with, but this was not a simple mistake; it shows an entire lack of process and responsibility.
b112 · 2 years ago
And if judgements are found, assets local to the jurisdiction (including money being forwarded by banks, e.g. via credit cards or wire transfers) can be seized.

If that doesn't work, judgements can be registered in other courts for collection purposes.

Retr0id · 2 years ago
I'm not a lawyer, and I'm definitely not a French lawyer, but I don't think the OVH comparison is valid.

In the OVH case, their backup system (as a whole) failed. Many customers were left with 0 data, and per the article "the court ruled the OVH backup service was not operated to a reasonable standard and failed at its purpose".

Meanwhile CrowdStrike "just" crashed their customer's kernels, for a duration of about 1 hour (during which they were 100% safe from cyber attacks!). Any remaining delays getting systems back online were (in my view) due to customers not having good enough disaster recovery plans. There's certainly grounds to argue that CrowdStrike's software was "not to a reasonable standard", but the first-order impacts (a software crash) are of a very different magnitude to permanently losing all data in a literal ball of fire (as in the OVH case).

Software crashes all the time. For better or for worse, we treat software bugs as an inevitability in most industries (there are exceptions, of course). While software bugs are the "fault" of the software vendor, the job of mitigating the impacts thereof lies with the people deploying it. The only thing that makes the CrowdStrike case newsworthy, compared to all the other software crashes that happen on a daily basis, is that CrowdStrike's many customers had inserted their software into many critical pathways.

CrowdStrike sells a playing card, and customers collectively built a house with them.

(P.S. Don't treat this as a defense of CrowdStrike. I think their software sucks and was developed sloppily. I think they should face consequences for their sloppiness, I just don't think they will, under current legal frameworks. At best, maybe people will vote with their wallets, going forwards.)

vesinisa · 2 years ago
> for a duration of about 1 hour

Not even remotely correct.

Most computers affected by the fault needed physical remediation via a safe mode boot, because they were stuck in a reboot loop and unable to download a fix. The understanding is that in most cases, the fix needed to be applied by an IT technician dispatched to physically access the computer.

A week, or 168 hours, later, there are still many, many computers out there that remain bricked by this fault because it is so heinously difficult to fix.

balderdash · 2 years ago
For what it’s worth: I got the BSOD, and once I got the email from IT with the instructions, it took me about 20 minutes to apply the fix. Almost all of the affected employees at our company were able to easily apply a self-help fix.

I could imagine this was not the case if you had to physically access remote servers, or didn’t have access to BitLocker recovery keys.

Retr0id · 2 years ago
See the sentence I wrote just after that one.
fabian2k · 2 years ago
It didn't just crash; it crashed 100% of the computers running it at that time, and in a way that required physical intervention to fix. So I think you can consider this quite different from regular crashes, because recovery is much more difficult and because it affected a lot of computers simultaneously.

On top of that there are companies that had failures of their own in their recovery procedures. But even with good procedures this can be a significant outage because it is not trivially reverted and would typically affect many configurations that are redundant for many other failures.

Retr0id · 2 years ago
If the uptime of 100% of your computers depends on a single vendor not writing software with bugs in it, you have a problem.
jeffrallen · 2 years ago
It should not be your vendor that triggers your disaster recovery plans. It should be you know, a disaster, that does.
dotancohen · 2 years ago
Can someone explain to me why the protections that Falcon provides are not provided by the OS itself? I am not completely naive, I've secured quite a few critical Linux servers, but with Windows it seems the same clear security roles do not exist. Contrast with Red Hat or even Canonical, where it feels like I'm (correctly) fighting the security of the system to get it into a state where my users can use my applications.
Patient0 · 2 years ago
I read an article stating that Microsoft lost an antitrust case in the EU, in which the EU mandated that they allow third-party competitors to provide this service. Microsoft has its own solution called Windows Defender.

https://www.theregister.com/2024/07/22/windows_crowdstrike_k...

wkat4242 · 2 years ago
It's more nuanced than that. They have to provide the same APIs to third party security vendors that they use themselves.

They can come up with something more shielded as Apple has done, they just have to eat their own dog food and can't make an exception for defender. That's all.

Blaming the EU here is pure spin.

simiones · 2 years ago
Falcon provides many levels of protection (in principle; in practice, given the extreme incompetence demonstrated in this case, I doubt they do much more than sell snake oil), some of which have OS-native alternatives, some of which do not, and most of which Linux definitely doesn't have built in. For example, the Linux kernel team doesn't maintain a DB of known malware signatures that the kernel, init system, or shell checks any new software component against; Falcon does this. Another example: neither Linux nor any common Linux userspace natively integrates with a fleet management system to check whether the current user is allowed to run a particular piece of software. And there are many other similar questions.

Finally, even when the OS does natively provide services like these (Enterprise versions of Windows do provide all the features I mentioned above), it's perfectly reasonable to prefer a different vendor for those solutions. Maybe people trust CrowdStrike's malware signature lists more than they do Microsoft's, for example: a good reason to buy CrowdStrike instead of using Windows Defender.

I'm not trying to defend CrowdStrike or Windows here. But I think it's obvious that there are many features that fall under the umbrella of security that you wouldn't want to build into the OS itself, and even when a version of them exists built-in, that a company may wish to source from a different vendor.

tyingq · 2 years ago
Windows does have Defender, which does some amount of tracking signatures and heuristics of various types of malware.

It has not, however, proved enough to fend off real-world problems like ransomware.

Hence, the market for 3rd party solutions that are more aggressive. And to keep up with real world threats, they have to update often. And have to run at high privilege levels. So now you have the situation where those third-party solutions have the ability to create a bsod and/or a boot loop. Which should mean that they have a very well thought out way to roll out updates.
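And a "very well thought out way to roll out updates" doesn't have to be complicated; the core of a staged rollout is a deterministic canary scheme like this (hypothetical, vendor-agnostic sketch):

```python
import hashlib

def in_wave(host_id: str, percent: int) -> bool:
    """Deterministically bucket a host into 0-99 via a stable hash; the host
    receives the update once the rollout percentage exceeds its bucket."""
    bucket = int(hashlib.sha256(host_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def staged_rollout(hosts, waves=(1, 10, 50, 100)):
    """Yield (percent, hosts updated so far); in practice you pause between
    waves and watch crash telemetry before widening the blast radius."""
    for percent in waves:
        yield percent, [h for h in hosts if in_wave(h, percent)]
```

Because the bucketing is a pure function of the host ID, each wave is a strict superset of the previous one, so a bad update can only ever brick the canary fraction before someone pulls the plug.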

varispeed · 2 years ago
Very much every 3rd party anti-virus software I tried (and paid for) caused data loss or other problems (a few catastrophic) in the long run. One product didn't even stop a virus getting in.

Since then I just use Defender and never had any trouble or a virus or ransomware. Only issue is that sometimes the antimalware service takes a lot of CPU.

dagaci · 2 years ago
Microsoft has a high share in this area, but enterprise security is generally a very competitive market. Microsoft may even move into the #1 position as fallout from this debacle, because the market-share gap between them and #1 CS is very small (that does not mean people actually buy more MS, btw... if that needs to be said ;)

This is not necessarily a good thing for MSFT, as it will 100% trigger regulator rage in the EU.

https://www.statista.com/statistics/917405/worldwide-enterpr...

jpambrun · 2 years ago
I read that a lot, but nobody ever provides supporting evidence. To me, this sounds a bit like third-party security marketing being really effective.
papichulo2023 · 2 years ago
But ransomware is mostly targeted at servers, yet many of the affected devices were clients.
LikesPwsh · 2 years ago
You can do dangerous actions in user space without any need for escalated permissions.

E.g. downloading a file and running the contents as code, or uploading/encrypting all files you have access to.

Crowdstrike and Defender handle those possible but suspicious actions.

blablabla123 · 2 years ago
CrowdStrike Falcon EDR is in some sense an AV on steroids, and CrowdStrike does more than just EDR. While they are obviously deployed on lots of systems, running on less than 1% of Windows systems means it still operates in an absolute niche. Most people didn't know CS; even fewer know any of the competitors.

I think one massive difference between CS and AV is also that you don't expect a human in the loop, because it would be too expensive. Nor would it be feasible for consumer software, because of privacy.

Also, even within this small niche, the solutions are very heterogeneous and make little sense for single boxes; in fact, they may even be designed to run at the network level.

commandersaki · 2 years ago
How do you actively detect a malware agent running in user space using stealth, or in the kernel? Authors of such malware are fully aware of Linux hardening like SELinux/AppArmor and work around it.
amluto · 2 years ago
> How do you actively detect a malware agent running in user space using stealth or a kernel.

You start with correct design.

The system has a root of trust (ideally you skip the insane level of complexity that is Secure Boot + TPM and use something simple, testable, and verifiable — this isn’t actually that hard). Only authorized images will boot, and, more importantly, nothing else on the network trusts the machine until it proves it’s running the right image.

Then you make the image immutable. Want to edit a system file? You can’t. Maybe in developer mode you can edit an overlay.

All configuration is stored in a designated place, and that configuration is minimized. A stock image from the distro vendor has zero configuration, so there is no incomprehensible soup in /etc to audit. Configuration is also attested.

Persistent data is separate from configuration. All persistent data is considered suspect. Any bug that allows malicious persistent data to compromise anything is a blocker, including corrupt filesystem metadata.

A root-of-trust attestation has a limited lifetime. The system forcibly re-verifies periodically. This either means rebooting or doing a runtime “dynamic root of trust” attestation. The latter is complex.

Complicated messes like kernel “lockdown” and the stock Secure Boot signatures have no place. Usermode root and the kernel are approximately equally trusted. SELinux is barely necessary, if at all, unless the actual user code wants it to control access to persistent data. But there are simpler, better schemes that are easier to reason about.

Sadly the industry doesn’t think this way. I’m regularly surprised that Apple hasn’t gone in this direction more aggressively than they are with their MacOS products.
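To make the attest-and-expire idea above concrete, here's a toy version of the control flow (hypothetical sketch; a real system would anchor the measurement in a TPM or equivalent, this only shows the logic):

```python
import hashlib

# Hashes of images the fleet is allowed to run (hypothetical values).
TRUSTED_IMAGES = {hashlib.sha256(b"golden-image-v42").hexdigest()}
ATTESTATION_TTL = 3600  # seconds before a host must re-attest

def attest(image_bytes: bytes, now: float) -> dict:
    """Produce an attestation record for a booted image."""
    return {"image_hash": hashlib.sha256(image_bytes).hexdigest(), "issued_at": now}

def is_trusted(attestation: dict, now: float) -> bool:
    """Peers trust a host only if it proved a known-good image recently."""
    fresh = (now - attestation["issued_at"]) < ATTESTATION_TTL
    return fresh and attestation["image_hash"] in TRUSTED_IMAGES

a = attest(b"golden-image-v42", now=0.0)
is_trusted(a, now=100.0)   # True: known image, fresh attestation
is_trusted(a, now=7200.0)  # False: expired, host must re-verify
```

The immutable-image part is what makes this work: if the whole image is one measurable artifact, "is this host compromised?" reduces to a hash lookup plus a freshness check.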

worthless-trash · 2 years ago
If you have good examples, I'd love to see them; a writeup of the techniques used, even more so. My findings so far in the wild (and on my honeypot) are really amateur-level garbage.

I spent a weekend and abused a C&C infrastructure server to fix the clients and remove the flaw and malware. I saw very little sophistication there.

Deleted Comment

fulafel · 2 years ago
This is a much harder problem than prevention, which is what the OS should be doing.
worthless-trash · 2 years ago
> How do you actively detect a malware agent running in user space using stealth

Depending on how advanced the attacker is: check that the executing binary maps back to the expected name and location on disk. Make sure the executable and libraries used at runtime are the correct ones, matching hashes of known-good copies.

Ensure the process tree has an expected structure, i.e. "bash" isn't starting a process called apache.

Make sure the SELinux policy is correct for the process that is running. (I have no idea about AppArmor.)

Check whether it's linking to the expected binaries, and that it's not using 'hidden' files (starting with a dot, or in a directory starting with a dot) or deleted files.

Confirm that the process is opening the sockets and files that you expect it to (i.e. apache shouldn't open files outside its configured directories).

The process should not be making outgoing socket connections unless it is a client.

It should not be running with capabilities(7) that it does not require. It should not be executing from a setuid binary.

Check the process name: quite often attackers rename the running executable, so you'll see /proc/pid/cmdline with a bunch of null bytes at the end.

Some malware has 'anti-debugging' tactics, i.e. they have traced themselves to prevent you from tracing them; you can find this in one of the lines in /proc/pid/status, IIRC.

There are more, but that's the few off the top of my head.
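The process-tree check in particular is easy to mechanize; a minimal sketch over (pid, ppid, name) tuples with a hypothetical parent/child policy:

```python
def suspicious_children(processes, policy):
    """processes: iterable of (pid, ppid, name) tuples, e.g. scraped from /proc.
    policy: maps a parent name to the set of child names it is expected to spawn.
    Returns (parent, child) pairs that violate the policy."""
    by_pid = {pid: name for pid, ppid, name in processes}
    flagged = []
    for pid, ppid, name in processes:
        parent = by_pid.get(ppid)
        if parent in policy and name not in policy[parent]:
            flagged.append((parent, name))
    return flagged

procs = [(1, 0, "systemd"), (100, 1, "bash"), (200, 100, "apache2")]
policy = {"bash": {"ls", "vim", "ssh"}}
suspicious_children(procs, policy)  # -> [("bash", "apache2")]
```

Real EDRs do essentially this, just with richer process metadata and the policy learned from fleet-wide baselines rather than hand-written.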

> or a kernel.

This is a MUCH harder problem, because attackers can always disable any security mechanism once they have kernel code execution. However, assuming they are not too focused...

If the system supports secure boot, it should be enabled, and no extra / unused / out-of-date kernel modules should be loaded.

I know that code injection at the memory level means attackers can inject unsigned code, so in this case you would want to periodically sample the code and ensure that the execution context only has the processor's EIP in known areas where the kernel would map executable code. You could do an additional check for whether those areas are mapped by userspace processes (it might be too late), so you can find the offending attackers.

If the host is virtualized, this becomes easier to do, and mapping and comparing memory from the guest kernel's executable code sections makes it harder for an attacker to work around by disabling the mechanism.

Usually attacker kernel exploits do not persist long-term in kernel space (they abuse kernel space to enable userspace privilege escalation, i.e. make a binary setuid or modify permissions on a /dev/ node), because the longer they are there, the more likely they are to panic the system.

Some of the more advanced attacks I have seen came from people uploading kernel panic images, where I have a 'snapshot' of the running system and can work around the attackers' mitigation techniques.
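The module check above is straightforward to script against the /proc/modules text format; a sketch with a hypothetical allowlist:

```python
def unexpected_modules(proc_modules_text: str, allowlist: set) -> set:
    """Each /proc/modules line starts with the module name; flag any
    loaded module that is not on the expected allowlist."""
    loaded = {line.split()[0]
              for line in proc_modules_text.splitlines() if line.strip()}
    return loaded - allowlist

sample = ("ext4 745472 2 - Live 0xffffffffc0000000\n"
          "rootkit_mod 16384 0 - Live 0xffffffffc0100000\n")
unexpected_modules(sample, {"ext4", "xfs"})  # -> {"rootkit_mod"}
```

Of course, a kernel-resident attacker can unlink themselves from the module list, which is exactly why the virtualized / out-of-band inspection mentioned above is the stronger position.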

sim7c00 · 2 years ago
Linux can't be secured out of the box to do everything that Falcon does. If you use auditd, eBPF, and things like grsecurity patches, you might get into a good state, but it's still not the same thing at all. It might be secure depending on your Linux-fu, but it's not the same as running an EDR, which helps correlate behavior across different systems and looks with much more depth into process behaviors and system interactions.

Also, you don't want operating systems to provide the actual EDR program. They need to provide the facilities for EDR vendors and creators to tap into and do their work properly. You don't want a butcher to rate their own meat; you want a third party to do this. As an example: MS Defender is totally rubbish at defending Windows (the general sentiment among a lot of people in security, hence they run Falcon or Cortex XDR etc.)... and it's by Microsoft. They should focus on building an auditable OS and let auditors do the auditing...

The best thing, IMHO, is a tool like CSF but integrated with network appliances (which CS doesn't do, I think), which is where the strength of such tooling really comes together: correlating network data and behaviors with endpoint behaviors and having a full 'causality chain' of the processes, systems, and network traffic involved in an attack.

And you are right about the balance of security being dramatic. Using crypto is as hard as ever, and allowing external parties to interact with your users is just impossible to do right (let alone keeping users in the right awareness mode). This last part is a problem of the security industry, IMHO: making tools so difficult.

Someday, maybe, rather than EDR tools and firewalls, cybersecurity companies will deliver 'secure business services': easy-to-use, user-friendly services that are secure by default. Maybe in the year 3042.

hsbauauvhabzb · 2 years ago
I disagree that Windows Defender is trash. Its initial introduction mitigated a lot of the malware problems of the early 2000s.

Sure, it may not be the best, but most vendor solutions aren’t either. Case study: crowdstrike.

Kostchei · 2 years ago
Not to defend its "you must accept updates" insane/inane fail, but the suite of CrowdStrike Falcon stuff we have enables the response side of EDR pretty well, and for a mixed Windows, Linux, and Mac shop where we would like the same agent on all systems, it does a better job than most. Not as good as Jamf on Mac, mind you, but better than most of the "Windows ecosystem". And if you run Jamf for policy and detection, but not response, you sort of get it all. So that's why not "just Defender": at 10k+ systems, the anti-malware is just the beginning. What do you do when that fails, and... yeah... anyway, there is more to it.

As to why Windows is not more locked down: that's on the shoulders of the admins. But out of the box, you are right, it is too permissive. Apparently users and management like it that way.

fulafel · 2 years ago
There are 2 possible questions.

(1) - Why is a crutch like "anti-virus" software needed? Essentially trying to reactively cat-and-mouse hostile software that the OS has let execute on the computer.

(2) Why doesn't Windows provide AV?

Question (1) is more interesting - and (2) is addressed by other comments.

I think both MS and their customers have very seldom prioritized security when it costs even small compromises in functionality. We loudly blame MS, but they are the vendor MS customers deserve. While it's not a democracy, there are parallels to the popular sport of blaming politicians for, e.g., not making hard choices on climate change while holding the voters innocent.

simiones · 2 years ago
The cat-and-mouse game is between OS security features and hackers. AV software is not a crutch, it's an extra level of defense. All OS kernels are vulnerable to malware - this is a 100% given at this moment in history. The question is how to mitigate this problem, and AV is one component of that, as are firewalls, network-level intrusion prevention systems, and a whole host of other security software.

Maybe some day someone will write an OS that is "fully secure" and then they'll be able to confidently run a system whose users can confidently click a link in an email, download an .exe from there, and run it, without fear of losing or leaking a single bit of data. That day is definitely not here, and until then, we all do the best we can through education and security appliances.

noinsight · 2 years ago
Actually, arguably, Windows has some impressive security features unseen on any other mainstream OS; they're just not used by default and, realistically, would be hard to enable on general-purpose / non-corporate computers.

For example, by comparison, Linux is in the stone age here.

Do you even need AV if untrusted code can't run in the first place?

* Application whitelisting - with just bare old AppLocker, Windows can be configured to only allow execution of trusted executables, DLLs and scripts by path, hash or software vendor (digital signature). Now, technically AppLocker is not a security feature, i.e. a hard security boundary.

The next level functionality, Windows Defender Application Control (WDAC) [1], however, is. I believe Microsoft was offering up to a $1M bug bounty for WDAC bypasses?

With WDAC kernel mode code integrity enabled, only trusted digitally signed kernel modules can be loaded into the OS kernel [2]. WDAC user mode code integrity provides the aforementioned protection AppLocker provides.

With AppLocker / WDAC enabled, the OS built-in script interpreters (Windows Script Host, PowerShell) either refuse to execute unsigned scripts completely or operate in restricted mode with reduced functionality.

- By comparison, Linux only has fapolicyd which is only supported on Red Hat and can only rely on path-based rules because binaries are not directly signed on Linux. None? of the common interpreted languages (Python, Perl, Ruby, Bash) on Linux support digitally signed scripts and locking down interpretation.

* Authentication material protection - Windows has Credential Guard [3] for protection of authentication material - Kerberos tickets and other material are placed in a separate container protected by hardware virtualization [2] and accessed via RPC so you can't dump process memory to compromise them. Even kernel level compromise is not enough.

- By comparison, Kerberos tickets on Linux reside as files on disk, SSH user & host keys reside as files on disk and loaded into sshd/gpg-agent memory, x.509 keypairs reside as files on disk & process memory etc etc. Wouldn't it be nice to have them protected somehow? To my knowledge, nothing exists for this on Linux.

[1] WDAC - https://learn.microsoft.com/en-us/windows/security/applicati...

[2] VBS - https://learn.microsoft.com/en-us/windows-hardware/design/de...

[3] Credential Guard - https://learn.microsoft.com/en-us/windows/security/identity-...

ykonstant · 2 years ago
>- By comparison, Kerberos tickets on Linux reside as files on disk, SSH user & host keys reside as files on disk and loaded into sshd/gpg-agent memory, x.509 keypairs reside as files on disk & process memory etc etc. Wouldn't it be nice to have them protected somehow? To my knowledge, nothing exists for this on Linux.

I have always wondered about that; there has to be a more secure control method for those secrets.

lmm · 2 years ago
> Can someone explain to me why the protections that Falcon provides, are not provided by the OS itself?

They are. It doesn't, y'know, do anything. It ticks the box for your auditors and occasionally makes your computers stop running, which is par for the course in regulated environments.

bennyelv · 2 years ago
I was aware of this being the case when dealing with consumers, but had assumed that, because B2B contracts are presumed to be between two sophisticated parties, there is little legislative protection that could override the terms of the contract.

My understanding of law is generally UK-based, but I'm not aware of legislation that would supersede a contract term limiting liability when the event that created the liability was one of general diligence/competence in carrying out the contract, rather than relating to health and safety or some other heavily legislated area.

For that reason I'm unconvinced on the article's statement that this isn't just a "French Legal System" thing and that the same kind of judgement might be made in other jurisdictions.

consp · 2 years ago
As the article already states, in most jurisdictions you cannot void gross negligence liability in contracts. It will probably come down to that in those jurisdictions.

If they willfully did not implement staged rollouts, that looks like negligence to me, but IANAL. You kill canaries for a reason.

smcameron · 2 years ago
HeatrayEnjoyer · 2 years ago
Well, for starters, it did impact health and safety domains: hospitals and emergency services were severely degraded. There will absolutely be preventable deaths directly traceable to CrowdStrike.
jltsiren · 2 years ago
I think the general idea is that gross negligence is a breach of contract. Every contract implicitly assumes that both parties are making a good faith effort to honor the terms of the contract. If you are not doing that, you may be in breach of contract, and the liability limitations may no longer apply.
dathinab · 2 years ago
not just in France

most (all?) EU countries have laws which limit how much liability you can opt out of, _no matter what you write into a contract_

while I'm not sure about the exact boundaries per country, I'm pretty sure that at least all hospitals, emergency call services, etc. can sue for a non-negligible part of the damages the outage caused directly

private individuals who were harmed by not getting operations done in time can most likely also sue for the full damages caused to them (though it's hard to assess the damages, and it might need to happen indirectly, by suing the hospital while the hospital sues for more damages)

what you likely will not be able to sue for is the lost opportunity cost, the manpower needed to fix it, etc.

also my guess is that for a lot of cases which are not as severe as harm to people, or as indirect as lost opportunity cost, a huge factor will be the degree of negligence judges believe happened. And here "negligence" isn't limited to the specific change which caused the bug, but also whether they kept their due diligence in choices of tooling, approaches, business processes, etc. to reasonably minimize the risk (e.g. was their way of parsing configs inadequate / did it follow industry best practices (IMHO it doesn't seem so), or was it adequate to mark the driver as required for boot (otherwise Windows would have auto-disabled it and then restarted), etc.)