Buttons840 · a month ago
I say this often, and it's quite an unpopular idea, and I'm not sure why.

Security researchers, white-hat hackers, and even grey-hat hackers should have strong legal protections so long as they report any security vulnerabilities that they find.

The bad guys are allowed to constantly scan and probe for security vulnerabilities, and there is no system to stop them, but if some good guys try to do the same they are charged with serious felony crimes.

Experience has shown we cannot build secure systems. It may be an embarrassing fact, but many, if not all, of our largest companies and organizations are probably completely incapable of building secure systems. I think we try to avoid this fact by not allowing red-team security researchers to be on the lookout.

It's funny how everything has worked out for the benefit of companies and powerful organizations. They say "no, you can't test the security of our systems, we are responsible for our own security, you cannot test our security without our permission, and also, if we ever leak data, we aren't responsible".

So, in the end, these powerful organizations are both responsible for their own system security, and yet they also are not responsible, depending on whichever is more convenient at the time. Again, it's funny how it works out that way.

Are companies responsible for their own security, or is this all a big team effort that we're all involved in? Pick a lane. It does feel like we're all involved when half the nation's personal data is leaked every other week.

And this is literally a matter of national security. Is the nation's power grid secure? Maybe? I don't know, do independent organizations verify this? Can I verify this myself by trying to hack the power grid (in a responsible white-hat way)? No, of course not; I would be committing a felony to even try. Letting powerful organizations hide the security flaws in their systems is the default: they just have to do nothing, and then nobody is allowed to research the security of their systems, nobody is allowed to blow the whistle.

We are literally sacrificing national security for the convenience of companies and so they can avoid embarrassment.

pojzon · a month ago
Did you see Google or Facebook or Microsoft customer databases breached?

The issue is that there are too few repercussions for companies making software in shitty ways.

Each data breach should hurt the company in proportion to its size.

The Equifax breach should have collapsed the company. Fines should have been in the tens of billions of dollars.

Then, under such a banhammer, software would be built correctly, security would be cared about, internal audits would be made (real ones), and people would care.

Currently, as things stand, there is ZERO reason to care about security.

lr1970 · a month ago
> The issue is that there are too few repercussions for companies making software in shitty ways.

The penalty should be massive enough to effect changes in the business model itself. If you do not store raw data, it cannot be exfiltrated.
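
A rough sketch of what that looks like in Python (the KMS-held key and all names here are illustrative assumptions, not anything from the thread): store a keyed hash of the sensitive field instead of the raw value, so a dump of the database alone reveals nothing without the separately held key.

  import hashlib
  import hmac

  # In practice the key lives in an HSM/KMS, never next to the database.
  MAC_KEY = b"assume-this-is-fetched-from-a-kms"

  def tokenize(ssn: str) -> str:
      # Only this token is stored; the raw SSN is discarded after intake.
      return hmac.new(MAC_KEY, ssn.encode(), hashlib.sha256).hexdigest()

  def matches(stored_token: str, claimed_ssn: str) -> bool:
      # Equality checks still work without ever persisting the raw value.
      return hmac.compare_digest(stored_token, tokenize(claimed_ssn))

  record = {"name": "Jane Doe", "ssn_token": tokenize("123-45-6789")}
  assert matches(record["ssn_token"], "123-45-6789")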

slivanes · a month ago
I’m all for companies not ignoring their responsibility for data management, but I’m concerned that type of punishment could be used as a weapon against competitors. I can imagine that certain classes of useful companies would just not be able to exist. It's a tricky balance: making companies actually care without crippling insurance.
arvinsim · a month ago
I agree. When it becomes penalized by law, project owners/managers won't be tempted to take shortcuts and will have an incentive to give developers more time to focus on security.
Xx_crazy420_xX · a month ago
There is some incentive to leave 0-days in customer software, as it creates a commodity to be sold on gray 0-day markets. On the other hand, securing your own garden brings less value than covering up and denying that your 'secure' cloud platform was whacked.
conception · a month ago
Microsoft lost their root keys to Azure. ¯\_(ツ)_/¯
reactordev · a month ago
We need both: allowance under the law to do cybersecurity research, and engineers not writing shitty software, setting lax IAM permissions, exposing private keys, or messing up in the myriad other ways they do.
bobmcnamara · a month ago
> Did you see Google or Facebook or Microsoft customer databases breached?

Are you being facetious? Yes, yes, yes, they have.

Den_VR · a month ago
I’m curious. What do you think about legalizing “hack-back” ?
GlacierFox · a month ago
Didn't Sharepoint get hacked the other day? :S
tempnew · a month ago
Microsoft just compromised the National Nuclear Security Administration last week.

Facebook was breached, what, last month?

Google is an ad company. They can’t sell data that’s breached. They basically do email, and with phishing at epidemic levels, they’ve failed the consumer even at that simple task.

All are too big to fail so there is only Congress to blame. While people like Ro Khanna focus their congressional resources on the Epstein intrigue, citizens are having their savings stolen by Indian scammers, and there is clearly no interest and nothing on the horizon to change that.

atmosx · a month ago
If companies faced real consequences, like substantial fines from a regulatory body with the authority to assess damage and impose long-term penalties, their stock would take a hit. That alone would compel them to take security seriously. Unfortunately, most still don’t. More often than not, they walk away with a slap on the wrist. If that.
no_wizard · a month ago
I’ve still got time left on identity theft protection I’ve been given for free due to breaches
kube-system · a month ago
Not all security research is the same. There’s a lot of room for nuance in this discussion.

I think there’s a lot of things that many people would agree should be protected. For instance, people who report vulnerabilities they just happen to stumble upon.

But on the other end of the spectrum, there are a lot of pen testing activities that are pretty likely to be disruptive. And some of them would be disruptive, even on otherwise secure systems, if we gave the entire world carte blanche to perform these activities.

There are certainly some realms of security where technology can solve everything, like cryptographic algorithms. But at the interface of technology and society, security still relies heavily on the rule of law and living in a high-trust society.

saurik · a month ago
> ...these powerful organizations are both responsible for _____, and yet they also are not responsible, depending on whichever is more convenient at the time...

This pattern comes up constantly, and it is extremely demoralizing.

pengaru · a month ago

> I say this often, and it's quite an unpopular idea, and I'm not sure why.
>
> Security researchers, white-hat hackers, and even grey-hat hackers should have strong legal protections so long as they report any security vulnerabilities that they find.
>
> The bad guys are allowed to constantly scan and probe for security vulnerabilities, and there is no system to stop them, but if some good guys try to do the same they are charged with serious felony crimes.

So let me get this straight: you want to give unsuccessful bad actors an escape hatch of claiming white-hat intentions when they get caught probing systems?

doubled112 · a month ago
What about a white hat hacker license? Not sure what the criteria would be, but could it be done?

Then there would be some sort of evidence the guy was a "good guy". Like when a cop shoots your dog and suffers no consequences.

worthless-trash · a month ago
This is a horrifically bad take. I know you probably see it this way because you can't imagine how easy some of these mistakes are, but I can assure you that there are MANY TIMES I've accidentally found issues with systems.

I do work in security. The average person would write this off as "oh, just shitty software" and do nothing about it, but when you know what the error means and how the software works, errors are easy to turn into exploits.

I once had a bank account where data validation was fucked up because I had '; in the 120-character transfer description. Immediately abusable SQL injection.
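
For anyone wondering why a stray '; is "immediately abusable": an illustrative Python/sqlite3 sketch (not the bank's actual stack) of the broken pattern versus the parameterized fix.

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE transfers (id INTEGER PRIMARY KEY, description TEXT)")

  description = "rent'; DROP TABLE transfers; --"  # hostile free-text input

  # Vulnerable: user input spliced into the SQL text itself.
  query = f"INSERT INTO transfers (description) VALUES ('{description}')"
  # conn.executescript(query)  # would run the injected DROP TABLE as well

  # Safe: the value travels out-of-band as a bound parameter.
  conn.execute("INSERT INTO transfers (description) VALUES (?)", (description,))
  print(conn.execute("SELECT description FROM transfers").fetchone())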

After my first time reporting this OBVIOUS flaw to a bank, along with how it could be abused for both database modification and XSS injection, I had to visit local law enforcement with lawyers because they believed 'hacking' had taken place.

I now report every vuln behind fake emails, on fake systems in non-extradition countries, accessed via proxy over VPN. Even then I have the legal system attempting to find my real name and location and threatening me with legal action.

Bad actors come from non-extradition countries which wouldn't even TALK to you about the problem. You'd just have to accept that you got hacked, and that is the end of the situation.

It's people like yourself, who can't see past the end of their nose, failing to realise where the real threats are. You don't have "it straight".

Buttons840 · a month ago
If we did give bad actors an escape hatch, what harm would it do in a world already filled with untouchable bad actors?
gettingoverit · a month ago
This probably wouldn't be a problem if the Web were somewhat anonymous, so that merely stumbling upon a security issue, or using a website in a regular way, would not constitute a crime for lack of a person to pin that crime on.

Also, if the things stored in those databases weren't plain strings but tokens (in the asymmetric cryptography sense), so that only the service owns them, and in case of a leak the user can use one to get a payout from the service, this problem would be solved.
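
One way to read the token idea, sketched with Python's cryptography package (names and the exact scheme are illustrative assumptions; the payout side is omitted): the service signs a per-user token with its own key, so a token surfacing in a dump is verifiable proof of which service leaked it.

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # The service's signing key; the public half is published.
  service_key = Ed25519PrivateKey.generate()
  service_pub = service_key.public_key()

  def issue_token(user_id: str) -> bytes:
      # Stored in the database in place of (or alongside) raw user data.
      return service_key.sign(user_id.encode())

  def proves_leak(public_key, user_id: str, leaked_token: bytes) -> bool:
      # Anyone holding a dump can check which service the token came from.
      try:
          public_key.verify(leaked_token, user_id.encode())
          return True
      except InvalidSignature:
          return False

  token = issue_token("user-42")
  assert proves_leak(service_pub, "user-42", token)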

But no business is interested in provably making their users secure; it would be self-sabotage. It's always just security theater.

jama211 · a month ago
It’s an interesting point, but doesn’t that open up an easy defense? Black hat hackers could hack anything they want in advance, and as long as they say they were just “looking for an opening” they’d be legally safe under this scenario. They could plausibly claim they just never found a vulnerability to report, while noting down anything they notice and attacking whoever and whenever they feel like it. Or they could pretend to be white hat their whole career but secretly sell the methods to someone who will use them. Under the current system, they’re discouraged from doing that.
thatguy0900 · a month ago
I mean, the problem is people will break things. How do you responsibly hack your local electric grid? What if you accidentally mess with something you don't understand, and knock a neighborhood out? How do we prove you just responsibly hacked into a system full of private information then didn't actually look at a bunch of it?
Buttons840 · a month ago
If a security researcher knocks out the power grid of a town, we should consider ourselves lucky that the vulnerability was found before an opposing nation used it to knock out the power of many towns.
sunrunner · a month ago
> How do we prove you just responsibly hacked into a system full of private information then didn't actually look at a bunch of it?

Pinky promise?

sublinear · a month ago
If we're strictly talking about software there should be some way to test in a staging environment. Production software that cannot be run this way should be made illegal.
AbstractH24 · a month ago
At what point do we need to treat this data like one’s health data?

The risks associated with medical malpractice certainly slows the pace of innovation in healthcare, but maybe that’s ok.

msgodel · a month ago
The internet is really a lot like the ocean; things left unmaintained on it are swallowed by waves and sea life.

We need something like the salvage law.

Ylpertnodi · a month ago
> I say this often, and it's quite an unpopular idea, and I'm not sure why.
>
> Etc...etc...etc....

Me, neither, if that helps.

bongodongobob · a month ago
No. You cannot come to my home or business while I'm away and try to break in to protect me unless I ask, full stop. Same goes for my servers and network. It's my responsibility, not anyone else's. We have laws in place already for burglars and hackers. Just because they continue to do it doesn't give anyone else the right to do it for the children or whatever reasoning you come up with.
krior · a month ago
But you would like to be notified by your neighbours if you had left your window open while away, right? Or are you going to sue them for attempted break-in?

The issue is not that it's illegal to put on a white hat, break into the user database, and steal 125 million accounts as proof of a security issue.

The problem is people getting sued for saying "Hey, I stumbled upon the fact that you can log into any account by appending the account number to the URL of your website."

There certainly is a line separating ethical hacking (if you can even call it hacking in some cases) from prodding and probing at random targets in the name of mischief and chaos.
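
To make the append-the-account-number example concrete, an illustrative Flask sketch (hypothetical routes and names) of the broken pattern versus the ownership check:

  from flask import Flask, abort, session

  app = Flask(__name__)
  app.secret_key = "assume-a-real-secret-in-production"

  ACCOUNTS = {"1001": {"owner": "alice", "balance": 420}}

  @app.route("/broken/account/<account_id>")
  def broken(account_id):
      # IDOR: trusts the identifier in the URL; any visitor can read any account.
      return ACCOUNTS.get(account_id, {})

  @app.route("/fixed/account/<account_id>")
  def fixed(account_id):
      # Only serve the record if it belongs to the logged-in session user.
      account = ACCOUNTS.get(account_id)
      if account is None or account["owner"] != session.get("user"):
          abort(404)
      return account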

tjwebbnorfolk · a month ago
Adding "full stop" doesn't strengthen your case, it just makes it sound like you are boiling the world down to be simple enough for your case to make any sense.

There are a lot of shades of grey that you are ignoring.

Buttons840 · a month ago
You claim sole responsibility. Do you accept sole legal and financial liability?

I think allowing red-teams to run wild is a better solution, but I can agree with other solutions too.

If those who claim sole responsibility want to be responsible, I'm okay with that too. I really just want us to pick a lane.

So again, are you willing to accept sole legal and financial liability?

cmiles74 · a month ago
It seems like passing legislation that imposes harsher penalties for data breaches is the way to go.
fancyswimtime · a month ago
username checks out
valianteffort · a month ago
> Experience has shown we cannot build secure systems

It's an unpopular idea because it's bullshit. Building secure systems is trivial and within the skill level of a junior engineer. Most of these "hacks" are not elaborate attacks utilizing esoteric knowledge to discover new vectors. They are the same exploit chains targeting bad programming practices, out-of-date libraries, etc.

Lousy code monkeys and mediocre programmers are the ones introducing vulnerabilities. We all know who they are. We all have to deal with them thanks to some brilliant middle manager figuring out how to cut costs for the org.

9dev · a month ago
That sounds like a perspective from deep in the trenches. A software system has SO many parts, spanning your code, other people’s code, open source software, hardware appliances, SaaS tools, office software, email servers, and also humans reachable via social engineering. If someone makes a project manager click a link leading to a fake Jira login, and the attacker uses the credentials to issue a Jira access token, and uses that to impersonate the manager to create an innocuous ticket, and a low-tier developer introduces a subtle change in functionality that opens up a hole… then you have an insecure system.

This story spans a lot of different concerns, only a few of which are related to coding skills. Building secure software means defending in breadth, always, not fucking up once, against an armada of bots and creative hackers that only need to get lucky once.

darzu · a month ago
Take a broader view of what "building secure systems" means. It's not just about the code being written by ICs but about the business incentives, tech choices of leadership, the individual ways execs are rewarded, legacy realities, interactions with other companies, and a million other things. Our institutions are a complex result of all of these forces. Taken as a whole, and looking at the empirical evidence of companies and agencies frequently leaking data, the conclusion "we cannot build secure systems" is well founded.
KaiserPro · a month ago
> Building secure systems is trivial

I'd suggest you try and build a secure system for > 150k employees before you make sweeping statements like that.

tdrz · a month ago
Sometimes it is the management that doesn't understand anything. From their perspective, security doesn't improve the bottom line.

I worked for an SME that dealt with some sensitive customer data. I mentioned to the CEO that we should invest some time in improving our security. I got back that "what's the big deal, if anyone wants to look they can just look..."

plst · a month ago
Looking at the number of already-discovered vulnerabilities in popular applications, I would say it's actually impossible to build secure systems right now. Even companies that are trying are failing. IMO it's still way too easy to introduce a vulnerability and then miss it in both review and pentests. We need big changes in all parts of the software building and maintaining process. Probably no one will like that, because we are still in the "move fast and break things" age of software development.
sublinear · a month ago
This is true, but what's even more interesting is all the things that had to fail long before you had a shop full of monkeys.
bloqs · a month ago
I used to agree with you, but I feel it's naive. Incompetence is always guaranteed.
Buttons840 · a month ago
You're saying that creating secure systems is easy.

I'm not sure which is worse:

1) Creating secure systems is hard, and we often fail at it.

2) Creating secure systems is easy, and we often fail at it.

I don't know which is worse, but I know for sure we often fail at it.

sugarpimpdorsey · a month ago
Do you think we should have strong legal protections for people who go around your neighborhood trying unlocked car doors and opening front doors (with a backpack full of burglary tools) and when confronted claim they're uh doing it for your security?
xboxnolifes · a month ago
The great thing about analogies is that they're just analogies. We can have different laws for different things. Cybersecurity vs physical security.
Buttons840 · a month ago
That's a failed analogy I won't entertain.

You're trying to say companies should have sole responsibility over their systems. I say, let them have sole legal and financial liability as well then.

tempnew · a month ago
We do? People can go into any neighborhood they want. They can’t break laws, but the law allows them to walk around and look for open windows, knock on front doors, take photos, scan WiFi bssid, note cars and license plate info, etc…

The crime here is the tech. The companies aren’t to blame. Programmers and tech companies are. If there was no internet or “tech industry” we’d all be so much better off it’s painful to even contemplate.

Sytten · a month ago
That comment came straight from 2001. Seriously, the world has moved on from hackers == bad, but the legislation has not, and it is time it changed.
slashdev · a month ago
All these endless data breaches could be reduced if we fixed the incentives, but that's difficult. We could never stop them, because humans make mistakes, and big groups of humans make lots of mistakes. That doesn't mean we shouldn't try.

It seems to me a parallel path that should be pursued is to make the impact less damaging. Don't assume that things like birth dates, names, addresses, phone numbers, emails, SSNs, etc are private. Shut down the avenues that people use to "steal identities".

I hate the term "stealing identity", because it implies the victim made some mistake that allowed it to happen. When what really happened is that the company was too lazy to verify that the person they're doing business with is actually who they say they are. The onus and liability should be on the company involved. If a bank gives a loan to someone else under my name, it should be their problem, not mine. The problem would go away practically overnight if that were changed. Companies would be strict about verifying people, because otherwise they'd lose money. Incentives align.

Identity theft is not the only issue with data leaks/breaches, but it seems one of the more tractable.

DicIfTEx · a month ago
> I hate the term stealing identity, because it implies the victim made some mistake to allow it to happen. When what really happened is the company was lazy to verify that the person they're doing business with is actually who they say they are. The onus and liability should be on the company involved.

You may enjoy this sketch: https://www.youtube.com/watch?v=CS9ptA3Ya9E

rendaw · a month ago
Okay, I'm inclined to agree here, but what I don't see addressed is: if you set up an account with a username and password, then write it down on a slip of paper, and then drop that in a cafe, and someone else logs in as you and drains your account, is the bank liable for that too? Are all services with logins? Because that looks identical to identity theft in a lot of ways.

If bank-mandated security controls are breached, or they don't provide adequate controls, I feel like that's on them. But if they've done their part and you've been irresponsible, then that's on you. But where's the dividing line? And saying the banks have more responsibility can also justify more biometrics and surveillance.

Is the differentiating factor here that the bank (or whatever) is allowing access with insecure credentials (name, date of birth, phone number) instead of the primary credentials?

MichaelZuo · a month ago
It is really strange that this is not already the case.
slashdev · a month ago
That was hilarious, thanks for sharing!
JumpCrisscross · a month ago
> these endless data breaches could be reduced if we fixed the incentives, but that's difficult

It’s honestly unclear if the damage from data breaches exceeds the cost of eliminating it. The only case where I see that being clear is in respect of national security.

ponector · a month ago
>> if the damage from data breaches exceeds the cost of eliminating it.

Definitely not. The damage is done to customers, but the costs to eliminate it are on the company. Why should a company invest more if there are no meaningful consequences for them?

AlotOfReading · a month ago
The more important point is that the people who would have to pay to avoid data breaches (companies) are not the ones who suffer when they happen (the public). It's the same problem as industrial pollution.
afarah1 · a month ago
The solution already exists: MFA and IdP federation.

One factor you know (data), and the other you possess, or you are (biometrics).

The IdP issues both factors; identification is federated to them.

This kind of happens when you are required to supply a driver's license, which you technically possess, and which is a federated ID if checked against a government system, but it can be easily forged with knowledge factors alone.

Unfortunately, banks and governments here use facial recognition for the second factor, which has big privacy concerns, and the tendency, I think, will be toward the federal government as sole IdP. Non-biometric factors might have practical difficulties at scale, but fingerprint would be better than facial. It's already taken in most countries and could be easily federated. Not perfect, but better than the alternatives IMO.
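
For the possession factor, something like TOTP sketches the mechanics (illustrative Python using the pyotp library; the IdP federation part is a separate layer on top):

  import pyotp

  # Enrollment: generate a shared secret, provision it into the user's
  # authenticator app (usually via QR code), and store it server-side.
  secret = pyotp.random_base32()
  totp = pyotp.TOTP(secret)
  print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleBank"))

  # Login: check the knowledge factor (password) first, then the possession
  # factor -- the current 30-second code from the user's device.
  submitted = totp.now()  # stand-in for the code the user types in
  assert totp.verify(submitted, valid_window=1)  # allow one step of clock drift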

SoftTalker · a month ago
I'm unconvinced that biometrics are a good approach. You can't change them if a compromise is discovered.
eptcyka · a month ago
So what? My data will still get sold online and then agencies/businesses will take advantage of it to do differential pricing. 2fa does not solve the problem of data leaks.
NoPicklez · a month ago
I don't see the term "stealing identity" as something that implies I have done something wrong to allow it. If you have something stolen from you, it doesn't mean you did something wrong to allow it. If someone broke into a bank vault and stole your money, it wouldn't be considered your fault.

The challenge in cyber security is that the person potentially stealing your identity lives on the other side of the world and that's why the focus is on the end user to be as secure as they can. But if you have something stolen from you, you are still the victim.

AuryGlenz · a month ago
Yeah, it doesn’t imply you did something wrong. However, the “attack” is against the institution they get a loan from, not you. You didn’t give them money under false pretenses. You had absolutely nothing to do with it.
Phui3ferubus · a month ago
> All these endless data breaches could be reduced if we fixed the incentives, but that's difficult.

The EU fixed the incentives with GDPR and DORA; that was the easy part. In theory, a company that doesn't follow "secure by design" will end up bankrupted by (revenue-dependent) fines. In practice, the enforcement is lackluster, courts are lenient, and international cases take ages, even when both countries are in the EU.

giantfrog · a month ago
This will never, ever, ever stop happening until executives start going bankrupt and/or to jail for negligence. Even then it won’t stop, but it would at least decrease in frequency and severity.
SoftTalker · a month ago
Unless there is willful negligence (very difficult to prove) or malicious behavior, I don't think putting people in jail will help. Most of this stuff happens by accident, not by intent.

Financial consequences for the company might be a deterrent, but then of course you're dealing with hundreds or thousands of people potentially unemployed because the company was bankrupted by something as simple as a mistake in a firewall somewhere or an employee falling victim to a social engineering trick.

I think the path is along the lines of admitting that cloud, SaaS and other internet-connected information systems cannot be made safe, and dramatically limiting their use.

Or, admitting that a lot of this information should be of no consequence if it is exposed. Imagine a world where knowing my name, SSN, DOB, address, mother's maiden name, and whatever else didn't mean anything.

DanHulton · a month ago
Imagine using this defence with regard to airline crashes. "The crashes happen by accident, not by intent" would be a clearly ludicrous defence, as it ought to be here as well.

If we were serious about preventing these kinds of things from happening, we could.

fn-mote · a month ago
> Most of this stuff happens by accident not by intent.

Consider the intent of not hiring enough security staff and supporting them appropriately. It looks a lot like an accident. You could even say it causes accidents.

Ekaros · a month ago
Remove limited liability. Have the stockholders bear the full economic cost to the victims, without any limit. If they want to profit, they take on the full risk with all of their property.
spacebanana7 · a month ago
This can't be done in the modern financial system; I'd recommend holding senior execs and the members of the board responsible instead.

Shareholders may well be based overseas, so it'd be very difficult to actually enforce the fines. They might also use overseas limited-liability investment corporations, so fines would just bankrupt those companies and the actual shareholders would never fall below zero.

There's also the political issues that'd come from potentially giving fines to millions of people because their pension funds invested in a company that had a data breach.

lynx97 · a month ago
Haha, I still vividly remember how they tried to make me believe that GDPR was going to be a big hammer because it would finally make executives liable for breaches. I silently laughed back then. I am still laughing.

I should probably clarify: there were two types of people who claimed that back then. Those trying to gaslight us, and those naive enough to actually believe the gaslighting. Severe negligence has to be proven, and that is not easy, and there is a lot of wiggle room in court. Executives being liable for what they did during their term is just not coming, sorry kids.

time4tea · a month ago
Mandatory £1000 fine per record lost. Would be company-terminal for companies with millions of customers - and that's right. Right now it's just cheaper to not care, then send a trite apology email when all the data inevitably gets stolen.

The status quo, where nobody gives a crap and the regulators literally do nothing, cannot continue. In the UK, the ICO is as effective as Ofwat (the regulator that was just killed for being pointlessly and dangerously useless).

(Edit: fix autocorrect)

grapescheesee · a month ago
A mandatory amount paid directly to the customer of record, instead of fractions of a cent on the dollar in year-long class action settlements, might help the disenfranchised 'customers'.
sunrunner · a month ago
> Would be company-terminal

What happens to customers of the affected company in this case? Does this not now pass on a second problem to the people actually affected?

unsupp0rted · a month ago
Would be national-economy-terminal too
amai · a month ago
Actually, Allianz offers insurance against cyberattacks like this: https://www.allianz.de/aktuell/storys/cyberschutz-knoten-im-...
7373737373 · a month ago
Insurance is part of the problem - companies prefer to insure themselves rather than employ and support the research and development of secure software. As long as this is the more economical thing to do, nothing will change.
rswail · a month ago
It is in the interest of Allianz and other insurers to employ and support R&D of secure software though.

Deleted Comment

ok123456 · a month ago
Good to see the contractually required endpoint protection was working.
ofjcihen · a month ago
That’s partially due to SF devs not knowing enough about the product, but also due to Salesforce treating security as an afterthought. For a poorly configured implementation, it takes 2 web requests as an unauthenticated user to learn all of the data you can pull down, and then to pull it. Don’t even get me started on the complete lack of monitoring. I basically had to design an entire security monitoring setup outside of Salesforce using their (absolutely awful) logs to get anything close to usable.

Edit: here’s a guide someone wrote: https://www.varonis.com/blog/misconfigured-salesforce-experi... Seriously, you can automate this and then throw it at the end of recon to find SF sites. I’ve done it.
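
Roughly what that automation looks like (illustrative Python; the endpoint paths follow the pattern described in the guide above and should be treated as assumptions, and obviously only probe systems you're authorized to test):

  import requests

  # Aura endpoint paths that misconfigured Experience Cloud sites have been
  # known to expose unauthenticated (pattern per the guide linked above).
  AURA_PATHS = ["/aura", "/s/aura", "/sfsites/aura", "/s/sfsites/aura"]

  def find_exposed_aura(base_url: str) -> list[str]:
      exposed = []
      for path in AURA_PATHS:
          try:
              # An unauthenticated POST: a live Aura endpoint tends to answer
              # with a framework error rather than a plain 404.
              resp = requests.post(base_url + path, data={"message": "{}"}, timeout=5)
          except requests.RequestException:
              continue
          if resp.status_code != 404 and "aura" in resp.text.lower():
              exposed.append(path)
      return exposed

  # find_exposed_aura("https://example.my.site.com")  # authorized targets only
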
jmkni · a month ago
> “On July 16, 2025, a malicious threat actor gained access to a third-party, cloud-based CRM system used by Allianz Life,” referring to a customer relationship management (CRM) database containing information on its customers.

So who the hell was the "third-party, cloud-based CRM system"?

ofjcihen · a month ago
Another article mentioned Salesforce, which has a knack for being poorly secured on the data owner's side.

I’ve got another reply here with details but suffice it to say misconfigured Salesforce tenants are all over the internet.

eclipticplane · a month ago
Even if SFDC is configured correctly, any sufficiently large or old instance of SFDC may have dozens of other systems plugged into it, many of which get default access to everything because SFDC security and permission configuration is so byzantine.
rr808 · a month ago
Google published this about Salesforce a few weeks back. https://cloud.google.com/blog/topics/threat-intelligence/voi...
milesskorpen · a month ago
Does it matter? Wasn't a technical breach of their systems, but instead social engineering.
poemxo · a month ago
If a cloud-based system doesn't support technologies that deter social engineering, it's still a problem. Some login portals to check your credit history don't even support 2FA.

So I think it matters, I think access systems should be designed with a wider set of human behaviors in mind, and there should be technical hurdles to leaking a majority of customers' personal information.

politelemon · a month ago
It matters. That's often a generic phrasing used to make it look like it was a partner's fault. But very often it is simply a platform that was managed and configured by the company itself, which would mean more than just social engineering. Take a look at the language used in other breaches and it's very similarly veiled.
MontagFTB · a month ago
Depending on the CRM, is this not a HIPAA violation?
marcusb · a month ago
Why would it be? Is Allianz Life a covered entity? If so, why would it depend on the specific CRM being used?
nothercastle · a month ago
The punishment for poor data security is so low that it's not worth paying for at most companies. And of course the government makes it nearly impossible to change your SSN yet still uses it as a means of verification, so almost everyone is exposed by now.