1) If you make legal disclosure too hard, the only way you will find out is via criminals.
2) If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper. The difference is that knowledge of a bad foundation doesn’t inherently make a building more likely to collapse, while knowledge of a cyber vulnerability is an inherent risk.
3) Random audits by passers-by are way too haphazard. If a website can require my real PII, I should be able to require that the PII is secure. I’m not sure what the full list of industries would be, but insurance companies should be categorically required to have a cyber audit, and those same laws should protect white hats from lawyers and allow class actions from all users. That would change the incentives so that the most basic vulnerabilities are gone, and software engineers become more economical than lawyers.
In other industries there are professional engineers: people who have legal accountability. I wonder if the CS world will move that way, especially with AI, since those engineers are the ones who sign things off.
For people unfamiliar, most engineers aren't professional engineers. There are still legal standards for your average engineer, and they are legally obligated to push back against management when they think there's danger or an ethics violation, but that's a high bar and very few ever get in legal trouble, only the most egregious cases. Professional engineers are the ones who check all the plans and do the inspections. They're more like a supervisor, someone who can look at the whole picture. They get paid a lot more for their work, but they're also essential to making sure things are safe. They also end up having a lot of power/authority, though at the cost of liability. Think of how in the military a senior doctor can overrule all others (I'm sure you've seen this in a movie). Your average military doctor or nurse can't do that, but the senior ones can, though it's rare and very circumstantial.
You'd be surprised how many SE's would love for this to happen. The biggest reason, as you said, being able to push back.
Having worked in low-level embedded systems that could be considered "system critical", it's a horrible feeling knowing what's in that code and having no actual recourse other than quitting (which I have done on a few occasions because I did not want to be tied to that disaster waiting to happen).
I actually started a legal framework and got some basic bills together (mostly wording) and presented this to many of my colleagues; all agreed it was needed and loved it, and a few lawyers said the bill/framework was sound... it even had some carve-outs for "mom-n-pops" and some other "obvious" things (like allowing for a transition into it).
Why didn't I push it through? 2 reasons:
1.) I'd likely be blackballed (if not outright killed) because "the powers that be" (e.g. large corps in software) would absolutely -hate- this... having actual accountability AND having to pay higher wages.
2.) Doing what I wanted would require federal intervention, and the climate has not been ripe for new regulations, let alone governing bodies, in well over a decade.
Hell, I even tried to get my PE in Software, but right as I was going to start the process, the PE for Software was removed from my state (and isn't likely to ever come back).
I 100% agree we should have even a PE for Software, but it's not likely to happen any time soon because Software without accountability and regulation makes WAY too much money ... :(
I don’t think the current cost structure of software development would support a professional engineer signing their name on releases, or the skill level required of everyone else to make that meaningful…
We’d actually have to respect software development as an important task and not a cost to be minimized and outsourced.
In many countries you are only allowed to call yourself a Software Engineer if you actually have a professional title.
It is countries like US where anyone can call themselves whatever they feel like that have devalued our profession.
I have been on the liability side ever since. People don't keep broken cars unless they cannot afford anything else; software is nothing special, other than the lack of accountability.
We check the output of engineers; that's what infra audits and certs are for. We basically tell industry: if you want to waste your money on poor engineers whose output doesn't certify, go ahead.
You could do that with civil engineering. Anyone gets to design bridges. Bridge is done, we inspect: sorry, X isn't redundant, your engineering is bad, tear it down.
A lot of responses below talking about what a 'certified' or 'chartered' engineer should be able to do.
I thought it would be noteworthy to talk about another industry, accountancy. This is how it works in the UK, but it is similar in other countries. They are called 'Chartered Accountants' here, because their institute has a Royal Charter saying they are the good guys.
To become a Chartered Accountant has no prerequisites. You 'just' have to complete the qualification of the institute you want to join. There are stages to the exams that prior qualifications may gain you exemptions from. You also have to log practical experience proving you are working as an accountant with adequate supervision. It takes about 2-3 years to get the qualification for someone well supported by their employer and with sufficient free time. Interestingly many Accountants are not graduates, and instead took technician level qualifications first, often the Association of Accounting Technicians (AAT). The accounting graduates I have interviewed wasted 3 years of their lives...
There are several institutes that specialise in different areas. Some specialise in audit. One specialises in Management Accounting (being an accountant at a company really). The Management accountants one specifically prohibits you from doing audit without taking another conversion course. All the institutes have CPD requirements (and check) and all prohibit you from working in areas that you are not competent, but provide routes to competency.
There are standards to follow: Generally Accepted Accounting Practice (GAAP), UK Financial Reporting Standards (FRS) and the international equivalent (IFRS). These cover how Financial Statements are prepared. There are separate standard-setting bodies for these. There is also a set of standards that covers how an audit must be done. Then there is tax law. You are expected to know them for any area you are working in. All of these are legally binding on various types of corporation. See how that switches things around? Accountants are now there to help the company navigate the legal codes. The directors sign the accounts and are liable for misstatements; that encourages them to have a director who is an accountant... an audit committee, etc.
How does that translate to software?
There are lots of standards (NIST, GDPR, PCI), some of which are legally or contractually binding. But how do I as a business owner know that a software engineer is competent to follow them? Maybe I am a diving company that wants a website. How do I know this person or company is competent to build it? It requires software engineers with specific qualifications that say they can do it, and software engineers willing to say, 'I'm sorry, I am not able to work in this field unless I first study it'.
Regarding your 2), in other industries and engineering professions, the architect (or civil engineer, or electrical engineer) who signed off carries insurance, and often is licensed by the state.
I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet, but I often wonder if we should require some sort of certification and insurance for large business sites that handle personal info or money. There'd be a Certified Professional Software Engineer who has to sign off on it, and thus maybe has the clout to push back on being forced to implement whatever dumb idea an MBA has to drive engagement or short-term sales.
Maybe. It's not like it's worked very well lately for Boeing or Volkswagen.
> I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet
FWIW there is no barrier like that for physical engineers. Even though, as you note, professional engineers exist, most engineers aren't professional engineers, and that's why the barrier doesn't exist. We could probably follow a similar framing. I mean, it's already more common for licensing to be attached to even random software, and that's not true for the physical engineers' equivalents.
Oh there have been many cases where software engineers who are not professional engineers with the engineering mafia designation get sidelined by authorities for lacking standing. We absolutely should get rid of the engineering mafias and unions.
It's kinda wild that you don't need to be a professional engineer to store PII. The GDPR and other frameworks for PII usually do have a minimum size (in # of users) before they apply, which would help hobbyists. The same could apply for the licensure requirement.
But also maybe hobbyists don't have any business storing PII at scale just like they have no business building public bridges or commercial aircraft.
There are jurisdictions (and cultures) where truth is not an absolute defence against defamation. In other words, it's one thing to disclose the issue to the authorities, it's another to go to the press and trumpet it on the internet. The nail that sticks out gets hammered down.
Given that this is Malta in particular, the author probably wants to avoid going there for a bit. It's a country full of organized crime and corruption where people like him would end up with convenient accidents.
> it's one thing to disclose the issue to the authorities, it's another to go to the press and trumpet it on the internet.
At least in the US there is a path of escalation. Usually, if you have first contacted those who have authority over you, then you're fine. There are exceptions in both directions, where you aren't fine or where you can skip that step. Government work is different; for example, Snowden probably doesn't get whistleblower protection because he didn't first leak to Congress. It's arguable though, but also IANAL.
> it's one thing to disclose the issue to the authorities
That's not how any of this works. You are basically arguing for the right to hide criminal actions. Filing with the CSIRT is the only legal action for the white hat to take. This is explicitly by design. Complaining about it is like complaining the police arrested you for a crime you committed.
> If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper
To match this metaphor to TFA, the architect has to break in to someone else's apartment to prove there's a flaw. IANAL but I'm not positive that "I'm an architect and I noticed a crack in my apartment, so I immediately broke in to the apartments of three neighbours to see if they also had cracks" would be much of a defence against a trespass/B&E charge.
Another missing link here is the relationship between stock price and the corporation's security vulnerability history. Somehow, I don't know how, stock prices should reflect the corporation's social responsibility posture, part of which is obviously information security.
> companies should be categorically required to have a cyber audit
I work with a firm that has an annual pen test as part of its SOC2/GDPR/HIPAA audit, and it's basically an exercise in checking boxes. The pen test firm runs a standard TLS test suite, and a standard web vulnerability test suite, and then they click buttons for a while...
The pen test has never found any meaningful vulnerabilities, and several times drive-by white hats have found issues immediately after the pen test concluded.
I use a different email address for every service. About 15 years ago, I began getting spam at my diversalertnetwork email address. I emailed DAN to tell them they'd been breached. They responded with an email telling me how to change my password.
I guess I should feel lucky they didn't try to have me criminally prosecuted.
During a property search for rentals in the UK I created a throwaway alias email (to my regular account) as I did not really trust them with my data.
This was not for those requiring me to provide credit check papers and the names of my children (!! yes, you read that right, names of children!) at the very first contact in their web form, just to start a conversation about whether a viewing was even possible, and then perhaps schedule one. No. Those were avoided completely (despite the desperate property market for renters, I am not that desperate: eventually we left the UK in large part because of property troubles). Two of those were reported to the relevant authority (one case got confirmed after several months but is still pending after more than a year; the other sank, apparently. My trust in UK institutions is not elevated). There were more than two requiring a full set of data on the prospective viewing candidate.
The throwaway email was for the ""reliable"" ones. The trusted names. Or those without over-reaching data collection (one big name, Cheffin, one of the reported ones, had the over-reaching habit).
Having a throwaway alias proved beneficial. From zero spam, suddenly spam started to arrive at about 4/week. It kept coming until the alias got disabled. I cannot tell which was the culprit, only a shortlist based on timing. But that never-elsewhere-used email somehow got to fraudster elements from the few UK property agent organizations I contacted. In a very short time (a few weeks).
I've had multiple "big companies" leak my randomly generated email addresses. I create a unique one for each such account, like say my airline frequent flyer account for delta, and I've had several of those leak.
blah1381812301.318719@somedomain.com would never be guessed.
A few ways I've heard about: DuckDuckGo.com has a system that generates a random email address on their domain whenever you need one; you request a new alias and they create a permanent mapping from that new address to your real address. Then mail sent to, say, Foo-Bar-Hotdog@duck.com goes to you; duck remembers the mapping to your address. You can reply back and duck handles the anonymous mapping.
Or you can have a catchall email address on your own domain, where anything sent to any alias on your domain gets forwarded to your own address. Then hamburger@myDomain.com and mcdonalds@myDomain.com go to your real private address, and you don't have to set up each alias in advance. Anytime you join a new service, say reddit, you tell them your address is "reddit@myDomain.com".
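For the curious, here's roughly how the catchall variant is wired up in Postfix (a minimal sketch; the domain and target address are placeholders, and other mail servers have equivalent settings):

```
# /etc/postfix/main.cf
virtual_alias_domains = myDomain.com
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual -- the catchall rule:
# anything@myDomain.com is forwarded to one real mailbox
@myDomain.com    my.real.address@example.com
```

After editing the map you run `postmap /etc/postfix/virtual` and reload Postfix. Many domain registrars and hosted mail providers expose the same catchall idea as a single checkbox, so no server of your own is strictly required.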
All of these have a level of pain associated with them. And they aren't that private. The government could no doubt get a court order to pierce the obscured email addresses.
There's proton email and many others. All of these are too painful for most people.
I have wondered if people who want to be really secret set up a chain of these anon mail forwarding systems.
Theoretically, the easiest way is to use a sub address (more commonly/colloquially known as email aliases or plus addresses, they're described in RFC 5233). You should be able to add a separator character (usually a plus, sometimes other characters instead/in addition) and arbitrary text to your email address, i.e. "myemail+somecompany@example.com" should route to "myemail@example.com"
In practice, this works about 95-99% of the time. Some websites will refuse the + as an invalid special character, and the worst of the worst will silently strip it before persisting it, and may or may not strip it when you input your email another time (such as when you're logging in or recovering your password).
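The mechanics behind subaddressing are simple enough to sketch; here's a minimal illustration of how a receiving system might split the tag off (function name is invented for illustration):

```python
def split_subaddress(address, separator="+"):
    """Split an RFC 5233 style address into (base address, tag).

    "myemail+somecompany@example.com" -> ("myemail@example.com", "somecompany")
    Returns (address, None) when no tag is present.
    """
    local, _, domain = address.partition("@")
    base, sep, tag = local.partition(separator)
    if sep:  # a separator was found in the local part
        return f"{base}@{domain}", tag
    return address, None

print(split_subaddress("myemail+somecompany@example.com"))
# ('myemail@example.com', 'somecompany')
```

This is also exactly why the trick is so easy for a website (or spammer) to defeat: stripping everything between the separator and the "@" is a one-liner.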
I also suspect spammers strip out subaddresses frequently, very little of the spam I receive includes the subaddress.
So the only 100% reliable way is to use your own domain, but you don't need to run your own custom mail server.
Proton lets me bring my own subdomain for those random emails and does a pretty good job of tracking which email was given to whom. It also supports hiding your email even when you want to initiate the contact, not just reply (the plus scheme doesn't allow this). Otherwise you can use their domain too, to stay fully anonymous.
If you’re on Gmail, there’s “plus addressing” - this allows you to append any term after your email - and then sort accordingly.
So if your Gmail is foo.bar@gmail.com you can use foo.bar+servicename@gmail.com and the mail will still end up in your mailbox. Then you can create a rule that sorts incoming mails accordingly.
Well, it is. A quick search revealed the name of a certain big player, although there are some other local companies whose policies can be extended to "extreme sports".
If you follow the jurisdictional trail in the post, the field narrows quickly. The author describes a major international diving insurer, an instructor driven student registration workflow, GDPR applicability, and explicit involvement of CSIRT Malta under the Maltese National Coordinated Vulnerability Disclosure Policy. That combination is highly specific.
There are only a few globally relevant diving insurers. DAN America is US based. DiveAssure is not Maltese. AquaMed is German. The one large diving insurer that is actually headquartered and registered in Malta is DAN Europe. Given that the organization is described as being registered in Malta and subject to Maltese supervisory processes, DAN Europe becomes the most plausible candidate based on structure and jurisdiction alone.
Vulnerability researcher here… Unless your target has a security bounty process or reward, leave them alone. You don’t pentest a company without a contract that specifies what you can and can’t test. Although I would personally appreciate and thank a well-meaning security researcher’s efforts, most companies don’t. I have reported 0days to companies that HAVE bounties and they still tried to put me in hot water over disclosure. Not worth the risk these days.
We had a situation in Sweden where a person found that if you removed part of the URL (/.../something -> /.../) for an online medical help line service, they got back an open directory listing which included files with medical data of other patients. This finding was then sent to a journalist, who contacted the company and made a news article of it. The company accused the tipster and journalist of unlawful hacking and the police opened a case.
But was it? Is it pen testing to remove part of a URL? People debated this question a bit in articles, but then the case was dropped. The line between pen testing and just normal usage of the internet is not clear, but it seems we all agree that there is a line somewhere and that common sense should guide us.
You walk past a ministry office and notice that there is nobody at the door checking people entering, you walk in, you find an office door open, many binders on the shelves, nobody present. You read through the binders, pull out the drawers and see private info etc. You then walk out and send a mail about this. What do you think is going to happen?
This dive instructor was using this insurance company for his clients, and thus had a responsibility to prevent any known risk (data privacy loss in this case).
So he had two options: take his clients and his business to another insurer (and still inform all his current and previous clients about their outstanding risk), or try to help the insurer resolve the risk.
Good guideline advice but it seems you didn't read the article. Their personal data was at risk here. Leaving them alone would very likely result in a breach of this person's data. Both he and you have an ethical responsibility to at minimum notify the business of this problem and follow up with it.
> And the real irony? The legal threats are the reputation damage. Not the vulnerability itself - vulnerabilities happen to everyone. It's the response that tells you everything about an organization's security culture.
See, the moral of the story is that the entity cares more about its face than its responsibility to fix the bug. That's the biggest issue.
He also pointed out that bugs do happen and are understandable, and he agreed to expose them in an ethical manner. But that goodwill, however well or ill intentioned, may not be met with the same tolerance, especially when it comes to "national" level stuff where the bureaucrats know nothing about tech but know it has political consequences: a "defacement" if it were exposed.
Also, I happened to work with them before and know exactly why they have a lot of legal documents and proceedings: bureaucracy, the bad kind, the corrupt kind, where every wrong move can bring huge, if not capital, punishment. So in order to protect their interests, they would rather do nothing, as it is unfortunately the best option. The risk associated with fixing that bug is so high that they would rather not take it, and let it rot.
There are a lot of systems in Hong Kong that are exactly like that, and the code just stays rotten until the next batch of money comes in and opens up a new theatre of corruption. Rinse and repeat.
AFAIK, what this dude did - running a script which tries every password and actually accessing the personal data of other people - is illegal in Germany. The reasoning is, just because the door of a car which is not yours is open, you have no right to sit inside and start the motor. Even if you just want to honk the horn to inform the guy that he has left the door open.
This isn't directly applicable to your point, but I need to correct this: they weren't guessing tons of passwords, they were trying one password on a large number of accounts.
For clarification, here's the actual quote from the article describing the process:
> I verified the issue with the minimum access necessary to confirm the scope - and stopped immediately after.
No notion of a script; "every password" out of a set of a single default password may be open to interpretation; no mention of data downloads (the wording suggests otherwise); no mention of the actual number of accesses (the text suggests a low number, as in "minimum access necessary to confirm the scope").
Still, some data was accessed, but we don't know to what extent or what it actually was, based on the information provided in the article. There's a point to be made about the extent of any confirmation of what seems to be a sound theory at a given moment. But in order to determine whether this is about a stalled number generator or rather a systematic, predictable scheme, there's probably no way around a minimal test. We may still have a discussion about whether a security alert should include dimensions like this (scope of vulnerability), or should be confined to a superficial observation only.
> That's it. No rate limiting. No account lockout.
To me, if he confirmed that there’s no rate limiting on the auth API, this implies a scripted approach checking at least tens (if not more) of accounts in rapid succession.
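For contrast with the missing controls, even a naive server-side lockout would defeat this kind of rapid-fire enumeration. A minimal sketch (all names and thresholds here are invented for illustration, not taken from the article):

```python
import time
from collections import defaultdict

# Hypothetical parameters: lock an account after 5 failed logins
# inside a 15-minute sliding window.
MAX_FAILURES = 5
LOCKOUT_SECONDS = 900

# account_id -> timestamps of failed login attempts
_failures = defaultdict(list)

def record_failure(account_id, now=None):
    """Call on every failed login attempt."""
    _failures[account_id].append(time.time() if now is None else now)

def is_locked_out(account_id, now=None):
    """Check before processing a login attempt."""
    now = time.time() if now is None else now
    # Keep only failures inside the lockout window, then count them.
    recent = [t for t in _failures[account_id] if now - t < LOCKOUT_SECONDS]
    _failures[account_id] = recent
    return len(recent) >= MAX_FAILURES
```

A real deployment would also rate-limit per source IP and persist state outside the process, but even this much turns "tens of accounts in rapid succession" into an immediately visible event.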
Maybe the law should be changed then. The companies that have this level of disregard for security in 2026 are not going to change without either a good samaritan or a data breach.
You don't need to retrieve other people's data to demonstrate the vulnerability.
It's readily evident that people have an account with a default password on the site for some amount of time, and some of them indefinitely. You know what data is in the account (as the person who creates the accounts) and you know the IDs are incremental. You can do the login request and never use the retrieved access/session token (or use a HEAD request to avoid getting body data but still see the 200 OK for the login) if you want to beat the dead horse of "there exist users who don't configure a strong password when not required to". OP evidenced that they went beyond that and saw at least the date of birth of a user on there by saying "I found underage students on your site" in the email to the organization
If laws don't make it illegal to do this kind of thing, how would you differentiate between the white hat and the black hat? The former can choose to do the minimum set of actions necessary to verify and report the weakness, while the latter writes code to dump the whole database. That's a choice
To be fair, not everyone is aware that this line exists. It's common to prove the vulnerability, and this code does that as well. It's also sometimes extra work (setting a custom request method, say) to limit what the script retrieves, and just not the default kind of code you're used to writing for your study/job. Going too far happens easily in that sense. So the rules are to be taken leniently, and the circumstances and subsequent actions of the hacker matter. But I can see why the German rules are this way, and the Dutch ones are similar, for example.
It's not necessarily just Germany. Lots of countries have laws that basically say "you cannot log in to systems that you (should) know you're not allowed to". Technical details such as "how difficult is the password to guess" and "how badly designed is the system at play" may be used in court to argue for or against the severity of the crime, but hacking people in general is pretty damn illegal.
He also didn't need to run the script on more than one or maybe two accounts to verify the problem. He dumped more of the database than he needed to, and that's something the law doesn't particularly like.
People don't like it when they find a well-intentioned lock specialist standing in their living room explaining they need better locks. Plenty of laws apply the same logic to digital "locksmiths".
In reality, it's pretty improbable in most places for the police to bother with reports like these. There have been cases in Hungary where prestigious public projects and national operations were full of security holes with the researchers sued as a result, but that's closer to politics than it is to normal police operations.
No expert, but I assume anything you do that is good-faith usage of the site is OK. Take screenshots and report the potential problem. But making a Python script to pull down data once you know? That is like getting in that car.
A real-life example of what's fine: you walk past a bank at midnight when it is unstaffed and the doors are open, so you have access to the lobby (and it isn't just the night ATM area). You call the police on the non-emergency number and let them know.
This is exactly what I thought. The person did something illegal by accessing random accounts, and no explanation makes this better. He could have asked his diving students for their consent, or asked past students for their consent to access their accounts - but random accounts you cannot access.
Since this is a Maltese company I would assume different rules apply, but no clue how this is dealt with in Malta.
How the company reacted is bad, no question, but I can't gloss over how the person did the initial "recon".
It's illegal in the US, too. This is an incredibly stupid thing to do. You never, ever test on other people's accounts. Once you know about the vulnerability, you stop and report it.
Knowing the front door is unlocked does not mean you can go inside.
Don't comment on topics you know nothing about. Nothing this guy did is illegal in the US. Everything this guy did followed standard procedures for reporting security issues. The company apparently didn't understand anything about running a secure software operation and did everything wrong. And therein lies the problem. Without civil penalties for this type of bad behavior, it will continue. In the US, a lawyer doing this would risk disbarment, as this type of behavior dances on the edge of violating whistleblower laws.
Last year I found a vulnerability in a large annual event's ticket system, allowing me to download tickets from other users.
I had bought a ticket, which arrived as a link by email. The URL was something like example.com/tickets/[string]
The string was just the order number in base 64. The order number was, of course, sequential.
I emailed the organizer and the company that built the order system. They immediately fixed it... Just kidding. It's still wide open and I didn't hear anything from them.
I'm waiting for this year's edition. Maybe they'll have fixed it.
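The class of bug in that ticket system is easy to illustrate: base64 is an encoding, not encryption, so a sequential order number "hidden" this way is both reversible and enumerable. A sketch with made-up numbers (not the actual event's):

```python
import base64

def ticket_token(order_number: int) -> str:
    """Hypothetical: the ticket URL path is just the order number, base64-encoded."""
    return base64.urlsafe_b64encode(str(order_number).encode()).decode().rstrip("=")

def order_from_token(token: str) -> int:
    """Anyone can reverse the token: re-pad and decode."""
    padded = token + "=" * (-len(token) % 4)
    return int(base64.urlsafe_b64decode(padded).decode())

# Holding one ticket link, neighbouring links are trivial to compute:
print(ticket_token(4211))   # NDIxMQ
print(ticket_token(4212))   # NDIxMg
```

The usual fix is to key such URLs on a long random identifier (e.g. a UUID or a signed token) that carries no relationship to neighbouring orders.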
Hey TFA, other people have gone to prison for finding monotonic user/account IDs and _testing_ their hunch to see if it's true. See, doing that puts you at great risk of violating the CFAA. Basically, the moment you knew they were allocating account IDs monotonically and with a default password was the moment you had a vulnerability that you could report without fear of prosecution, but the moment you tested that vulnerability is the moment you may have broken the law.
Writing about it is essentially confessing. You need a lawyer, and a good one. And you need to read about these things.
Because Americans can never comprehend literally anywhere else on earth existing. Genuinely, if any other place on earth tried this crap… the Americans would lose their minds.
IANAL, but the law in Germany is basically the same in this case: accessing data that's meant to be protected and not intended for you is illegal. It depends somewhat on the interpretation of what "specifically protected" ("besonders gesichert") means. https://www.gesetze-im-internet.de/stgb/__202a.html
What is CFAA? I couldn't find anything about it in EU or Malta. Is it something in India or China? Or Japan? Hmm, maybe I'm missing another country.. Australia?
He had enough proof: his own students, who presumably agreed. And in case the company still pretended there was no problem, you could still crawl their entire user base...
> Basically, the moment you knew they were allocating account IDs monotonically and with a default password was the moment you had a vulnerability that you could report without fear of prosecution
That logic is garbage and assumes there is some arbitrary point at which a user should magically know the difference between a few IDs happening to be near each other versus a system wide problem. The law would use the interpretations of "knowingly", "intent" and in this case "reasonable".
Then they could kick him out of the org for "creating a bogus account" - "our company isn't bad, you're the bad actor". The bad company he was trying to get to fix their thing didn't behave properly, end of story.
This happens over and over again because for so many companies the natural thing is to hide any problem and threaten to sue anyone who discloses it. Software problems have broken that typical behavior, to some extent.
I salute the author of this post who dared to do the right thing. I hope the company comes to their senses and doesn't try to punish the diving instructor. Over and over companies have tried this same "attack the problem reporter" strategy when software problems are revealed.
I find it interesting how American-accented people publish on social media how to access non-linked FBI files related to the Epstein leak, by updating a URL.
> Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.
Why sign anything at all? The company was obviously not interested in cooperation, but in domination.
It's clear that the intentions of the insurance company are selfish and they want to gain leverage over the reporter. Even if the reporter managed to add a clause about data deletion, the company could still make the reporter's life hell with the remaining clauses that were signed. This is not worth the risk.
Having worked in low-level embedded systems that could be considered "system critical", it's a horrible feeling knowing what's in that code and having no actual recourse other than quitting (which I have done on a few occasions because I did not want to be tied to that disaster waiting to happen).
I actually started a legal framework and got some basic bills together (mostly wording) and presented this to many of my colleagues; all agreed it was needed and loved it, and a few lawyers said the bill/framework was sound... even had some carve-outs for "mom-n-pops" and some other "obvious" things (like allowing for a transition into it).
Why didn't I push it through? 2 reasons:
1.) I'd likely be blackballed (if not outright killed) because "the powers that be" (e.g. large corp's in software) would absolutely -hate- this ... having actual accountability AND having to pay higher wages.
2.) Doing what I wanted would require federal intervention, and the climate has not been ripe for new regulations, let alone governing bodies, in well over a decade.
Hell, I even tried to get my PE in Software, but right as I was going to start the process, the PE for Software was removed from my state (and isn't likely to ever come back).
I 100% agree we should have even a PE for Software, but it's not likely to happen any time soon because Software without accountability and regulation makes WAY too much money ... :(
We’d actually have to respect software development as an important task and not a cost to be minimized and outsourced.
It is countries like US where anyone can call themselves whatever they feel like that have devalued our profession.
I have been on the liability side ever since. People don't keep broken cars unless they cannot afford anything else; software is nothing special, other than the lack of accountability.
You could do that with civil engineering: anyone gets to design bridges, and once the bridge is done we inspect it. "Sorry, X isn't redundant, your engineering is bad, tear it down."
I think this is mostly a US thing.
I thought it would be noteworthy to talk about another industry, accountancy. This is how it works in the UK, but it is similar in other countries. They are called 'Chartered Accountants' here, because their institute has a Royal Charter saying they are the good guys.
To become a Chartered Accountant has no prerequisites. You 'just' have to complete the qualification of the institute you want to join. There are stages to the exams that prior qualifications may gain you exemptions from. You also have to log practical experience proving you are working as an accountant with adequate supervision. It takes about 2-3 years to get the qualification for someone well supported by their employer and with sufficient free time. Interestingly many Accountants are not graduates, and instead took technician level qualifications first, often the Association of Accounting Technicians (AAT). The accounting graduates I have interviewed wasted 3 years of their lives...
There are several institutes that specialise in different areas. Some specialise in audit. One specialises in Management Accounting (being an accountant at a company really). The Management accountants one specifically prohibits you from doing audit without taking another conversion course. All the institutes have CPD requirements (and check) and all prohibit you from working in areas that you are not competent, but provide routes to competency.
There are standards to follow, Generally Accepted Accounting Practice GAAP, UK Financial Reporting Standards FRS and the International equivalent IFRS. These cover how Financial Statements are prepared. There are separate standards-setting bodies for these. There is also a set of standards that cover how an audit must be done. Then there is tax law. You are expected to know them for any area you are working in. All of these are legally binding on various types of corporation. See how that switches things around? Accountants are now there to help the company navigate the legal codes. The directors sign the accounts and are liable for misstatements; that encourages them to have a director who is an accountant... an audit committee etc.
How does that translate to software?
There are lots of standards, NIST, GDPR, PCI, some of which are legally or contractually binding. But how do I as a business owner know that a software engineer is competent to follow them. Maybe I am a diving company that wants a website. How do I know this person or company is competent to build it? It requires software engineers with specific qualifications that say they can do it, and software engineers willing to say, 'I'm sorry I am not able to work in this field, unless I first study it'.
I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet, but I often wonder if we should require some sort of certification and insurance for large business sites that handle personal info or money. There'd be a Certified Professional Software Engineer who has to sign off on it, and thus maybe has the clout to push back on being forced to implement whatever dumb idea an MBA has to drive engagement or short-term sales.
Maybe. It's not like it's worked very well lately for Boeing or Volkswagen.
https://ij.org/press-release/oregon-engineer-makes-history-w...
But also maybe hobbyists don't have any business storing PII at scale just like they have no business building public bridges or commercial aircraft.
Given that this is Malta in particular, the author probably wants to avoid going there for a bit. It's a country full of organized crime and corruption where people like him would end up with convenient accidents.
That's not how any of this works. You are basically arguing for the right to hide criminal actions. Filing with the CSIRT is the only legal action for the white hat to take. This is explicitly by design. Complaining about it is like complaining the police arrested you for a crime you committed.
> If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper
To match this metaphor to TFA, the architect has to break in to someone else's apartment to prove there's a flaw. IANAL but I'm not positive that "I'm an architect and I noticed a crack in my apartment, so I immediately broke in to the apartments of three neighbours to see if they also had cracks" would be much of a defence against a trespass/B&E charge.
I work with a firm that has an annual pen test as part of its SOC2/GDPR/HIPAA audit, and it's basically an exercise in checking boxes. The pen test firm runs a standard TLS test suite, and a standard web vulnerability test suite, and then they click buttons for a while...
The pen test has never found any meaningful vulnerabilities, and several times drive-by white hats have found issues immediately after the pen test concluded.
I guess I should feel lucky they didn't try to have me criminally prosecuted.
The throwaway email was for the ""reliable"" ones. The trusted names. Or those without over-reaching data collection (one big name, Cheffin, one of the reported ones, had an over-reaching habit).
Having a throwaway alias proved beneficial. From zero spam to my email, spam suddenly started to arrive at a frequency of about 4/week. It kept coming until the alias got disabled. I cannot tell which was the culprit, only have a shortlist based on timing. But that never-ever-elsewhere-used email somehow got to fraudster elements from the few UK property agent organizations I contacted. In a very short time (a few weeks).
blah1381812301.318719@somedomain.com would never be guessed.
Or you can have a catch-all email address on your own domain, where anything sent to any alias on your domain gets forwarded to your own address. Then hamburger@myDomain.com and mcdonalds@myDomain.com go to your real private address. You don't have to set up each alias in advance. Anytime you join a new service, say reddit, you tell them your address is "reddit@myDomain.com".
All of these have a level of pain associated with them. And they aren't that private. The government could no doubt get a court order to pierce the obscured email addresses.
There's proton email and many others. All of these are too painful for most people.
I have wondered if people who want to be really secret set up a chain of these anon mail forwarding systems.
In practice, this works about 95-99% of the time. Some websites will refuse the + as an invalid special character, and the worst of the worst will silently strip it before persisting it, and may or may not strip it when you input your email another time (such as when you're logging in or recovering your password).
I also suspect spammers strip out subaddresses frequently, very little of the spam I receive includes the subaddress.
So the only 100% reliable way is to use your own domain, but you don't need to run your own custom mail server.
So far I've been happy. I hope I'll stay happy.
So if your Gmail is foo.bar@gmail.com you can use foo.bar+servicename@gmail.com and the mail will still end up in your mailbox. Then you can create a rule that sorts incoming mails accordingly.
Huh, apparently they're registered in Malta, what a coincidence...
https://www.reddit.com/r/scuba/comments/1r9fn7u/apparently_a...
They left more than enough clues to figure out that this is DAN (Divers Alert Network) Europe.
Ironically, this will garner far more attention and focus on them than if they had disclosed this quietly without threats.
There are only a few globally relevant diving insurers. DAN America is US based. DiveAssure is not Maltese. AquaMed is German. The one large diving insurer that is actually headquartered and registered in Malta is DAN Europe. Given that the organization is described as being registered in Malta and subject to Maltese supervisory processes, DAN Europe becomes the most plausible candidate based on structure and jurisdiction alone.
Or maybe they took what they know to sell to the black hats.
But was it? Is it pen testing to remove part of a URL? People debated this question a bit in articles, but then the case was dropped. The line between pen testing and normal usage of the internet is not a clear one, but it seems we all agree that there is a line somewhere and that common sense should guide us in some sense.
So he had two options: take his clients and his business to another insurer (and still inform all his current and previous clients about their outstanding risk), or try to help the insurer resolve the risk.
> And the real irony? The legal threats are the reputation damage. Not the vulnerability itself - vulnerabilities happen to everyone. It's the response that tells you everything about an organization's security culture.
See, the moral of the story is that the entity cares more about its face than about its responsibility to fix the bug; that's the biggest issue.
He also pointed out that bugs do happen and are understandable, and he agreed to disclose them in an ethical manner. But goodwill, however well or ill intentioned, may not be met with the same tolerance, especially when it comes to "national"-level stuff, where the bureaucrats know nothing about tech but do know it has political consequences: a loss of face if it were exposed.
Also, I happened to work with them before and know exactly why they have so many legal documents and proceedings: bureaucracy, the bad kind, the corrupt kind, where every wrong move brings huge, if not capital, punishment. So, to protect their interests, they would rather do nothing, as that is unfortunately the best option. The risk associated with fixing that bug is so high that they would rather not take it, and let it rot.
There are a lot of systems in Hong Kong that are exactly like that, and the code just stays rotten until the next batch of money comes in and opens up a new theatre of corruption. Rinse and repeat.
https://www.nilsbecker.de/rechtliche-grauzonen-fuer-ethische...
This isn't directly applicable to your point, but I need to correct this. They weren't guessing tons of passwords; they were trying one password on a large number of accounts.
> I verified the issue with the minimum access necessary to confirm the scope - and stopped immediately after.
No notion of a script, "every password" out of a set of a single default password may be open to interpretation, no mention of data downloads (the wording suggests otherwise), no mention of the actual number of accesses (the text suggests a low number, as in "minimum access necessary to confirm the scope").
Still, some data was accessed, but we don't know to what extent and what this actually was, based on the information provided in the article. There's a point to be made about the extent of any confirmation of what seems to be a sound theory at a given moment. But, in order to determine whether this is about a stalled number generator or rather a systematic, predictable scheme, there's probably no way around a minimal test. We may still have a discussion, if a security alert should include dimensions like this (scope of vulnerability), or should be confined to a superficial observation only.
> That's it. No rate limiting. No account lockout.
To me, if he confirmed that there’s no rate limiting on the auth API, this implies a scripted approach checking at least tens (if not more) of accounts in rapid succession.
We need a change in law but more to do with fining security breaches or requiring certification to run a site above X number of users.
It's readily evident that people have an account with a default password on the site for some amount of time, and some of them indefinitely. You know what data is in the account (as the person who creates the accounts) and you know the IDs are incremental. You can do the login request and never use the retrieved access/session token (or use a HEAD request to avoid getting body data but still see the 200 OK for the login) if you want to beat the dead horse of "there exist users who don't configure a strong password when not required to". OP evidenced that they went beyond that and saw at least the date of birth of a user on there by saying "I found underage students on your site" in the email to the organization
If laws don't make it illegal to do this kind of thing, how would you differentiate between the white hat and the black hat? The former can choose to do the minimum set of actions necessary to verify and report the weakness, while the latter writes code to dump the whole database. That's a choice
To be fair, not everyone is aware that this line exists. It's common to prove the vulnerability, and this code does that as well. It's also sometimes extra work (setting a custom request method, say) to limit what the script retrieves, and just not the default kind of code you're used to writing for your study/job. Going too far happens easily in that sense. So the rules are to be taken leniently, and the circumstances and subsequent actions of the hacker matter. But I can see why the German rules are this way, and the Dutch ones are similar, for example.
Germany is not exactly well-known for having reasonable IT security laws
He also didn't need to run the script to try more than one or maybe two accounts to verify the problem. He dumped more of the database than he needed to, and that's something the law doesn't particularly like.
People don't like it when they find a well-intentioned lock specialist standing in their living room explaining they need better locks. Plenty of laws apply the same logic to digital "locksmiths".
In reality, it's pretty improbable in most places for the police to bother with reports like these. There have been cases in Hungary where prestigious public projects and national operations were full of security holes with the researchers sued as a result, but that's closer to politics than it is to normal police operations.
No expert, but I assume anything you do that is good-faith usage of the site is OK. And take screenshots and report the potential problem. But making a Python script to pull down data once you know? That is like getting into that car.
A real-life example of what's fine: you walk past a bank at midnight when it is unstaffed and the doors are open, so you have access to the lobby (and it isn't just the night-ATM area). You call the police on the non-emergency number and let them know.
Since this is a Maltese company I would assume different rules apply, but no clue how this is dealt with in Malta.
How the company reacted is bad, no question, but I can’t glance over the fact how the person did the initial „recon“.
Knowing the front door is unlocked does not mean you can go inside.
> "Whatever Europe is doing, do the opposite"
on brand
I had bought a ticket, which arrived as a link by email. The URL was something like example.com/tickets/[string]
The string was just the order number in base 64. The order number was, of course, sequential.
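Base64 is an encoding, not encryption, so a scheme like that can be sketched in a few lines (the function names and domain are illustrative, not the vendor's actual code):

```python
import base64

def ticket_url(order_number: int) -> str:
    # Encode the sequential order number; base64 hides nothing,
    # it only changes the alphabet.
    token = base64.b64encode(str(order_number).encode()).decode()
    return f"https://example.com/tickets/{token}"

def order_from_token(token: str) -> int:
    # Anyone holding one ticket can decode their own token...
    return int(base64.b64decode(token).decode())

# ...and then walk the sequence: order N, N+1, N+2, etc.
ticket_url(1234)
# → "https://example.com/tickets/MTIzNA=="
```

An unguessable scheme would instead use a random token stored server-side, or at minimum an HMAC over the order number with a secret key.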
I emailed the organizer and the company that built the order system. They immediately fixed it... Just kidding. It's still wide open and I didn't hear anything from them.
I'm waiting for this year's edition. Maybe they'll have fixed it.
Writing about it is essentially confessing. You need a lawyer, and a good one. And you need to read about these things.