ff7c11 · 4 years ago
By temporarily defacing the Sega website and modifying files I think they have crossed the line. Enumerating what access they have, rooting through S3 and reporting it is OK, but by messing around like script kiddies they can no longer claim good faith. Publicising that you've illegally defaced the website is a little silly. Of course, Sega should not have got themselves so completely owned. Sega deserved to be punished, but these VPN twits have clearly committed a crime and Sega should maybe sue their company.
dragontamer · 4 years ago
> Sega deserved to be punished

The store owner was gone on vacation, and thus the side of his store was riddled with graffiti. He deserved to get graffiti because he didn't take basic security precautions.

Fnoord · 4 years ago
You don't need to break security to spray the side of a store. You do need to break security to deface a website.

Analogies are analogies; they're unnecessary in this case (nowadays), because we have laws to punish people who deface a website, and the law stands on its own.

It's akin to people who call copyright infringement 'theft'. It's not the same: it's a different mechanism, the damages are different, and different laws apply. That doesn't mean either is right or wrong or anything like it; like I said, the laws stand on their own, respectively.

wwtrv · 4 years ago
To me this situation seems more like a store owner forgetting to lock the door, somebody noticing, coming inside, and putting up a sign on the front window saying that the store owner is too stupid to lock his own door, then calling the owner to tell him about it.
throwawaygh · 4 years ago
I think "deserves" is a better word than "deserved".

The punishment for grossly negligent handling of PII should not be a childish website defacement, and should not be enforced by vigilantes. Obviously.

The punishment for mishandling PII like this should be a painful fine, a rigorous externally imposed technical audit, and possibly civil/criminal implications for senior leadership.

(If the last one sounds unreasonable, consider Equifax. Many executives in charge of security orgs do not have technical degrees and, more importantly, have not booked any time in the trenches. Being self-taught and having non-engineering degrees can be okay, but combining that with no in-the-trenches experience is inexcusable. Assigning security to corporate politicians who don't understand the work that they are managing should count as criminal negligence.)

matheusmoreira · 4 years ago
It's more like a store owner who left all his customers' names, addresses, credit cards, purchasing history and everything else just lying out there in the open. Public embarrassment is too light a punishment for the inevitable day when someone else comes and takes it. The real victims are all the people harmed by their negligence.
123pie123 · 4 years ago
they don't deserve to get graffiti, but it is expected

they should be punished by legal means (legal proceedings or lawsuits) and by reputational damage

EGreg · 4 years ago
So the store owner can just leave all his customers’ credit card information lying around and ignore PCI compliance etc. because anyone who would possibly use it for nefarious purposes is a criminal?

How would you prevent such negligence?

burnished · 4 years ago
Strong disagree (not about the law claims; I'll leave those to the law-knowers), but about the moral implications of 'crossing a line'. It reads like they revealed security vulnerabilities that had the potential to harm others. I think they can be allowed some leeway in their methods.
throwoutway · 4 years ago
Nope. That can come after responsible disclosure. Did they try the responsible path first? Looks like they notified and then kept going for another 10 days
walrus01 · 4 years ago
it seems like there's a couple of hundred consumer-facing VPN service providers, all with slick looking marketing websites to sell you a $5/mo service.

lots of them are nothing more than 1 or 2 people and some rented 1U servers or dedicated servers somewhere on whatever ISP they can find with cheap IP transit / DIA rates. maybe a part-time website design/graphic arts person they found via fiverr to make things look cool.

from the perspective of a colocation-specialist ISP or medium sized generalist ISP that offers colo, they get lots of weird requests for colo and dedicated server services from VPN companies they've never heard of before. often with something like a corporate entity that exists in cyprus, panama or even weirder places.

looking at this in terms of the risk that a VPN provider presents to an ISP's reputation, IP space, attracting unusual volumes and numbers of DDoS, etc... there is a certain amount of "KYC" (exact same idea as finance industry KYC) that needs to go into a potential vpn service provider as a colocation client before quoting them a price or accepting them as a customer. fail to do that at your own risk.

it's very much in the weird/shady/grey market end of the ISP market.

the level of technical acumen and professionalism varies greatly between VPN providers.

rosndo · 4 years ago
> often with something like a corporate entity that exists in cyprus, panama or even weirder places.

Wait, how is Cyprus supposed to be a weird place to incorporate?

I suppose Delaware is weird too? It’s not like anyone is actually based there.

>looking at this in terms of the risk that a VPN provider presents to an ISP's reputation, IP space

None, because you obviously make the VPN provider bring their own IPs. And even if you don’t? Just block email and the IP reputation issue is solved.

>attracting unusual volumes and numbers of DDoS, etc..

This has calmed down so so much over the past years.

> fail to do that at your own risk.

Not much risk at all as long as you make them prepay their bills. Nobody is getting depeered because they offered colo to a sketchy VPN provider.

Literally nothing can happen, the big ISPs do not give a single fuck about this.

(I don’t have any involvement with VPN nonsense, but do have extensive experience with “bulletproof” hosting)

tomrod · 4 years ago
Who are reputable in the space?
kiklion · 4 years ago
> By temporarily defacing the Sega website

I may have missed it but what did they deface?

I see a proof of script execution in what appears to be an uploaded file at an address made of a random string of letters and numbers ending in .htm.

So if done correctly, there is a near-zero chance of any public user stumbling onto the page.

whoknew1122 · 4 years ago
They clearly said they modified careers.sega.co.uk and posted a screenshot of the careers site displaying vpnoverview's logo (https://vpnoverview.com/wp-content/uploads/screenshot-about-...)
foldr · 4 years ago
>Sega deserved to be punished

I don't understand this way of thinking. They made a serious security oversight, but that doesn't mean that they deserve to have their website defaced.

nulbyte · 4 years ago
> Sega deserved to be punished, but these VPN twits have clearly committed a crime

I think the rest of the sentence makes it clear the author didn't intend to support defacement as punishment.

totalZero · 4 years ago
Nah man, don't blame the victim. If I don't lock my door it doesn't mean that I have invited burglars into my home.


aaronwp · 4 years ago
Sega Europe left AWS S3 creds lying around in a server image on downloads.sega.com. I was able to use them to enumerate a bunch of storage, dig out more keys, and mock up a spear phishing attack against the Football Manager forums.

All the keys and services are secure and the breach is closed.

phnofive · 4 years ago
Is it common, now or historically, to follow up a notification of compromise with self-directed PoC and privilege escalation exercises on the resources of a company with which you're not under contract? My naïve take is that this was a series of well-intentioned but possibly criminal actions used to illustrate a lesson we could all be reminded of from time to time.

Also, the HackerOne page doesn't appear to be claimed by SEGA Sammy, so notices might dead-end there as well.

throwawaygh · 4 years ago
> Is it common, now or historically

Historically: yes.

Now: no.

> possibly criminal

Sans some sort of formal agreement (which platforms like HackerOne might facilitate), it's definitely criminal. (IMO at least not unethical, to be clear.)

Again, that's sans some sort of contract, either one-off or platform-based. If SEGA wanted a prosecution, they would almost certainly be able to convince a prosecutor to press charges. The prosecutor would certainly get a guilty verdict. (Or, much more likely, a guilty plea with a bit of prison time and stiff probation.)

This still happens from time to time in much more ambiguous situations. E.g., https://www.nytimes.com/2021/10/15/us/missouri-st-louis-post...

Fortunately, there's a bit of a gentleman's detente among reasonable white hats and reasonable companies. But if you venture much outside of the small set of companies who rely on and have technologists in senior leadership, the story changes fast.

voakbasda · 4 years ago
Yup, this was totally criminal in most jurisdictions. I don’t care if the person intended to help; this kind of vigilante hacking deserves to land you in prison.

You want a bounty? Talk to me before you break into my systems. Because once you do that without my permission, you have proven yourself completely unworthy of being trusted. Why should I believe that you have not installed a rootkit or other tech that you did not subsequently disclose?

You will need to be treated the same as any other criminal. If my insurance gets involved, that also probably means directly assisting with an attempt at criminal prosecution.

So, yeah, brilliant strategy. /s

aaronwp · 4 years ago
Yes, if PII is involved it's common to run an audit like this. In addition to the access keys on the server image, Sega also accidentally published a database export containing PII. In order to write a comprehensive disclosure I have to investigate thoroughly.

And yeah, there's no branding or information on HackerOne. Even if this had been in scope, I would have thought twice about submitting anything. Our publishing standards match HackerOne ethical disclosure standards.

ta3927590 · 4 years ago
Historically, definitely. Currently? Fairly common. However, what's both historically and currently uncommon is having the sense not to do so while also identifying yourself, for the h4x0r cred or whatever. Which is of course childishly idiotic, but it makes my job a whole lot easier. In my experience, if you're not under any such contract, and even if you are going to report such a compromise in complete good faith and have done no damage, you are far better off doing so as anonymously as possible. Nobody likes to be embarrassed, and it's a lot simpler for a corporation with a stock price and public image to think about to pin the whole situation on those damn hackers than to own up to even the slightest degree of incompetence. Typing at work in sort of a hurry, so please forgive grammatical issues.
vmception · 4 years ago
Should have just left it at that and collected the bug bounty, defacing for a proof of concept and telling everyone pretty much makes you ineligible in any white hat program. Can I get dibs on your flat while you're in the... camp?
duxup · 4 years ago
> dig out more keys

I guess that if they leave them lying around, it's likely there are more.

jsploit · 4 years ago
> I was able to use them to enumerate a bunch of storage, dig out more keys

That's unethical and likely criminal without explicit testing authorization (which it appears you didn't have).

I wonder if there are any examples of "researchers" being sued/prosecuted for stunts like this.

imwillofficial · 4 years ago
This would be awesome as a blog post if you ever want to go into detail on how you executed each step.


robtaylor · 4 years ago
Assertive: Show me something else.
aaronwp · 4 years ago
stay tuned
0xbadcafebee · 4 years ago
A good example of how the usability of your product directly affects security.

AWS has multiple forms of credentials. IAM Users (static keys tied to a specific user identity) are one form. But you can also authenticate via SAML or OIDC. If you use SAML/OIDC, you can enforce temporary IAM credentials, audit who authenticated, expire credentials, enforce password rules & MFA, etc.

Because IAM Users are the easiest thing to set up, that's what everyone does. And that leads to compromises. If, on the other hand, IAM Users were more difficult to set up than SAML/OIDC, then everyone would use SAML/OIDC and temporary credentials. And that would mean giant compromises like these would be much rarer, because it would eliminate the easiest form of compromise: people putting static, non-expiring keys where they shouldn't be.

So when you develop a thing, think about the consequences of it, and design it so that users are more inclined to use it in a way that leads to good outcomes. That might even mean making parts of it intentionally hard to use.
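As an aside, the two kinds of credentials are even distinguishable at a glance: AWS access key IDs for long-term IAM user keys begin with "AKIA", while temporary STS session keys begin with "ASIA". A minimal sketch (the key IDs below are AWS's documented example/placeholder values, not real credentials):

```python
# Heuristic check for the kind of AWS credential found in a leak:
# long-term IAM user access keys start with "AKIA", while temporary
# STS session keys start with "ASIA" and expire on their own.
def classify_access_key(key_id: str) -> str:
    """Classify an AWS access key ID by its documented prefix."""
    if key_id.startswith("AKIA"):
        return "long-term"   # static IAM user key: dangerous if leaked
    if key_id.startswith("ASIA"):
        return "temporary"   # STS session key: expires automatically
    return "unknown"

# A leaked "AKIA" key never expires on its own and must be rotated.
print(classify_access_key("AKIAIOSFODNN7EXAMPLE"))  # long-term
print(classify_access_key("ASIAIOSFODNN7EXAMPLE"))  # temporary
```

A leak of "ASIA" keys is self-healing once the session expires; a leak of "AKIA" keys, like the ones described here, is open-ended.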

watermelon0 · 4 years ago
When allowing 3rd parties to access your AWS resources, IAM keys are in most cases the only way to achieve this.

For example, most CI/CD systems don't support OIDC yet, so you have to add IAM keys to them. GitHub Actions is a notable exception here.

twistedpair · 4 years ago
I'd push back on that norm.

I listened to a vendor pitch for a product that would need access to my cloud assets. They wanted me to export auth keys as strings and hand them over, with super high access rights. I laughed and pointed out OIDC, Workload Identity Federation, cross account user identities... etc as more secure methods that didn't require handing over any secrets.

Multi-billion dollar vendor; their engineer just gave me a blank stare as if the notion was completely novel. It's not. None of the products/integrations I build require a customer to share their cloud creds to work w/ their cloud assets.

2020 is calling...

0xbadcafebee · 4 years ago
Maybe not OIDC, but most support SAML, which is good enough.

There's also SAML/OAuth2/OIDC proxies you can use along with role-specific service accounts, so even legacy app access can be audited and controlled with temporary sessions. One OAuth2 reverse proxy can authenticate entire subdomains worth of web apps. (https://oauth2-proxy.github.io/oauth2-proxy/)

If some proprietary app says it only supports static IAM keys, the AWS SDK can still transparently handle AWS STS temporary credentials. You just authenticate with some other tool (say, saml2aws), the tokens are cached locally, and the AWS SDK takes care of the rest. (You can also configure the AWS SDK's credential_process feature to make that seamless.)
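For reference, a credential_process helper is just any program that prints a JSON document in the shape the SDK/CLI expects. A hedged sketch (the key values are placeholders; a real helper would fetch them from STS or an SSO broker):

```python
import json
from datetime import datetime, timedelta, timezone

# A credential_process helper prints this JSON shape to stdout; the AWS
# SDK/CLI parses it and uses the (temporary) credentials until Expiration.
def emit_credentials(access_key, secret_key, session_token, ttl_minutes=60):
    expiry = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    doc = {
        "Version": 1,  # required by the credential_process protocol
        "AccessKeyId": access_key,
        "SecretAccessKey": secret_key,
        "SessionToken": session_token,
        "Expiration": expiry.isoformat(),
    }
    print(json.dumps(doc))
    return doc

# Placeholder values only; a real helper would obtain these dynamically.
emit_credentials("ASIA-PLACEHOLDER", "secret-placeholder", "token-placeholder")
```

Pointing `credential_process` in `~/.aws/config` at such a helper lets legacy tooling consume short-lived credentials without ever seeing a static key.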

Cross-account AWS access can be granted to specific roles to be assumed with specific IAM policies. No keys or users at all.

dc-programmer · 4 years ago
If the third party has their own IAM users, you can create a cross-account trust relationship where you allow their IAM entity to assume a (scoped-down) role in your account. Then they are able to retrieve temporary credentials to assume this role.
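The trust relationship described above boils down to a small policy document on the role in your account. A sketch, with a made-up partner account ID and external ID:

```python
import json

# Sketch of the trust policy that lets a third party's IAM entity assume
# a scoped-down role in your account. The account ID and external ID
# below are hypothetical placeholders.
def cross_account_trust_policy(partner_account_id: str, external_id: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{partner_account_id}:root"},
            "Action": "sts:AssumeRole",
            # The external ID guards against the confused-deputy problem.
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }],
    }
    return json.dumps(policy, indent=2)

print(cross_account_trust_policy("111122223333", "example-external-id"))
```

The permissions the partner actually gets come from the role's separate permissions policy, which you scope down as tightly as you like; no static keys change hands.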
banana_giraffe · 4 years ago
Reminds me of this I stumbled across for ngrok:

> Can I run my own ngrok server?
>
> Yes, kind of. You may license a dedicated installation of the ngrok server cluster for commercial use. You provide us with keys to an AWS account and we will install the server cluster software into that account

I have no idea how common this pattern is, but personally, the idea of giving someone else AWS creds that aren't _very_ locked down scares me.

drjasonharrison · 4 years ago
This also occurs when your AWS resources need to access 3rd party services. Some services don't have temporary key support.
rosndo · 4 years ago
It’s hilarious to see people generating content like this to push their VPN affiliate marketing schemes.
walrus01 · 4 years ago
If there's one thing that I can absolutely rely upon, it's for VPN service providers to use any and every form of shady grey marketing sales technique that exists.
politelemon · 4 years ago
If I'm understanding correctly, a whole bunch of credentials (IAM keys, DB passwords, Steam keys, and MailChimp keys) were lying around in S3 buckets.

But I don't understand the use case, what would be the purpose of uploading those details into S3 buckets? Or I suppose I'm trying to reverse engineer the situation where the dev/ops team decided to do this.

grogenaut · 4 years ago
S3 keeps secrets out of source code, so you at least don't have to purge git history, you can lock access down to "internal developers", and you can relatively easily rotate the creds: just find everything in the creds bucket (instead of searching all your code).

Handling of secrets has gone through many rapid iterations in the cloud lately since around 2013.

For AWS, the progression has roughly been: in source; then in a magic file that lives on the build machine; then in S3 with crypto at rest that you pull when you boot your machine (or DynamoDB, or a DB), with just one boot password or IAM role to get you access to the rest; then in env vars for the service; then, more recently, Secrets Manager / SSM Parameter Store.

Various organizations and pieces of software are somewhere along this curve. And the less cared for this software is (or even known about, people forget software), the further back on the curve it likely is.

Beyond the above methods there is a more constant rotation behavior, similar to HashiCorp Vault, using SSM/Secrets Manager, and a drive to require all systems to use constantly rotating credentials (no static creds). I'm not sure what comes after that.

However what system you use is highly dependent on your organizational maturity and internal threat model.
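The lookup order implied by that progression can be sketched as a tiny resolver: prefer the managed store, fall back to the environment, and never fall back to a value baked into source. (The `store` dict here is a stand-in for Secrets Manager / SSM Parameter Store.)

```python
import os

# Toy sketch of a secret lookup order: managed store first, environment
# variable second, and no hardcoded default ever.
def resolve_secret(name: str, store: dict) -> str:
    if name in store:                     # managed store wins
        return store[name]
    value = os.environ.get(name)          # env var is the legacy fallback
    if value is not None:
        return value
    raise KeyError(f"secret {name!r} not provisioned")

# The store takes precedence over the environment.
os.environ["DB_PASSWORD"] = "from-env"
print(resolve_secret("DB_PASSWORD", {"DB_PASSWORD": "from-store"}))  # from-store
print(resolve_secret("DB_PASSWORD", {}))                             # from-env
```

Failing loudly on a missing secret, instead of silently using a default, is what keeps forgotten services from limping along on stale creds.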

drjasonharrison · 4 years ago
Rather than use a password manager, or credential store, or some other secure way to keep these credentials safe while providing access to internal developers for development purposes, they put them on S3.

Here's an example I have seen:

- an env file is needed for development, to run a service on a development machine and to access the staging deployment
- the credentials in the env file aren't per-developer, because that requires work to set up accounts for every developer with the staging hosting service
- so make a copy of the credentials and put them in an env file on the NAS
- the NAS isn't available from home or from other network locations
- so make a copy of the env file in the cloud

If the S3 bucket hadn't been public they probably would have been fine.

ljm · 4 years ago
One can only speculate, but I can't imagine how many companies will avoid investing in security here because they think the secrets in their git repos and S3 buckets are perfectly safe, they let some people skip 2FA because it's too inconvenient for them, some people have root access on AWS because it's easier, and so on. Maybe they even give the job to people who don't have much experience in the field and are still learning how to set things up in the cloud.

A publicly accessible S3 bucket suggests that someone mistakenly thought it was private, if only by obscurity.

ff7c11 · 4 years ago
Also, if you don't have a public access block in place, a private bucket can contain public files! Even if you can't list the files in the bucket, there are tools which try to guess common file names from guessed bucket names e.g. sega-secret-sauce.s3.amazonaws.com/.env - if someone uploaded a file there without setting the ACL correctly there could be an unprotected file in the private bucket.
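The guessing those tools do is nothing more than combining candidate bucket names with common sensitive file names. A harmless sketch that only builds the candidate URLs (the bucket names are made up; no requests are made):

```python
from itertools import product

# Sketch of what bucket-guessing tools enumerate: every combination of a
# guessed bucket name and a commonly exposed file name becomes a URL to
# probe for an unauthenticated 200 response.
def candidate_urls(buckets, filenames):
    return [
        f"https://{bucket}.s3.amazonaws.com/{name}"
        for bucket, name in product(buckets, filenames)
    ]

urls = candidate_urls(
    ["example-corp-backups", "example-corp-secret-sauce"],
    [".env", "config.json", "backup.sql"],
)
print(len(urls))  # 6
```

This is why the account-level public access block matters: it protects you even from individual objects whose ACLs were set wrong.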
isbvhodnvemrwvn · 4 years ago
Why they used IAM users in the first place I don't know, but as for the IAM credentials: I've seen people use Terraform to generate the users and access keys, storing the access keys in the Terraform state (you can't access secret keys after they are generated), and the entire Terraform state is typically stored in something like an S3 bucket.

It's definitely not a great practice, but still it's done.

isbvhodnvemrwvn · 4 years ago
If you're running on AWS, why would you even have long-lived credentials in your images?
whoknew1122 · 4 years ago
There are a few reasons, none of them good.

Likely the answer is gross incompetence.

If I were to give them the benefit of the doubt and provide the most defensible reason to have an image that contains AWS credentials, you could theoretically use long-term (i.e. user) AWS credentials on an on-premises VM and then export the server image to AWS. When you rehost the server in EC2, you would switch to an instance role per best practices. And then you forget to delete the image stored in S3.

Still doesn't explain why the S3 bucket is publicly available. But that's one reason a server image with long-term credentials could end up stored in an S3 bucket.

Unlikely that the image was an EBS snapshot or AMI. While those are technically housed in S3, you can't access them from the S3 console. And they didn't brag about accessing the EC2 console.

batch12 · 4 years ago
So the breach referenced was a breach by the researchers, not a malicious third party (that we know of)? I would have called it exposure or a vulnerability since breach has a specific meaning that I am not sure this fits. Maybe I am being pedantic.
thr0wawayf00 · 4 years ago
"Breach" is a legal term, and although IANAL, it seems semantically correct here. When anyone outside of your organization gains access to sensitive information in your systems, regardless of their intent, that is a breach and these guys accomplished that. PCI and all of those other security protocols and programs don't draw the line at white-hat access vs black-hat access.
batch12 · 4 years ago
I agree mostly. I don't think an unsanctioned assessment that goes this deep is pure white hat. It seems firmly gray to me.

> PCI and all of those other security protocols and programs don't draw the line at white-hat access vs black-hat access.

PCI mandates penetration tests. A white hat finding as a pentest isn't reportable as a breach. This one may be unless some gymnastics are used to call it an authorized test.

ipaddr · 4 years ago
Another growth marketing hack succeeds. Doubly so if there is a lawsuit.