eatbots · a year ago
Reported this exact bug to Zendesk, Apple, and Slack in June 2024, both through HackerOne and by escalating directly to engs or PMs at each company.

I doubt we were the first. That is presumably the reason they failed to pay out.

The real issue is that non-directory SSO options like Sign in with Apple (SIWA) have been incorrectly implemented almost everywhere, including by Slack and other large companies we alerted in June.

Non-directory SSO should not have equal trust vs. directory SSO. If you have a Google account and use Google SSO, Google can attest that you control that account. Same with Okta and Okta SSO.

SIWA, GitHub Auth, etc. are not doing this. They rely on a weaker proof, usually just control of an email address at a single point in time.

SSO providers are not fungible, even if the email address is the same. You need to take this into account when designing your trust model. Most services do not.

quacksilver · a year ago
I do web app testing and report a similar issue as a risk rather often to my clients. You can replace Google below with many other identity providers.

Imagine Bob works at Example Inc. and has email address bob@example.com

Bob can get a Google account with primary email address bob@example.com. He can legitimately pass verification.

Bob then gets fired for fraud or sexual harassment or something else gross misconduct-y and leaves his employer on bad terms.

Bob still has access to the Google account bob@example.com. It didn't get revoked when they fired him and locked his accounts on company systems. He can use the account indefinitely to get Google to attest for his identity.

Example Inc. subscribes to several SaaS apps that offer Google as an identity provider for SSO. The SaaS app validates that he can get a trusted provider to authenticate that he has an @example.com email address and adds him to the list of permitted users. Bob can use these SaaS apps years later and pull data from them despite having left the company on bad terms. This is bad.

I think the only way for Example Inc. to stop this in the case of Google would be to create a workspace account and use the option to prove domain ownership and force accounts that are unmanaged to either become managed or change their address by a certain date. https://support.google.com/a/answer/6178640?hl=en

Other providers may not even offer something like this, and it relies on Example Inc. seeking out the identity providers, which seems unreasonable. How do you stop your corporate users signing up for the hot new InstaTwitch gaming app or Grinderble dating service that you have never heard of and using that to authenticate to your sales CRM full of customer data?

laz · a year ago
You don't need full blown workspace, which costs money, you can set up "cloud identity free" and claim the domain.

When you're setting it up, you can choose what to do with any existing accounts that are part of your domain: kick them out or merge them in.

rkharsan64 · a year ago
Every time I've left an organization, they have swiftly deleted the company email address/revoked my access to it. I assume every reasonable organization will have processes in place to do this.

I don't see this as a vulnerability: how is Google supposed to know that a person has left the company? You let them know by deleting the account.

wutwutwat · a year ago
This is why you store and match on the sso provider’s uuid, not on the email address. Emails are not a unique identifier and never have been. Someone can delete their email account and later someone else can sign up and claim the same email. Anyone matching on email addresses is doing it wrong. I’ve tried to argue this to management at companies I’ve worked at in the past, but most see my concern as paranoid.
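A minimal sketch of what keying on the provider's stable subject ID rather than the email could look like. Claim names (`iss`, `sub`, `email`) follow standard OIDC; the storage and function names are illustrative, not any particular framework's API:

```python
# Sketch: key user lookup on the provider's stable subject ID, not the email.
# Assumes `claims` is an already-validated OIDC ID token payload.

users = {}  # (issuer, sub) -> user record; stand-in for a database table

def find_or_create_user(claims):
    # `sub` is stable per (issuer, account); an email can be changed,
    # released, and later re-registered by someone else.
    key = (claims["iss"], claims["sub"])
    if key not in users:
        users[key] = {"email": claims.get("email"), "issuer": claims["iss"]}
    return users[key]

# Two tokens with the same email but different subs are different accounts:
a = find_or_create_user({"iss": "https://accounts.google.com", "sub": "111",
                         "email": "bob@example.com"})
b = find_or_create_user({"iss": "https://accounts.google.com", "sub": "222",
                         "email": "bob@example.com"})
assert a is not b  # same email, two distinct identities
```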
hunter2_ · a year ago
I wonder why Google would make an SSO assertion along the lines of "yes, this user Bob has email address bob@example.com" in the situation where example.com is not under a Workspace account. Such assertions ought to be made only for Workspace (and Google's own domains such as gmail.com, googlemail.com, etc.) since outside of that it's obsolete proof as you say, i.e. it's merely a username of a Google account which happens to look like an email address, and nothing more.
oliwary · a year ago
Perhaps the following could be a solution to this issue?

Any OAuth provider should send a boolean flag, say "attest_identity_ownership", as part of the OAuth flow: true if the account is a workspace account or Gmail (or the equivalent for other services), false if the email is an outside address. The service handling the login could then decide whether to trust the login or proceed otherwise, e.g. by telling the user to use a different OAuth service or an internal mechanism where the identity is attested.

kccqzy · a year ago
If anyone needs a motivating example for such unmanaged users, I actively use this feature. I have my own Google Workspace on my own domain. Years ago, when I bought a Nest product, I found that I couldn't use a Google Workspace account to access Nest. No problem: I created a consumer Google account under my Google Workspace domain. The email looks just like a Workspace account, and it doesn't need any additional Workspace licenses. (I no longer plan to buy any more Nest devices, so I'll delete the account once my last Nest product stops working.)
austinkhale · a year ago
Presumably one of the PMs you’re referring to has posted this article for additional information. Feels like they’re doubling down on their initial position.

https://support.zendesk.com/hc/en-us/articles/8187090244506-...

8n4vidtmkvmk · a year ago
> Although the researcher did initially submit the vulnerability through our established process, they violated key ethical principles by directly contacting third parties about their report prior to remediation. This was in violation of bug bounty terms of service, which are industry standard and intended to protect the white hat community while also supporting responsible disclosure. This breach of trust resulted in the forfeiture of their reward, as we maintain strict standards for responsible disclosure.

Wow... there was no indication that they even intended to fix the issue, so what was Daniel (hackermondev) supposed to do? Disclosing this to the affected users was probably the most ethical thing to do. I don't think he posted the vulnerability publicly until after the fix. "Forfeiture of their reward" -- they said multiple times that it didn't qualify; they had no intention of ever giving a reward.

thekevan · a year ago
So when the researcher said it was a bug, they said, "No, it's fine. No bug bounty, sorry."

THEN the researcher eventually goes public.

Later, Zendesk announces the bug and the fix and says there will be no bug bounty because the researcher went public.

Is that how it went? I mean if so, that's one way to save on bug bounties.

teddyh · a year ago
That article claims to have “0 comments”, but currently sits at a score of -7 (negative 7) votes of helpful/not helpful. I think they have turned off comments on that article, but aren’t willing to admit it.

EDIT: It’s -11 (negative 11) now. Still “0 comments”.

Shank · a year ago
In damage control mode, Zendesk can't pay a bounty out here? Come on. This is amateur hour. The reputational damage that comes from "the company that goes on the offensive and doesn't pay out legitimate bounties" impacts the overall results you get from a bug bounty program. "Pissing off the hackers" is not a way to keep people reporting credible bugs to your service.

I don't understand what this tries to accomplish. The problem is bad, botching the triage is bad, and the bounty is relatively cheap. I understand that this feels bad from an egg-on-face perspective, but I would much rather be told by a penetration tester about a bug in a third-party service provider than not be told at all just to respect a program's bug bounty policy.

JonChesterfield · a year ago
We, the company that doesn't understand security, can't tell whether this was exploited, therefore we confidently assert that everything is fine. It's self consistent I suppose but I wouldn't personally choose to scream "we are incompetent and do not care" into the internet.
dclowd9901 · a year ago
As a former ZD engineer, shame on you Mr Cusick (yes, I know you personally) and shame on my fellow colleagues for not handling this in a more proactive and reasonable way.

Another example of impotent PMs, private equity firms meddling and modern software engineering taking a back seat to business interests. Truly pathetic. Truly truly pathetic.

harrisonjackson · a year ago
Ah, that makes a lot of sense. This is a foot gun that you can run into even with an auth provider like Auth0 or Clerk let alone rolling your own.

Directory SSO: These are systems like Google Workspace or Okta, which maintain a central directory of users and their access rights.

Non-directory SSO: These are services like "Sign in with Apple" (SIWA) or GitHub authentication, which don't maintain such a directory.

cxcorp · a year ago
This is very important to keep in mind when implementing OAuth authentication! Not every SSO provider is the same. Even if the SSO provider tells you that the user's email is X, they might not have confirmed that email address! Don't trust it; confirm the email yourself!
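A hedged sketch of that check, assuming standard OIDC claims. `email_verified` is optional in the OIDC spec, so its absence should be treated the same as false:

```python
# Sketch: only accept a provider-supplied email if the token marks it
# verified; otherwise run your own confirmation flow.

def usable_email(claims):
    email = claims.get("email")
    # Absent or false `email_verified` means the provider never confirmed it.
    if email and claims.get("email_verified") is True:
        return email
    return None  # fall back to sending your own verification email

assert usable_email({"email": "a@b.com", "email_verified": True}) == "a@b.com"
assert usable_email({"email": "a@b.com"}) is None  # never confirmed
```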
hmottestad · a year ago
And remember to add a random unique id to the reply-to email, otherwise you’ve fallen into the same trap.
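A minimal sketch of what an unguessable reply-to could look like. The address format, HMAC tagging, and all names here are illustrative, not Zendesk's actual scheme:

```python
# Sketch: make per-ticket reply-to addresses unguessable by embedding an
# HMAC tag derived from the ticket id and a server-side secret.
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # in practice, a persistent server-side key

def reply_to_for(ticket_id, domain="support.example.com"):
    tag = hmac.new(SECRET, str(ticket_id).encode(), hashlib.sha256).hexdigest()[:16]
    return f"support+{ticket_id}-{tag}@{domain}"

def is_valid(address):
    local = address.split("@", 1)[0]   # e.g. "support+12345-<tag>"
    _, _, rest = local.partition("+")
    tid, _, tag = rest.partition("-")
    expect = hmac.new(SECRET, tid.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expect)

addr = reply_to_for(12345)
assert is_valid(addr)
# Swapping in a different ticket id invalidates the address:
assert not is_valid(addr.replace("12345-", "12346-", 1))
```

An attacker who can guess `support+12345@…` can no longer forge a valid reply-to without the secret.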
bushido · a year ago
Out of curiosity, do you know of open source projects or any resources that someone less familiar with SSO can use/read to properly implement SSO?
bitexploder · a year ago
Use OIDC. It is built on OAuth. I would fiddle with implementing basic OAuth clients first, like a Spotify playlist fetcher or something, just to start getting a feel for the flows and the things you'd be concerned with.
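As a starting point, the first step of such a toy OAuth 2.0 authorization-code client (building the redirect URL with a `state` parameter) might look like this. The client id and redirect URI are placeholders, not a working app registration:

```python
# Toy first step of an OAuth 2.0 authorization-code flow: building the
# authorization redirect URL.
import secrets
from urllib.parse import urlencode

def build_authorize_url(auth_endpoint, client_id, redirect_uri, scope):
    state = secrets.token_urlsafe(16)  # anti-CSRF; re-check it on the callback
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return f"{auth_endpoint}?{urlencode(params)}", state

url, state = build_authorize_url(
    "https://accounts.spotify.com/authorize",  # placeholder app values below
    "my-client-id", "https://localhost/callback", "playlist-read-private")
assert "response_type=code" in url and state in url
```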
to11mtm · a year ago
Not the best suggestion but haven't seen others give any yet...

IdentityServer4 [0] is no longer maintained [1] but had SSO support and the source is still on github.

[0] - https://identityserver4.readthedocs.io/en/latest/

[1] - They had to go commercial to stay afloat; there weren't enough contributions from the community, etc. That said, it's pretty cheap for what it does in the .NET space.

siddthesquid · a year ago
Something like Keycloak?
LilBytes · a year ago
Keycloak (Java) and Zitadel (Go) are my recommendations.
marcellus23 · a year ago
Can you explain a bit more what makes Sign in with Apple different from Google Sign-in? Apple certainly does maintain a list of users with accounts. So what does "non-directory" mean here exactly? Why can Apple not attest that you control that account at sign-in time?
crusty · a year ago
Now? Nothing. I think this thinking is a relic of Google's status as seemingly the last remaining email provider to automatically create a Gmail account when signing up for a Google account. So using Google SSO meant using your Gmail account, and control of the email address was necessary for control of the Google account. If you lose the email account, you lose the Google account. This is no longer true, since you can sign up for a Google account with any email.

Whereas you can (and I believe always could*) create an apple ID with any old email address.

*Maybe this delinked situation only came about when they added the App Store to OS X and figured they'd make less money if they required existing Mac users to get a new email account in order to buy programs in the manner which would grant them a cut.

Apple has a list of all the email addresses for its Apple IDs, but it doesn't control them, and having one deleted doesn't necessarily affect the other.

Google and custom domain email have always been delinked from this perspective. You could create a Google account with a custom domain and then point the domain elsewhere or lose control of it, and you'd still retain control of the account.

Basically, the required example is essentially theoretical at this point - maybe it works for employees at companies that also happen to provide SSO services. So if you work at Facebook, Google, Apple, or GitHub and have a me@FGAG.etc.com email address, and you signed into Slack through the SSO affiliated with your company and the company email, but later don't work there and have had your work account access revoked, you won't be able to use that SSO to sign into Slack. That's what they mean by directory control or whatever.

In contrast, if you sign up to GitHub with your work email account, then unless it's an account managed by your work, your work doesn't actually control the account. They just vouched for your affiliation at sign-up when you verified your email. So if you use GitHub SSO to sign up for a service that 'verifies' your work email address from GitHub during the process, that won't change when you leave and the company revokes access to the email. GitHub SSO, in this case, isn't verifying you have an email account @company.com. It's verifying you once had access to one. This is what they mean by the non-directory whatever.

lathiat · a year ago
I think what he means is, if you have an @gmail.com account via Google, that is pretty good proof of control. But if you have any other e-mail (e.g. a custom domain) via Google, it's not.

Similar with Apple, if you were signing in with an @icloud.com, it's pretty good proof, but if you have an Apple ID with a third-party e-mail it's not proof of current control of that e-mail.

That's my guess.
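One hedged way to encode the trust model described above: accept a provider's email claim as proof of current control only for domains the provider itself hosts. The issuer-to-domain mapping below is illustrative and deliberately incomplete:

```python
# Sketch: a provider's email claim proves current control only for
# domains that provider hosts itself.

AUTHORITATIVE = {
    "https://accounts.google.com": {"gmail.com", "googlemail.com"},
    "https://appleid.apple.com": {"icloud.com", "me.com", "mac.com"},
}

def provider_controls_email(issuer, email):
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in AUTHORITATIVE.get(issuer, set())

assert provider_controls_email("https://accounts.google.com", "bob@gmail.com")
# Custom-domain email via the same provider proves much less:
assert not provider_controls_email("https://accounts.google.com", "bob@example.com")
```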

ec109685 · a year ago
Not only is Apple non-directory, they are non-discretionary as well, so foisted on services and handled poorly as a result.
ryukoposting · a year ago
Isn't the simplest solution here to not support SSO at all?

I get there's a convenience factor, but even more convenient is the password manager built into every modern browser and smartphone. If the client decides to use bad passwords, that will hurt them whether or not they're using SSO.

pas · a year ago
SSO is fine, but verify the email address that the SSO provider has given (unless the provider is authoritative for the email domain)
bsbsjsusj · a year ago
Django allauth seems to know this, with social-provider-specific email settings.
mjomaa · a year ago
You mean automatic account linking with unverified emails?
bsuvc · a year ago
It sounds like the author got stiffed by Zendesk on this bug, $0 due to email spoofing being out of scope.

The $50k was from other bug bounties he was awarded on hackerone.

It's too bad Zendesk basically said "thanks" but then refused to pay anything. That's a good way to get people not to bother with your bug bounty program. It is often better to build goodwill than to be a stickler for rules and technicalities.

Side note: I'm not too surprised, as I had one of the worst experiences ever interviewing with Zendesk a few years back. I have never come away from an interview hating a company, except for Zendesk.

bigiain · a year ago
> That's a good way to get people not to bother with your big bounty program.

And possibly to get blackhats to start looking more closely, since they now know both 1) that whitehats are likely to be focusing elsewhere, leaving more un-reviewed attack surface available, and 2) that Zendesk appears to be the sort of company that'll ignore and/or hide known vulnerabilities, giving exploits a much longer effective working time.

If "the bad guys" discovered this (or if it had been discovered by a less ethically developed 15 year old who'd boasted about it in some Discord or hacker channel) I wonder just how many companies would have had interlopers in their Slack channels harvesting social engineering intelligence or even passwords/secrets/API keys freely shared in Slack channels? And I wonder how many other widely (or even narrowly) used 3rd party SaaS platforms can be exploited via Zendesk in exactly the same way. Pretty much any service that uses the email domain to "prove" someone works for a particular company and then grants them some level of access based on that would be vulnerable to having ZenDesk leak email confirmations to anybody who knows this bug.

Hell, I suspect it'd work to harvest password reset tokens too. That could give you account takeover for anything not using 2FA (which is, to a first approximation over the whole internet, everything).

exceptione · a year ago
If I am not mistaken, it wasn't zendesk that didn't want to recognize the bug, but HackerOne that did not escalate to Zendesk that they should reconsider the exclusion ground in this case.

As an aside, I wonder if those bounties in general reflect the real value of those bugs. The economic damage could be way higher, given that people share logins in support tickets. I would have expected the price on the black market for this kind of bug to be several figures larger.

chabons · a year ago
The author specifically stated: "Realizing this, I asked for the report to be forwarded to an actual Zendesk staff member for review", before getting another reply from H1. I read this as: they escalated it to Zendesk directly, who directed it back to HackerOne.
richbell · a year ago
> If I am not mistaken, it wasn't zendesk that didn't want to recognize the bug, but HackerOne that did not escalate to Zendesk that they should reconsider the exclusion ground in this case.

Correct, the replies seem to have come from H1 triage and H1 mediation staff.

They often miss the mark like this. I opened a H1 account to report that I'd found privileged access tokens for a company's GitHub org. H1 triage refused to notify the company because they didn't think it was a security issue and ignored my messages.

bigiain · a year ago
> If I am not mistaken, it wasn't zendesk that didn't want to recognize the bug

While it's unclear at which stage Zendesk became involved, in the "aftermath" section it's clear they knew of the H1 report, since they responded there. And later on the post says:

"Despite fixing the issue, Zendesk ultimately chose not to award a bounty for my report. Their reasoning? I had broken HackerOne's disclosure guidelines by sharing the vulnerability with affected companies."

The best case scenario as I see it is that Zendesk has a problem they need to fix with their H1 triage process and/or their in- and out-of-scope rules there. And _none_ of that is the researcher's problem.

The worst (and in my opinion most likely) scenario, is that Zendesk did get notified when the researcher asked H1 to escalate their badly triaged denial to Zendesk for review, and Zendesk chose to deny any bounty and tried to hide their vulnerability.

> As an aside, I wonder if those bounties in general reflect the real value of those bugs. The economic damage could be way higher, given that people share logins in support tickets.

I think it's way worse than that, since internal teams often share logins/secrets/API keys (and details of architecture and networking that a smart blackhat would _love_ to have access to) in their supposedly "internal" Slack channels. I think the fact that non-Zendesk "affected companies" paid out $50k sets that as the absolute lower bound of "the real value of those bugs". And it's _obvious_ that the researcher didn't contact _every_ vulnerable Slack-using organisation. I wonder how much more he could have made by disclosing this to 10 or 100 times as many Slack-using organisations, and delaying/stalling revealing his exploit POC to Zendesk while that money kept rolling in?

I'll be interested to see if HackerOne reacts to this, to avoid the next researcher going for this "second level" of bug bounty payouts by not bothering with H1 or the vulnerable company, and instead disclosing to the companies affected by the vulnerability rather than the company with the vulnerability. It's fairly well known that H1 bug bounties are relatively small compared to the effort required to craft a tricky POC. But people disclose there anyway, presumably partly out of ethical concerns and partly for the reputation boost. Now, though, we know you can probably get an order of magnitude more money by approaching affected third-party companies instead of cheapskate or outright abusive companies with H1 bounties that they choose to downvalue and not pay out on.

mkagenius · a year ago
HackerOne staff are not that good. They usually mark anything from a non-famous person as a duplicate (even if it differs in nuances that eventually lead to much more impact) or as straight-up out of scope.

I think it's just laziness. Plus, they hire previously famous reporters as the people triaging the reports; those famous people know other famous people first-hand, so for everyone else they usually think "hmm, unknown guy, must have run a script and submitted this".

I stopped reporting stuff five years ago due to the frustration. And it seems the situation is still the same after all these years.

layer8 · a year ago
> due to email spoofing being out of scope.

I believe their logic was that only the domain owner can adequately prevent email spoofing by proper SPF/DMARC configuration, and that it’s the customers’ fault if they don’t do that. Which isn’t entirely wrong.

UncleMeat · a year ago
In a past life I was involved in a bug bounty program. I don't think the reasoning is as detailed.

When you stand up a bug bounty program you get a ton of "I opened developer tools, edited the js on your page, and now the page does something bad" submissions. "I can spoof some email headers and send an email to myself that looks like it is coming from you" isn't something I've specifically seen due to some weird details about my bounty program but it is something I would absolutely expect for many programs to see.

So you need a mechanism to reject this stuff. But if that mechanism is just "triage says this is dumb" you get problems. People scream at you for having their nonsense bug rejected. People submit dozens of very slightly altered "bugs" to try to say "you rejected the last one for reason X but this one does Y." So you create a general policy: anything involving email spoofing is out of scope.

So then a real bug ends up in front of the triage person. They are tired and busy and look at the report and see "oh this relies on email spoofing, close as out of scope." Sucks.

I think that Zendesk's follow up here is crap. They shouldn't be criticizing the author for writing about this bug. But I do very much understand how things end up with a $0 payout for the initial report.

nightpool · a year ago
Right, but I would be really shocked if Zendesk's internal email handler was doing any SPF/DKIM/DMARC validation at all. So even if a domain has DMARC set up, Zendesk is probably ignoring it. Which is probably pretty reasonable given how rare DMARC reject/quarantine has been historically
johnmaguire · a year ago
Are Google and Apple not doing proper SPF/DMARC/DKIM? I think they probably are - but this attack worked anyway.

Zendesk wasn't validating the email senders.

renewiltord · a year ago
> $0 due to email spoofing being out of scope.

Strictly, $0 because he disclosed to customers. But he only disclosed to customers since Zendesk said it was out of scope.

jeroenhd · a year ago
HackerOne declared the issue out of scope so I don't see why disclosure would make a difference here. Had this person not notified different companies, they still wouldn't get a dime from HackerOne.

Bad showings all around, for both HackerOne and Zendesk.

pm90 · a year ago
I too had the worst interview experience with zendesk. The people I talked to were pretty senior folks too. They just seem to have a very petty and toxic work culture.
paulpauper · a year ago
That is why a black market exists for this stuff.
c0balt · a year ago
The black market also exists because the potential payout for serious 0days by official programs is almost always less than what a third-party adversary will pay (if the target(s) for them are worth it).
gouggoug · a year ago
> Side note: I'm not too surprised, as I had one of the worst experiences ever interviewing with Zendesk a few years back. I have never come away from an interview hating a company, except for Zendesk.

Same thing happened to me years ago. Interviewed with them and it was the worst “screening” experience I ever had. After getting a rejection email, I thanked them for their time and said I had feedback about the interview should they want to hear it. They said yes, please.

Sent my feedback, never heard from them again.

swoorup · a year ago
Same, it was a time-wasting interview experience. They seemed interested and not interested at the same time. They pinged me for a different role after passing me up for the first one, but I didn't get any response later.
Avamander · a year ago
This is a common problem with HackerOne and the likes. It's absolutely awful for anything even a tiny bit more unique or rare.
portaouflop · a year ago
Blame beg bounty hunters for this
barbs · a year ago
Mind giving details about the interview? Must've been pretty bad!
junto · a year ago
I help corporates evaluate and buy software. Having an ineffective bug bounty program, especially one that rewards black market activity on a terms & conditions technicality like this, is enough for me to put a black mark on your software services.

I don’t care if you’re the only company in the market, I’ll still blackball you for this in my recommendations.

Zendesk should pay up, apologize and correct their bug bounty program. After doing so, they should kindly ask the finder to add an update to this post, because otherwise it will follow them around like dogshit under their shoe.

exceptione · a year ago
Yes, I think bounties in this class and with this impact should at least be six figures.

If a company loses 120 million a year to security bounties, they will take into account the cost of scrumming/rapid widget delivery.

moritonal · a year ago
Would love to see the parts of the market where you've marked off every current option, given each would represent new business opportunities.
xyst · a year ago
Probably any SK company. Bounties are awful and only paid out to SK citizens. Everyone else gets a pat on the back for being a sucker.
yieldcrv · a year ago
HackerOne’s mediator dropped the ball here

They should absolutely inform a client company of a perceived threat, when they agree on the threat

Most of the person’s post and responses here are about Zendesk’s issue, but Zendesk was never informed

for a better PR response, I think Zendesk could now reward this after realizing it wouldn't have been disclosed first, and admonish HackerOne for not informing them and for the current policies there

daghamm · a year ago
This is pretty common on H1, probably due to the amount of crap they receive.

If you are a new user, expect your first couple of reports to be butchered. It seems to me only reports from well-known hackers get carefully analysed.

richbell · a year ago
> Most of the person’s post and responses here are about Zendesk’s issue, but Zendesk was never informed

It's not clear whether they were informed. The mediator's email says "after consultations with *the team*", which is likely referring to Zendesk's security team.

M4v3R · a year ago
Zendesk was informed. OP specifically said they asked H1 to escalate to the company itself, and the second email they present was from someone at Zendesk, who still rejected them, adding that this decision was made "after consulting with the team".
maeil · a year ago
A $1.3 billion revenue company being too tight to pay this after all, even on their 2nd chance, is so short-sighted it's absurd. They're putting out a huge sign saying "When you find a vuln, definitely contact all our clients because we won't be giving you a penny!".

Incredible. This must be some kind of "damaged ego" or ass-covering, as it's clearly not a rational decision.

Edit: Another user here has pointed out the reasoning

> It's owned by private equity. Slowly cutting costs and bleeding the brand dry

mmsc · a year ago
It all makes sense if you consider bug bounties are largely:

1) created for the purpose of either PR/marketing, or a checklist ("auditing"), 2) seen as a cheaper alternative to someone who knows anything about security - "why hire someone that actually knows anything about security when we can just pay a pittance to strangers and believe every word they say?"

The amusing and ironic thing about the second point is that by doing so, you waste time with the constant spam of people begging for bounties and reporting things that are not even bugs let alone security issues, and your attention is therefore taken away from real security benefits which could be realized elsewhere by talented staff members.

ec109685 · a year ago
I don’t agree. Bug bounties are taken seriously by at least some companies. Where I have worked, we received very useful reports, some very severe, via HackerOne.

The company even ran special sessions where engineers and hackers were brought together to try to maximize the number of bugs found in a few week period.

It resulted in more secure software at the end and a community of excited researchers trying to make some money and fame on our behalf.

The root cause in this case seems to be that they couldn’t get by HackerOne’s triage process because Zendesk excluded email from being in scope. This seems more like incompetence than malice on both of their parts. Good that the researcher showed how foolish they were.

j0hnyl · a year ago
It's incredibly hard and resource-intensive to run a bounty program, so anyone doing it as a shortcut or for virtue signaling will quickly discover whether they're mature enough to run one.
NicoJuicy · a year ago
Our company has a bug bounty program:

- handled with priority, but sometimes it takes a couple of weeks for a more definite fix

- handled by the security department within the company ( to forward to relevant PO's and to follow up)

The unfortunate thing about bug bounties is that you will be hammered with crawlers that sometimes even resemble a DDoS

wil421 · a year ago
“2) seen as a cheaper alternative to someone who knows anything about security - "why hire someone that actually knows anything about security when we can just pay a pittance to strangers and believe every word they say?"”

It doesn’t make sense, companies with less revenue aren’t the ones doing this. It’s usually the richer tech companies.

richbell · a year ago
> you waste time with the constant spam of people begging for bounties

A great blog post on the matter https://www.troyhunt.com/beg-bounties/

kozikow · a year ago
> A $1.3 billion revenue company being too tight to pay this after all, even on their 2nd chance, is so short-sighted it's absurd.

I'll give an "other side" perspective. My company was much smaller. Out of the 10+ "I found a vulnerability" emails I got last year, all were mass-produced emails generated from an automated vulnerability scanning tool.

Investigating all of those for "is it really an issue" is more work than it seems. For many companies looking to improve security, there are higher ROI things to do than investigating all of those emails.

to11mtm · a year ago
https://www.sqlite.org/cves.html provides an interesting perspective. While they thankfully already have a pretty low surface area from overall design/purpose/etc, You can see a decent number of vulns reported that are either 'not their fault' (i.e. wrappers/consumers) or are close enough to the other side of the airtight hatchway (oh, you had access to the database file to modify it in a malicious way, and modified it in a malicious way)[0]

[0] - https://sqlite.org/forum/forumpost/53de8864ba114bf6

whstl · a year ago
We also had this problem in my previous company a few years ago, a 20-people company, but somehow we attracted much more attention.

In one specific instance, we had 20 emails in a single month about a specific WordPress PHP endpoint that had a vulnerability, on a separate marketing site on another domain. The thing is, it had already been replaced with a static page by our WordPress contractor as part of the default install, but it was still returning 200.

But being a static page didn't stop the people running scanners from asking us for money, even after they were informed of the above.

The solution? Delete it altogether to return 404.

DaiPlusPlus · a year ago
We have a policy to never acknowledge unsolicited emails like that unless they follow the simple instructions set out in our /.well-known/security.txt file (see https://en.wikipedia.org/wiki/Security.txt) - honestly, all they have to do is put “I put a banana in my fridge” as the message subject (or use PGP/GPG/SMIME) and it’ll be instantly prioritised.

The logic being that any actual security-researcher with even minimal levels of competency will know to check the security.txt file and can follow basic instructions; while if any of our actual (paying) users find a security issue then they’ll go through our internal ticket site and not public e-mail anyway - so all that’s left are low-effort vuln-scanner reports - and it’s always the same non-issues like clickjacking (but only when using IE9 for some reason, even though it’s 2024 now…) or people who think their browser’s Web Inspector is a “hacking tool” that allows anyone to edit any data in our system…

And FWIW, I’ve never received a genuine security issue report with an admission of kitchen refrigeration of fruit in the 18 months we’ve had a security.txt file - it’s almost as if qualified, competent professionals don’t operate like an embarrassingly pathetic shakedown.
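For reference, a minimal /.well-known/security.txt in the RFC 9116 format looks something like this (the values below are placeholders, not this commenter's actual file):

```
Contact: mailto:security@example.com
Expires: 2025-12-31T23:59:59Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
# Include the phrase "I put a banana in my fridge" in your subject line.
```

Only `Contact` and `Expires` are mandatory fields; the comment line is where out-of-band instructions like the passphrase above would go.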

Deleted Comment

maeil · a year ago
I understand; this is exactly why I noted "even on their 2nd chance". The initial lack of payout/meaningful response was incompetence, from not understanding the severity of the vuln. Fine, happens.

But after the PoC that showed the severity in a way that anyone could understand, they still didn't pay. That's the issue. The whole investigation was done for them.

paulpauper · a year ago
If the bounty is big enough, you basically need to retain a lawyer so the whole thing is done right and to avoid being scammed.
gavingmiller · a year ago
Zendesk is 6k employees; they have general counsel on staff
jejeyyy77 · a year ago
It never made sense to me why these white-hat hackers don't require payment before disclosing the vulnerability
tptacek · a year ago
Bug bounty people do this all the time. It's almost always a sign that your bug is something silly, like DKIM.

Later

I wrote this comment before rereading the original post and realizing that they had literally submitted a DKIM report (albeit a rare instance of a meaningful one). Just to be clear: in my original comment, I did not mean to suggest this bug was silly; only that in the world of security bug bounties, DKIM reports are universally viewed as silly.

aftbit · a year ago
Wait... it looks like Zendesk only fixed the issue of Apple account verification emails being added to tickets, not actually the underlying issue?

>In addition to this, we also implemented filters to automatically suspend the following classes of emails:
>
>- User verification emails sent by Apple, based on the Reply-To and Message-Id header values
>
>- Non-transactional emails from googleworkspace-noreply@google.com
>
>Over the coming months, we will continue to look into opportunities to strengthen our Sender Authentication functionality and provide customers with more gradual and advanced security controls over the types of emails that get suspended or rejected.

So is it still possible to hijack anyone's support tickets using the default configuration of Zendesk if you just happen to know their email and ticket ID?
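The header-based suspension described in the quoted changelog can be sketched as a simple predicate over inbound mail headers. Note the matched patterns below are assumptions for illustration, not Zendesk's actual rules:

```python
import re

# Hypothetical filter mirroring the mitigation Zendesk describes:
# suspend Apple user-verification emails (matched on Reply-To /
# Message-Id) and non-transactional Google Workspace notices.
APPLE_REPLY_TO = re.compile(r"@insideicloud\.icloud\.com$")  # assumed pattern
APPLE_MESSAGE_ID = re.compile(r"@email\.apple\.com>?$")      # assumed pattern

def should_suspend(headers: dict) -> bool:
    reply_to = headers.get("Reply-To", "")
    message_id = headers.get("Message-Id", "")
    sender = headers.get("From", "")
    if APPLE_REPLY_TO.search(reply_to) or APPLE_MESSAGE_ID.search(message_id):
        return True
    if sender.endswith("googleworkspace-noreply@google.com"):
        return True
    return False
```

As the surrounding comments point out, this is a denylist for two known senders, not a fix for the underlying guessable-address problem.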

vitus · a year ago
Yeah. Zendesk only put a bandaid in place to prevent this particular attack vector for the Slack infiltration attack, and did nothing for the initially reported issue.

Deleted Comment

layer8 · a year ago
Only the customer domain owners can fix the underlying issue, which is a missing SPF/DMARC configuration.
Aachen · a year ago
They could make the ticket IDs unpredictable so you can't subscribe yourself to any existing ticket by sending it an email
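A sketch of what unpredictable reply addresses could look like: derive the per-ticket address from a random token rather than the sequential ticket ID (the address format here is made up for illustration):

```python
import secrets

def ticket_reply_address(subdomain: str) -> str:
    # 128 bits of randomness make the reply address infeasible to
    # guess, unlike a sequential numeric ticket ID. The service stores
    # the token -> ticket mapping server-side.
    token = secrets.token_urlsafe(16)
    return f"support+{token}@{subdomain}.zendesk.example"
```

Knowing one ticket's address (or a victim's email and a ticket number) then tells an attacker nothing about any other ticket's address.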
aftbit · a year ago
Zendesk could refuse to allow "ticket collaboration" if customers had a missing or insufficiently secure SPF/DMARC configuration, or at least make customers check a box that says "Tickets may leak their contents to anyone who can send emails".
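A minimal sketch of that gating idea: parse the customer domain's published DMARC record and only enable collaboration when the policy is enforcing. (The record string would come from a DNS TXT lookup of `_dmarc.<domain>`, omitted here; the function names are hypothetical.)

```python
def dmarc_policy(record: str) -> str:
    # Extract the p= tag from a DMARC TXT record, e.g.
    # "v=DMARC1; p=reject; rua=mailto:..." yields "reject".
    for tag in record.split(";"):
        name, _, value = tag.strip().partition("=")
        if name == "p":
            return value.lower()
    return "none"

def collaboration_allowed(record: str) -> bool:
    # Only allow email-based ticket collaboration when the domain
    # enforces DMARC (quarantine or reject); a p=none policy means
    # spoofed mail is typically delivered anyway.
    return dmarc_policy(record) in {"quarantine", "reject"}
```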
cjbprime · a year ago
That doesn't sound right. Aren't these @zendesk.com addresses?
gavingmiller · a year ago
The piece the author is missing, and why Zendesk likely ignored this, is impact - and it's something I continually see submissions lacking. As a researcher, if you can't demonstrate the impact of your vulnerability, then it looks like just another bug. A public program like Zendesk's is going to be swamped with reports, and they're using HackerOne triagers to handle that volume. The triage system reads through a lot of reports - without clear impact, lots of vulnerabilities look like "just another bug". Notice that Zendesk took notice once mondev was able to escalate to an ATO (account takeover)[1]. That's impact, and that gets noticed!

[1] https://gist.github.com/hackermondev/68ec8ed145fcee49d2f5e2b...

patcon · a year ago
Yes. But respectfully (residual frustration at Zendesk might make me curt here): if their security triage team can't see how dangerous it is for an attacker to get access to an arbitrary thread in their CLIENTS' corporate email chains (in this world of email logins and SSO), then they have a big lapse in security culture, no?

Yes, the researcher could have tee'd himself up better, but this says way more about zendesk than it does about the 15-year-old researcher.

XCabbage · a year ago
Unauthorized read access to private emails you were never legitimately CCed on already is impact. It should not be necessary to come up with a further exploit daisy chained on top of that in order to be taken seriously. (Otherwise why stop at Slack access? Why is that automatically "impact" if email access isn't?)
lysp · a year ago
Exactly.

It's possible that some ticket chains could contain credentials or other sensitive information.

ec109685 · a year ago
The researcher showed how they could hop onto any Zendesk support ticket thread with zero authentication, so that should have been enough given Zendesk was exposing customer data via that attack path.

Clearly Zendesk needs to change things so that the email address that is created for a ticket isn’t guessable.

Aachen · a year ago
Exploit or no, the bug and potential impact are the same. I personally find it a waste of time to sink evenings into an exploit when they're going to fix the bug anyway if I simply tell them about the problem. They also know the system better than I do and can probably find a bigger impact anyway

Of course, this is only a good strategy if you just want to do a good deed and aren't counting on getting more than a thank-you note. But Zendesk or HackerOne (whoever you want to blame here) didn't even accept the bug in the first place. That's the problem here, not the omission of an exploit chain.

dclowd9901 · a year ago
The dude demonstrated the ability to infiltrate a client’s Slack instance via their vulnerability. If that’s not enough to make the hairs on your neck stand on end as an engineer, go fucking do something else.
thrdbndndn · a year ago
He didn't demonstrate this in his initial report to Zendesk.
tptacek · a year ago
I think this is (descriptively) correct, but it's a difficult point to make in a message board argument because of hindsight bias.
gavingmiller · a year ago
It’s a good callout, shouldn’t have editorialized like that.
davedx · a year ago
I don't think it is. Getting arbitrary access to corporate support ticket chains seems pretty high impact to me? Isn't that a gigantic data breach (also probably a GDPR breach) already, before you get to the Slack takeover?
23B1 · a year ago
"If you won't illustrate the impact of our mistake, we aren't obligated to listen to you" is peak CYA
gavingmiller · a year ago
Not even close to the point I was making: if you want to be taken seriously, write to your audience.
oarla · a year ago
The worst part: "We kindly request you keep this report between you and Zendesk". After being notified of a problem on their side and ignoring it, now they want to keep things hush-hush? That's exactly what the author did in the first place, but they chose to brush it aside. That itself is highly unprofessional. With such an attitude, I'm not surprised that they did not pay out the bounty.
daghamm · a year ago
The correct procedure when they fuck up and close the report is to ask for the report to be made public. Had he done this, it would have been a non-issue.

The reason people don't do this is that they think they have something that can be modified into another bug - which is exactly what happened here.

op00to · a year ago
“I will consider not disclosing if you compensate me for my time.”
jjmarr · a year ago
You can't ask for money in exchange for not revealing a bug. That's blackmail, which is illegal and ethically dubious.

White hat hackers do not require companies to pay them in exchange for not revealing a bug---the reveal of a bug only happens if a company doesn't fix that bug. Companies can be jerks and refuse to pay anything. That doesn't give you the right to blackmail them---you and other security researchers can just refuse to help them in the future.

A refusal to fix the vulnerability is what happened in the original blogpost, so it was fair game for release since the company doesn't care.

Hackers that don't care about ethics or legality won't bother blackmailing companies with vulnerabilities. They'll sell or use the vulnerability to steal more important data, and blackmail companies for millions of dollars in crypto.

ldoughty · a year ago
I hate that Zendesk refused to pay out for this bug. The author made a good faith effort to report it. The author also tried to escalate it.

After they decided not to work on it, they later came back, asked him for more information, and treated it like a bug...

The author should have gotten a reward. He did everything right, even if Zendesk claims it's not an in-scope bug.

paulpauper · a year ago
That is how it works. Do nothing, so that the researcher inadvertently breaks the rules, use that as an excuse not to pay, and then fix the problem.
m11a · a year ago
Doubtful. It's probably just incompetence, rather than malice.

The incident almost certainly cost Zendesk more in (according to the gist) lost contracts and reputational damage than it would've cost to pay the security researcher a bounty.