Summary from my read of this: (the article does a great job of describing the process of exploiting this, as well as motivating why these numbers are too low, but here are the vulnerabilities...)
- Slack chose to use a 6-hexadigit/24-bit "secret code" as the only/final code required to download "privately" shared files. That's way too short; people have botnets almost that big, such that even aggressive IP-based rate-limiting wouldn't stand a chance.
They might have also made these fairly common mistakes (which served to compound the vulnerability):
- Returning different/distinguishable error codes when the request matches correctly on some parts but not all. This allows attackers to guess each part in turn.
- Considering values such as the "file ID" to provide additional security/entropy, when in fact these IDs are generated semi-sequentially, and thus a moderately-sophisticated attacker can narrow the search space dramatically.
- Considering values such as the "filename" to provide more security/entropy; however, you can make no guarantees about the length or uniqueness of filenames, so you shouldn't consider that a security feature at all.
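To make the "way too short" claim concrete, here's a back-of-the-envelope sketch of how quickly a 24-bit code falls to a distributed brute force. The botnet size and per-IP request rate below are illustrative assumptions, not measured figures:

```python
# Sketch: expected time to brute-force a 6-hex-digit (24-bit) secret code.
# Botnet size and per-bot rate are assumptions for illustration.

code_space = 16 ** 6            # 6 hex digits = 2^24 = 16,777,216 possibilities
bots = 100_000                  # assumed botnet size
reqs_per_bot_per_sec = 1        # assumed per-IP rate under aggressive limiting

total_rate = bots * reqs_per_bot_per_sec
expected_seconds = (code_space / 2) / total_rate   # expected tries = half the space

print(code_space)        # 16777216
print(expected_seconds)  # ~84 seconds on average
```

Even with rate limiting throttling each IP to one request per second, a modest botnet finds a given code in under two minutes on average.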
Interestingly, I reported the same bug via HackerOne 9 months ago. It was closed as not applicable. So they had at least two independent reports of the same bug and failed to understand it, acknowledge it, and fix it.
Way to go, Slack.
If you have any critical data passing through Slack, then when you get owned, you won't be able to say it wasn't entirely preventable.
One thing I can add from my analysis is that there aren't separate counters for files/teams/etc.; there's only one. So if a given ID is used by a team, it won't be used as a file ID.
The correct answer for using URLs as capabilities (which is what a 'secret URL' really is: a capability to a resource, which can be handed out, copied &c.) is to use a 256-bit value as part of the URL. Thus, rather than 'http://example.invalid/TEAM-DOC-SHORT-RAND/' use 'http://example.invalid/w6uP8Tcg6K2QR905Rms8iXTlksL6OD1KOWBxT.... If you're really paranoid, double the length. I guarantee it won't be guessed, in either case.
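A minimal sketch of minting such a capability URL in Python, using the standard-library `secrets` module (which draws from the OS CSPRNG). The base URL and function name are illustrative:

```python
import secrets

def make_capability_url(base="https://example.invalid/files/"):
    """Append a 256-bit URL-safe random token to a base URL.

    The base URL and function name are hypothetical; the key point is
    that secrets.token_urlsafe(32) yields 32 bytes (256 bits) of
    cryptographically secure randomness, base64url-encoded.
    """
    token = secrets.token_urlsafe(32)   # 32 bytes = 256 bits
    return base + token

url = make_capability_url()
print(url)
```

A 256-bit token makes guessing infeasible regardless of how many capability URLs exist or how fast an attacker can make requests.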
Yes, exactly this. It's not rocket science; it's odd how much effort Slack put into implementing (and then reimplementing in a slightly less broken fashion) a clearly wrong solution.
Don't forget that it should ideally be cryptographically random. If the sequence is predictable (e.g. based on an auto-incrementing counter or on time), then you might still be able to guess a 256-bit number.
The guideline for unguessability in that TR just says to use a UUID. UUIDs are ugly, and quite long for the amount of randomness they contain. I prefer Slack's new 10-character base36 solution.
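For comparison, here's the entropy each scheme actually provides (a quick calculation, not a security endorsement of any of them):

```python
import math

def bits(alphabet_size, length):
    """Bits of entropy in a uniformly random string of this alphabet/length."""
    return length * math.log2(alphabet_size)

uuid4_bits = 122             # UUIDv4 fixes 6 of its 128 bits (version/variant)
base36_10 = bits(36, 10)     # Slack's new 10-char base36 scheme
token_256 = 256              # a 256-bit token as suggested above

print(round(base36_10, 1))   # ~51.7 bits
```

So the 10-character base36 code carries roughly 52 bits of entropy: far more than the old 24-bit code and enough to defeat online brute force, though well short of a UUIDv4's 122 bits or a full 256-bit token.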
> We apologize for the delayed reply. We track these issues via our internal bug system, and only reply to the reporter once the bug is resolved internally. We generally ignore messages asking for updates, as we receive a high volume of these (even for non-issues).
This rationalization is illogical, which usually means someone is in conflict. From the outside, it could be that they are fixing it, or don't know about it, or don't care.
Given the conflicting rationalization, I'd say they didn't know about it and then made up an excuse instead of owning it.
It probably means that they're not prioritising vulnerability reports. Which is their prerogative honestly, but it doesn't make researchers happy to work with you.
The biggest 'fault' here I think lies squarely with HackerOne.
They should've enforced their own guidelines and given me the option to publish in their system after 180 days. But I still don't have that option.
The 180 day guidance you reference falls under a "Last Resort" clause when "... the Response Team [is] unable or unwilling to provide a disclosure timeline". (which, at first glance, might not have been the case here?)
These "Last Resort" scenarios have not yet been fully codified. As a safety precaution, the workflow is still initiated manually with support as these scenarios are extremely rare and littered with edge cases. We've been learning a lot from studying disclosures like this one and you can expect to see the "Last Resort" workflow codified in the product in the future.
Now that the report has been Resolved, you should see the normal disclosure options available. Please always feel free to send me a note if you have any questions or feedback on our disclosure workflows - especially if we don't support your preferred route.
This reminds me of my experience with Imgur's private images.
A few years ago, I wrote a little JS tool to browse random Imgur images by guessing their URLs (i.imgur.com/<5-digit code>) until it found one that succeeded. It would add the found image to an infinite-scrolling page. It was kinda fun to browse, and a lot of people seemed to enjoy playing with it.
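A sketch of the guessing approach described above. The exact alphabet and URL format Imgur used are assumptions here; this version only constructs candidate URLs rather than fetching them, but it shows why a 5-character code space is trivially searchable:

```python
import random
import string

# Assumed ID alphabet: the 5-character codes appeared to be alphanumeric.
ALPHABET = string.ascii_letters + string.digits   # 62 characters

def random_candidate(length=5):
    """Build a random candidate image URL (format is an assumption).

    The original tool would then fetch each candidate and keep the
    ones that returned a real image.
    """
    code = "".join(random.choice(ALPHABET) for _ in range(length))
    return f"https://i.imgur.com/{code}.jpg"

print(random_candidate())
print(62 ** 5)   # only ~916 million possible 5-character codes
```

With under a billion possible codes and a dense population of real images, random probing finds hits constantly, which is exactly why "unlisted" URLs at that length provide no privacy.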
After a couple of years, though, Imgur suddenly started blocking access to their images on my site. It turned out they were blocking based on the Referer header.
I emailed them asking what was up, and apparently they were attempting to ensure the privacy of public-url images by manually going after any tools like mine (if you google 'random imgur', you'll find dozens).
I didn't bother circumventing this; I didn't want to be a jerk just to prove a point. I did try to point out that there were a number of ways to get around something as simple as a Referer block, but I don't think the customer support person I was dealing with was really interested in discussing the issue, and I let it drop.
I had a similar experience, though I was on the other side. While under a brute-force login attack, an IT guy suggested I change the login HTTP method from GET to POST (which is more appropriate anyway). While I agreed with him that this was better, I pointed out that it would be very easy to circumvent. However, he proved me wrong: the attacks stopped after that (and I am quite sure it is not because they gained access). Not all attackers are very determined...
Definitely, I'm disappointed with Slack's responses. We did a trial and have had some correspondence with their support team, which has been excellent to date. So I assumed they were above some of this Silicon Valley elitism. I'm glad to see this kind of public disclosure. We have been a customer since that initial trial; we stopped using HipChat.
To be fair, most of the bad correspondence was from 2014. Their new representative 'Leigh' appears to be doing excellent work.
Also we're still happy users of Slack, I would just never trust them with secrets :-).
If I were responsible for security at Slack, the thought of potentially leaking uploaded files like this would keep me up at night. Slack has gained such wide adoption; think of the things that people are sharing with their coworkers all day, every day. Someone with ill intent could have found so many valuable things.
I thought that went without saying, but yes, it must be cryptographically random (not just ideally; it must be).