Reading between the lines, it looks like the story behind the story here is that this security researcher followed responsible disclosure, confirmed that the vulnerabilities were fixed before making this post, but never heard anything back from the company (and thus didn't get paid, although that's only a fair expectation if the company formally set expectations for paying out on stuff like this ahead of time).
I’m curious about the legal/reputational implications of this.
I personally found some embarrassing security vulnerabilities in a very high profile tech startup and followed responsible disclosure to their security team, but once I got invited to their HackerOne I saw they had only ever done a handful of payouts, all around $2k. I was able to do some pretty serious stuff with what I found and figured it was probably more like a $10k-$50k vuln. I was pretty busy at the time, so I never did all the formal write-up stuff they presumably wanted (I had already sent them several highly detailed emails) because it wouldn't be worth a measly $2k. Does that mean I can make a post like this?
The screenshot of the email lacks detail so I don't know what part of the DMCA the author breached here, but this feels a lot like your standard DMCA abuse.
This fits with the complete lack of care for ethics and societal awareness from Gary and Paul on down. They just want companies that can succeed by the usual amoral metrics of Silicon Valley (money). Which is entirely their right, but here is one of the social costs, in a form most "hacker" founders can maybe appreciate. (As opposed to a low income resident getting evicted to make way for an illegal Airbnb)
It is irresponsible. It brings attention to an issue that has not yet been resolved, which will likely lead to users getting data stolen/scammed.
Even the most security-aware companies have a process to fix vulnerabilities, which takes time.
I would never hire someone that doesn't responsibly coordinate with the vendor. In most cases it's either malicious or shows a complete lack of good judgement.
I would say that it is responsible disclosure. Or anyways, not doing that is irresponsible disclosure. The corporation may be hurt by early disclosure, and that’s whatever, but very often, there are a ton of ordinary people that are collateral damage, and the only thing they did wrong was exist in a society where handing over hoards of personal data to a huge corporation is unavoidable.
So yes, anyone who discloses before the company has had a reasonable chance to fix things is indeed irresponsible.
It won't change until there is better regulation with muscular enforcement. Right now the choice is between paying an $X bug bounty and the vague possibility of some problem for not paying a bounty (e.g., someone sues you, or a PR fiasco causes you to lose customers). That basically means a choice between a 100% chance of losing $X right now (to pay the bounty) or an unknown but probably low chance of an unknown but probably high cost later on. Without any specific incentives, most people making decisions at companies will just choose to gamble on the future, hoping that they can somehow dodge the consequences.
To change that calculus, the chance of that future cost needs to go up, and the amount needs to go up too. If the choice is between a $100k bug bounty now and a $10 million penalty for a security breach, people will bite the bullet and pay the bounty. If the CEO knows he will lose his house if it's discovered that he dismissed the report and benefited financially from doing so, he will pay the bounty.
The consequences need to be shifted to the companies that play fast and loose with customer data.
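The gamble described above comes down to a simple expected-cost comparison. A quick sketch with the numbers from the example (the breach probabilities are made-up assumptions purely to illustrate the arithmetic):

```python
# Illustrative expected-cost comparison for the "pay the bounty or gamble" choice.
# All probabilities here are hypothetical assumptions, not real data.

def expected_cost(probability: float, cost: float) -> float:
    """Expected value of an uncertain future loss."""
    return probability * cost

bounty = 100_000            # certain cost of paying the bounty now
breach_penalty = 10_000_000  # hypothetical penalty if a breach is punished

# With weak enforcement, a decision-maker might estimate a ~0.5% chance
# of ever being penalized, so gambling looks "cheaper" than the bounty:
weak = expected_cost(0.005, breach_penalty)   # 50,000 < 100,000

# With muscular enforcement pushing the perceived chance to ~5%,
# the same penalty makes paying the bounty the rational choice:
strong = expected_cost(0.05, breach_penalty)  # 500,000 > 100,000

print(weak < bounty, strong > bounty)
```

Under the weak-enforcement assumption the gamble wins; raising either the probability or the size of the penalty flips the decision, which is exactly the regulatory lever being argued for.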
> I’m curious about the legal/reputational implications of this.
The comments and headlines will be a bit snarkier and more likely to go viral, more likely to go national on a light news day, along with the human interest angle of not getting paid, which everyone can relate to.
I guess I mean the legal risks to both sides. Security is only a portion of what I do and I only dabble in red teaming (this is the first time I ever tried it on a third party).
So I legitimately don’t know what the legalities of writing a “here’s how I hacked HypeCo” article are if you don’t have the express approval to write that article from HypeCo. Though in my case the company did have an established, public disclosure program that told people they wouldn’t prosecute people who follow responsible disclosure. TFA seems even murkier because Burger King never said they wouldn’t press charges under the CFAA…
Burger King is almost certainly going to experience no damage from this.
Their takeaway will likely be non-existent. They'll fix these bugs, probably implement zero changes to their internal practices, and won't suddenly decide to spin up a bug bounty.
This sucks. As a developer who puts a lot of effort into security, I hate that companies can get away with such negligence.
I hope people invent AI bots which uncover vulnerabilities and make them available publicly for free, in real-time. This would create the right incentives for companies.
Modern software has become a giant house of cards, under the control of foreign powers who possess asymmetric knowledge. This is because our overarching legal system protects mediocrity, which gives nefarious skilled people a massive upper hand while hurting well-intentioned skilled people who try to build software the right way.
The nefarious skilled people don't need to ask for permission and don't need to convince anyone to make money from their schemes... Well-intentioned skilled people build products which are impossible to sell or monetize because nobody cares enough about security... Companies mostly externalize the consequences of vulnerabilities to their users and leverage market monopolies to keep them.
No. Just because there's a blog post about a fixed vulnerability doesn't imply that it's ok to write a blog post about an unfixed vulnerability.
I'm not saying it's wrong to post a blog post about an unfixed vulnerability. I'm just saying that the existence of a blog post about a fixed vulnerability has no impact on whether it's ok or not to post a blog post about an unfixed vulnerability.
I'm most surprised that they have this whole system for how drive-thru interactions should go. Positive tone. Saying "you rule" like their exceedingly-irritating television commercials. Like... what if you don't? "If you don't follow the four Sales Best Practices, you're gonna be flippin' burgers for a living. Oh. Well. Oh." They're getting paid $6 an hour. The microphone/speaker system can't reproduce audio to an extent where a customer could ever be sure if you said "you rule" or that your tone is positive. They are thrilled if at least a few items they ordered are in the bag they collect. Why write software to micromanage minimum wage employees?
> They're getting paid $6 an hour. [...] Why write software to micromanage minimum wage employees?
Ironically, the less a job pays, the harsher and more demanding the bosses tend to be.
Earning six figures as a software developer, working from home, and you have to take a week off sick? No problem, take as long as you like, hope you feel better soon.
Earning minimum wage at a call centre? Missing a shift without 48 hours advance notice is an automatic disciplinary. No, we don't pay sick leave for people on a disciplinary (which is all of them). Make sure you get a doctor's note, or you're fired.
I think there's a U-shaped curve here. Make it all the way to Principal software engineer and you might be expected to work longer hours and bend your personal sense of ethics in service of the company's mission.
I'm not making a value judgement. I'm saying, how are they going to punish you, as a burger flipper, for not saying their TV commercial tagline? Demote you to burger flipper? That's already your job. So why pay people to build a system to track their metrics, when they realistically have no way of making this happen?
Pay people $30/hour and I bet they'll say it every time without software yelling at them. (With the software in place, I have never heard the line "you rule" at Burger King, but I also only go like twice a year. So why write it? It doesn't work.)
It seems the post is down because of a DMCA complaint made to Cloudflare. I’m curious about the different levels of DMCA complaints. I’m sure hosting companies receive them, but what happens if I’m self-hosting and not using Cloudflare? Will my ISP or domain provider get a DMCA? Especially curious for this case.
Usually yes, it would go to your ISP. And depending on the ISP they’ll forward it to you or not. This was way more prevalent in the era where movie studios were hiring firms to send bulk DMCAs to people downloading torrents.
Back in 2008–2009, we had a lot of bare metal servers at SoftLayer's (Dallas, TX) facility. One of our customers ran a South American music forum, and anytime someone uploaded an MP3, the data center would honor the DMCA request and immediately stop routing traffic to the server until the issue was resolved. Now imagine what tools they might have in their arsenal in 2025.
There’s no liability or exposure for recording non-consensually. It’s a public space. There’s not even an edge case. If a random member of the public could walk into the drive-thru (which they can), then anything can be recorded without notification or consent.
Creating a database of recordings without users being able to know about or influence it is clearly a violation of the GDPR if there is PII. That's going to be costly for BK.
That's what I'm getting at with the expectation of privacy part. Talking into a drive thru speaker isn't really a private activity since everyone around can kinda hear it, but it'd probably be better to disclaim it anyway since someone attempting to file on you for it still costs money.
Depends on the country. In Finland, it's ok to record your own discussions. Whether the recorder is BK (a third party) or the cashier is an interesting question, though.
Not to nitpick but being emailed a temporary password in cleartext doesn't seem like an issue to me, assuming you're required to change it as soon as you log in.
Especially since that email address presumably is used for the forgot password authentication anyway.
But it is at least the equivalent of a code smell. Perhaps a "UX smell"?
A couple of obvious ways it can go bad: An attacker could potentially have access to your email (perhaps from a data breach elsewhere or a credential stuffing attack) and use the temp password before you do. If the temp password is the one entered by the user during signup, a naive user could sign up using their commonly-reused password, which then sits in cleartext forever in their email archive.
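By contrast, the pattern the earlier comment assumes (temp password plus forced change on first login) is safer when the temporary value is random and short-lived, so nothing user-chosen or reusable ever sits in the inbox. A minimal sketch in Python; the in-memory user store and field names are hypothetical, not any real API:

```python
import hashlib
import secrets
import time

# Hypothetical in-memory user store; a real system would persist this.
users: dict[str, dict] = {}

TEMP_TTL = 15 * 60  # temporary credentials expire after 15 minutes

def hash_password(password: str) -> str:
    # Placeholder for brevity: a real system would use a slow KDF (bcrypt/argon2).
    return hashlib.sha256(password.encode()).hexdigest()

def issue_temp_password(email: str) -> str:
    """Issue a random, short-lived temporary password.

    The value is random (never user-chosen), only its hash is stored,
    and the account is flagged so the first login forces a password change.
    """
    temp = secrets.token_urlsafe(16)
    users[email] = {
        "temp_hash": hash_password(temp),
        "expires_at": time.time() + TEMP_TTL,
        "must_change_password": True,
    }
    return temp  # emailed once; useless after expiry or first login
```

This addresses both failure modes above: an attacker reading old email finds only an expired one-time value, and the user's reused password never transits email at all.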
https://web.archive.org/web/20250906150322/https://bobdahack...
> ...this feels a lot like your standard DMCA abuse.
This AI-generated takedown was funded in part by a Y Combinator company: https://cyble.com/press/cyble-recognized-among-ai-startups-f...
Branding it as “responsible” puts a thumb on the scale, implying that not coordinating with the vendor is somehow irresponsible.
> In most cases it's either malicious or shows a complete lack of good judgement.
In the case of bobdajrhacker? Both.
> The consequences need to be shifted to the companies that play fast and loose with customer data.
There are basically zero consequences for whatever fuckups you do, and thus no incentive for companies to pay for vulnerabilities.
> The comments and headlines will be a bit snarkier, more likely to go viral...
Bad PR move
There is if it relegates you to shitty work environments and doesn’t afford a decent living as is generally the case in the US.
Edit: Never mind -- > https://infosec.exchange/@bobdahacker/115158347003096276
I guess they could argue shouting into a machine in public carries no expectation of privacy, but it seems like a liability to me.
Edit: Another commenter has made me aware that some states do ban non-consensual audio recordings in public: https://www.dmlp.org/legal-guide/massachusetts-recording-law
The laws prohibiting these recordings have neither been upheld nor overturned by the US Supreme Court.
That is not how wiretapping laws work in every state.
The hilarious sarcasm throughout was the cherry on top for me.
The DMCA report was actually sent from response@cycle.com, and Cyble [1] appears to be a DMCA-takedown-as-a-service 'solution'.
[1]: https://cyble.com/
> ...doesn't seem like an issue to me, assuming you're required to change it as soon as you log in.
But that's negated completely by the next part about there being a sign-up without any email verification.