macNchz · 2 years ago
Beyond having hardware keys, this scenario is why I really try to drive home, in all of my security trainings, the idea that you should instantly short-circuit any situation where you receive a phone call (or other message) and someone starts asking for information. It's always okay to say, "actually, let me get back to you in a minute," hang up, and call back on a known phone number from the employee directory, or communicate on a different channel altogether.

Organizationally, everyone should be prepared for and encourage that kind of response as well, such that employees are never scared to say it because they're worried about a snarky/angry/aggressive response.

This also applies to non-work related calls: someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.

codebje · 2 years ago
I've had a wide range of responses from people calling me when I tell them I won't give personal details out based on a cold call.

A few understand immediately and are good about it. Most have absolutely no idea why I would even be bothered about an unexpected caller asking me for personal information. A few are practically hostile about it.

None, to date, have worked for a company with an established process for safely verifying the identity of the person they're calling. None. Lengthy on-hold queues, a different person with no context, or a process that can't be suspended and resumed, so the person answering the phone has no idea why I got a call in the first place.

(Yet I'll frequently get email full of information that wouldn't be given out over the phone, unencrypted, unsigned, and without any verification that the person reading it is really me.)

The organisational change required here is with the callers rather than the callees, and since it's completely about protecting the consumer rather than the vendor, it's a change that's unlikely to happen without regulation.

Terretta · 2 years ago
> None, to date, have worked for a company that has a process established for safely establishing identity of the person they're calling

What's fun here is, the moment they ask you for anything, flip the script and start to try to establish a trust identity for the caller.

Tell them you need to verify them, and then ask how they propose you do that.

Choose your own adventure from there.

wahern · 2 years ago
Anecdotally, I seem to have had the opposite experience. I've been doing this for at least 15 years, and never had a negative reaction. With bank, credit card, or finance-related companies, they seem to understand immediately. With other callers I've gotten awkward pauses, but ultimately they were politely accommodating or at least understanding that some issue would have to be processed through other channels or postponed.

However, I don't have strict requirements. When a simple callback to the support line on the card, bill, or invoice doesn't suffice--and more often than not it does, where any support agent can field the return call by pulling up the account notes--all I ask for at most is an extension or name that I can use when calling through a published number. I'll do all the leg work, and am actually a little more suspicious when given a specific number over the phone to then verify. Only in a few cases did I have to really dig deep into a website for a published number through which I could easily reach them. In most cases it suffices to call through a relatively well attested support number found in multiple pages or places[1].

I'm relatively confident that every American's Social Security number (not to mention DoB, home address, etc) exists in at least one black market database, so my only real practical concern is avoiding scammers who can't purchase the data at [black] market price, which means they're not very sophisticated. A callback to a published phone number for an otherwise trusted entity that I already do business with suffices, IMO. And if I'm not already doing business with them, or if they have no legitimate reason to know something, they're not getting anything, period.

[1] I may have even once used archive.org to verify I wasn't pulling the number off a recently hacked page, as it was particularly off the beaten path and a direct line to the department--two qualities that deserve heightened scrutiny by my estimation.

transitionnel · 2 years ago
Someone needs to standardize a simple reverse-authentication system for this.

For example whenever a caller is requesting sensitive information, they give you a temporary extension directing to them or an equal, and ask you to call the organization's public number and enter that extension. Maybe just plug the number into their app if applicable to generate a direct call.

Like other comments have mentioned, the onus should be on them. Also, they would benefit from the resultant reduction in fraud. Maybe a case study on fraud reduction savings could help speed the adoption process without having to invoke the FCC.
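
Roughly, on the organization's side, minting a temporary extension could look like this (a toy sketch; the names, digit count, and TTL are all made up):

    import secrets
    import time

    EXTENSION_TTL = 15 * 60   # seconds a temporary extension stays valid
    active_extensions = {}    # extension -> (agent_id, expiry)

    def mint_extension(agent_id: str) -> str:
        # Agent mints this at the start of the call and reads it to the customer.
        ext = f"{secrets.randbelow(10**6):06d}"
        active_extensions[ext] = (agent_id, time.time() + EXTENSION_TTL)
        return ext

    def route_inbound(ext: str):
        # Called by the PBX when someone dials the public number + extension.
        agent_id, expiry = active_extensions.get(ext, (None, 0.0))
        if agent_id is None or time.time() > expiry:
            return None  # unknown/expired: fall through to the normal queue
        return agent_id  # connect the callback straight to the original agent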

tortue0 · 2 years ago
I've had my cable company call me directly about an account issue; I told them I couldn't validate it was them, and the person got somewhat irate with my response, insisting there was no one I could call to verify them and that it had to be handled on that call. Turns out it was just a sales call (upselling a product) - which probably speaks to the level of talent they hire for that.
dataflow · 2 years ago
> this scenario is why I really try to drive home, in all of my security trainings, the idea that you should instantly short circuit any situation where you receive a phone call (or other message) and someone starts asking for information.

The trouble is, calling the number on the back of your card requires actually taking out your card, dialing it, wading through a million menus, waiting who-knows-how-long for someone to pick up, and hoping you're not reaching a number that'll make you go through fifteen transfers to get to the right agent. People have stuff to do; they don't want to wait around with one hand occupied for fifteen minutes waiting for a call to get picked up. When the alternative is just giving your information over the phone... it's only natural that people do it.

Of course it's horrible for security, I'm not saying anyone should just give information on the phone. But the reality is that people will do it anyway, because the cost of the alternative isn't necessarily negligible.

strken · 2 years ago
I say "If this is a scam call please hang up now, otherwise give me an invoice or ticket number or name and department and I'll get back to you," and they usually do hang up. The case where you need to actually call your bank is really rare.

Note that it's very important not to let them give you an actual phone number to call on. This sounds obvious but I know someone who hung up but called back on a number given by the scammers, which was of course controlled by them and not the bank.

macNchz · 2 years ago
I don't think most people who get scammed this way pause to say "oh, this might be someone stealing my credit card number", then disregard that thought because it's too much of a pain to call back on an official line. Instead I think they don't question the situation at all, or the scammer has enough information to sound sufficiently authoritative. Most non-technical people I've talked to about this are pretty scared of getting scammed, but tell me the thought never crossed their mind they could call back on a trusted number.

I like the "hang up, call back" approach because it takes individual judgment out of the equation: you're not trying to evaluate in real time whether the call is legit, or whether whatever you're being asked to share is actually sensitive. That's the vulnerable area in our brains that scammers exploit.

zamfi · 2 years ago
I think the parent poster is arguing that we should normalize this behavior, not that there's no excuse for not calling the number back given the reality we have today.

You're saying it's natural for people not to want to call back and wade through a million menus, and I agree.

But the conclusion from this is that companies should change their processes so that calling back is easy, precisely because otherwise people won't do it.

And the more people that do it despite the costs, the more normalized it'll be, and the more companies will be incentivized to make it easier.

josho · 2 years ago
Great point. But it could be easily solved with something like: “Call the number on the back of your credit card. Push *5 and when prompted enter your credit card number and you will be immediately connected back to my line”
mythhabit · 2 years ago
If my bank cold calls me, I can say "I just need to verify the legitimacy of your call, so send me your direct number in the online bank app, and I'll call you". It works every time, but it also works because all the employees have a direct number.

Normally we just write messages back and forth in the banking app, and if we talk, it's an online meeting with video. Only for large business do I go to the physical site.

tyingq · 2 years ago
>This also applies to non-work related calls: someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.

There's a number of situations, not just credit card ones, where it's impossible or remarkably difficult to get back to the person that had the context of why they were calling.

Your advice holds, of course, because it's better to not be phished. But sometimes it means losing that conversation.

munk-a · 2 years ago
My mother recently started having to deal directly with utility bills and the like, and this was something we impressed on her very early: you should never agree to billing or hand over CC/account information in a phone call you didn't initiate. She hasn't run into an issue yet - most utilities, online stores and other entities have call-in numbers if you need to resolve a billing dispute. That random company you bought a plumbing valve from has an office somewhere with a secretary who gets a phone call maybe three times a month from customers looking to resolve issues - and Amazon has mostly centralized support for small sellers and has lines you can call to resolve any disputes, which may forward you to the original selling party but often just resolve the issue directly.

Honestly, the worst experiences are usually with large companies that funnel all customers into massive phone centers - I've probably lost the better part of a week to Comcast over my lifetime.

macNchz · 2 years ago
Definitely, sometimes they'll have a case number or agent id you can use to get back to them, but there are cases where you have to assume if it's important to them they'll continue to nag or reach out on another channel.

I have had at least one situation where I spent a while trying to get back to a quite convincing/legitimate-sounding caller this way; as I escalated through support people, it became increasingly clear that the initial call had been a high-quality scam, and not in fact a real person from the bank.

hinkley · 2 years ago
Advice I haven't even followed myself:

It's probably a good idea to program your bank's fraud number into your phone. The odds that someone hacks your bank's Contact Us page are small but not zero.

The bedrock of both PGP and .ssh/known_hosts could be restated as, "get information before anyone knows you need it".
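
In code, that "information in advance" idea is just trust-on-first-use pinning, something like (toy sketch; the file name is made up):

    import json
    import os

    PIN_FILE = os.path.expanduser("~/.known_callers.json")  # hypothetical pin store

    def caller_is_pinned(name: str, number: str) -> bool:
        # Like ~/.ssh/known_hosts: record the number before you're under
        # pressure, then check every later contact against the pinned value.
        pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
        if name not in pins:
            pins[name] = number
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return True  # first contact: trusted by definition
        return pins[name] == number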

Fraud departments contacting me about potentially fraudulent charges is always going to make me upset. Jury is still out on whether it will always trigger a rant, but the prognosis is not good.

GauntletWizard · 2 years ago
At least once I have gotten a terribly phrased and link-strewn "Fraud Alert" from a bank, reported it to said bank's anti-phishing e-mail address, gotten a personalized mail responding that it was in fact fraud and that they had policies against using third party subdomains like... And then found out a day later that yes, that was their real new anti-fraud tool and template.

We'll need jail time for the idiots writing the government standards on these fraud departments, and then for the idiots running these fraud departments, before it gets better.

lxgr · 2 years ago
> someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.

This recently happened to me, and bizarrely they wouldn’t tell me what’s actually going on on my account because of not being able to verify me. (They were also immediately asking for personal information on the outbound call, which apparently really was from them.)

OJFord · 2 years ago
Financial companies, the government, ... I always try to bother to raise the issue afterwards, but (not that I think my comments alone would do anything) so far nothing I've taken issue with has changed, as far as I can tell.

A big one I'm aware of many others complaining about in the industry is local governments in the UK soliciting elector details via 'householdresponse.com/<councilname>' in a completely indistinguishable from phishing sort of way.

(They send you a single letter directing you to that address with 'security code part 1' and '2' in the same letter, along with inherently your postcode which is the only other identifier requested. It's an awful combination of security theatre and miseducation that scammy phishing techniques look legit.)

polygamous_bat · 2 years ago
That's the big problem, isn't it? People think it's okay to give out information on an incoming call because often it really is okay. If these requests were illegitimate 99% of the time, people would refuse and phishers would not use this method as an attack vector.
hn_throwaway_99 · 2 years ago
Amen, amen, amen. IMO "Hang up, look up, call back" should basically be the only thing that security training focuses on, and it should be culturally ingrained: https://krebsonsecurity.com/2020/04/when-in-doubt-hang-up-lo...
RulerOf · 2 years ago
> any situation where you receive a phone call (or other message) and someone starts asking for information.

I had AWS of all places do this to me a year or two ago. The rep needed me to confirm some piece of information in order to talk to me about an ongoing issue with the account. If I recall correctly, the rep wanted a nonce that had been emailed to me.

"I'm terribly sorry but I won't do that. You called me."

Ultimately turned out to be legit, but I admit I was floored.

quickthrower2 · 2 years ago
This is where some kind of chaos monkey might be good. Imagine something that randomly slacks from one human account to another asking for passwords and then the receiver has to press a "suspect message" button as a form of ongoing awareness training.

As part of that a genuine ask for a password would get the same response, and perhaps the button sends a nice message like "Looks like you have asked for a password. We get it, sometimes you need to get the job done, but please try to avoid this as it can make us insecure. Please read our security policy document here."
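
A rough sketch of the monkey with a Slack bot (assuming the slack_sdk package and a bot token; all names and wording here are illustrative):

    from slack_sdk import WebClient

    client = WebClient(token="xoxb-...")  # hypothetical bot token

    def send_password_bait(channel: str) -> None:
        # Post a fake credential ask with a report button; a real system would
        # randomize sender, wording, and timing, and log reports vs. replies.
        client.chat_postMessage(
            channel=channel,
            text="Hey, can you send me your VPN password real quick?",
            blocks=[
                {"type": "section",
                 "text": {"type": "mrkdwn",
                          "text": "Hey, can you send me your VPN password real quick?"}},
                {"type": "actions",
                 "elements": [{"type": "button",
                               "text": {"type": "plain_text", "text": "Suspect message"},
                               "action_id": "report_suspect_message"}]},
            ],
        )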

itsoktocry · 2 years ago
>This also applies to non-work related calls: someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.

This is a policy I've implemented as well, both for myself and loved ones: don't provide any information to unverified incoming calls. Zero.

Sometimes I'll get some kind of sales call, which I may even be interested in. I'll say: proceed with the pitch. To which they'll reply, "first we need to confirm your identity." Then I refuse: you called me. Why do you need me to provide private information to confirm my identity?

raybb · 2 years ago
What does unverified mean in this case?
xnx · 2 years ago
I've expanded this to the general case and don't answer phone calls.
hackeraccount · 2 years ago
I've stopped listening to people. I limit myself to talking at them. The upside is that I'm never fooled. The downside is that so far as I can tell half the world hates me and the other half think I'm a lunatic.
dspillett · 2 years ago
> the idea that you should instantly short circuit any situation where you receive a phone call (or other message) and someone starts asking for information

It really irritates me that some significant companies openly encourage customers to ignore this advice, teaching them bad practice. The most recent case I know of is PayPal calling me. It was actually them (new CC account; I thought I'd set up auto-payment but hadn't, so I was a couple of days late with the first payment), but it so easily could have not been. The person on the other end seemed rather taken aback that I wouldn't discuss my account or confirm any details on a call I'd not started, and all but insisted that I couldn't hang up and call back. In the end I just said I was hanging up, and if I couldn't call back then that was a them problem, because at that point I had no way of telling if it was really the company or not. At that point she said she'd send a message that I could read via my account online, which did actually happen, so it wasn't a scammer. But to encourage customers to perform unsafe behaviour with personal and account details is highly irresponsible.

rmbyrro · 2 years ago
> someone starts asking for information

Especially OTP codes.

I can't understand how someone works at a tech company and is clueless to the point of sharing an auth code over the phone. My grandma, sure, but a Retool employee? C'mon, haven't we all read enough of these stories?

ketzo · 2 years ago
You can't understand at all how someone with your coworker's voice might lull you into a false sense of urgency and safety?

Security is a weak-link problem, not a strong-link one. You have to plan for the least security-minded people, the tired and stressed employee.

raverbashing · 2 years ago
True. Why are people so eager to pick up the phone?

The millennials are right in not picking up the phone.

karussell · 2 years ago
What vendors for hardware keys would be recommended besides Yubico?
alsodumb · 2 years ago
Maybe it’s just me, but I am really skeptical about the deepfake part - it’s a theoretically possible attack vector, but the only evidence they could have to support this statement would be the employee’s testimony. Targeting a particular employee with the voice of a specific person this employee knows requires a lot of information and insider info.

Also, I think the article spends a lot of effort trying to blame Google Authenticator and make it seem like they had the best possible defense and yet attackers managed to get through because of Google’s error. Nope, not even close. They would have had hardware 2FA if they were really concerned about security. Come on guys, it’s 2023 and hardware tokens are cheap. It’s not even a consumer product where one can say that hardware tokens hinder usability. It’s a finite set of employees, who need to do MFA a certain number of times for certain services, mostly using one device. Just start using hardware keys.

dvdhsu · 2 years ago
Hi, David, founder @ Retool here. We are currently working with law enforcement, and we believe they have corroborating evidence through audio that suggests a deepfake is likely. (Put another way, law enforcement has more evidence than just the employee's testimony.)

(I wish we could blog about this one day... maybe in a few decades, hah. Learning more about the government's surveillance capabilities has been interesting.)

I agree with you on hardware 2FA tokens. We've since ordered them and will start mandating them. The purpose of this blog post is to communicate that what is traditionally considered 2FA isn't actually 2FA if you follow the default Google flow. We're certainly not making any claims that "we are the world's most secure company"; we are just making the claim that "what appears to be MFA isn't always MFA".

(I may have to delete this comment in a bit...)

hnburnsy · 2 years ago
Thanks for all this insight; this is why HN rules. What is your impression of law enforcement? Everyone claims to reach out after an attack, but I've never seen follow-up of successful law enforcement activity resulting in arrests or prosecution. Thanks again.
oldtownroad · 2 years ago
> …we believe they have corroborating evidence through audio that suggests a deepfake is likely…

Does that mean they have audio of the call?

apostacy · 2 years ago
This is an example of Google sabotaging a technology it doesn't like. I'm not saying it is a conspiracy. But by thwarting TOTP like this, Google is benefiting.

I really like TOTP. It gives me more flexibility to control keys on my end. And you can still use a Yubikey to secure your private TOTP key. But you can also choose to copy your private key to multiple hardware tokens without needing anyone's permission. Properly used, you can get most of the benefit of FIDO2 with a lot more flexibility.

I actually recently deployed TOTP, and everyone was quite happy with it. But knowing that Google is syncing private keys around by default, I no longer think we can trust it.
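
Part of why I like it: the whole of RFC 6238 fits in a dozen lines of stdlib Python, so you can see exactly where your keys live. A minimal sketch:

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # RFC 6238: HMAC-SHA1 over the big-endian count of 30-second steps.
        key = base64.b32decode(secret_b32.upper())
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, "sha1").digest()
        offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret seen in many TOTP examples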

alsodumb · 2 years ago
Thanks for the reply! Wasn't expecting one.

Since you might have to delete the reply anyway, can I get a candid answer on why hardware 2FA tokens weren't a part of the default workflow before the incident? Was it concerns about the cost, the recovery modes, or was it just trust in the existing approach?

solatic · 2 years ago
One problem with hardware keys is still SaaS vendor support. There is a very narrow path for effective enforcement: require SSO, then require hardware tokens at the SSO level. But even that is difficult to truly enforce, because the IdP often has "recovery" mechanisms that grant access without a hardware key. Google is also guilty of not adding a claim to the OIDC/SAML response verifying that a hardware token was used to login, so vendors cannot be configured to decide to reject the login because it didn't use a hardware token.

If you have any vendors without SSO (like GitHub, because it's an Enterprise feature), you're lucky if they support hardware tokens (cool, GitHub does) and even luckier if their "require 2FA" option (which GitHub has, per organization) allows you to require hardware keys (which GitHub does not).

Distributing hardware keys to employees is one thing. Mandating them is quite another.
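
If the IdP did send something like the standard "amr" claim (RFC 8176), vendor-side enforcement would be a few lines; a hypothetical sketch with PyJWT, since as noted Google doesn't actually send this today:

    import jwt  # pip install pyjwt

    def require_hardware_key(id_token: str, key, audience: str) -> None:
        claims = jwt.decode(id_token, key=key, algorithms=["RS256"],
                            audience=audience)
        # "hwk" = proof-of-possession of a hardware-secured key (RFC 8176)
        if "hwk" not in claims.get("amr", []):
            raise PermissionError("login did not use a hardware authenticator")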

fragmede · 2 years ago
If your organization is rich enough to buy hardware keys for everybody, but too stingy to pay for GitHub Enterprise, I'm not sure what to say.
richrichardsson · 2 years ago
> unfortunately did provide the attacker one additional multi-factor authentication (MFA) code

How is this Google's fault?

Which rock was this employee living under to not have understood you NEVER give an OTP code to anyone?

nextaccountic · 2 years ago
> the only evidence they possibly could have to support this statement would be the employees testimony

I've set up my phone to record all calls. The employee could have too.

rolobio · 2 years ago
Very sophisticated attack, I would bet most people would fall for this.

I'm surprised Google encourages syncing the codes to the cloud... kind of defeats the purpose. I sync my TOTP between devices using an encrypted backup, even if someone got that file they could not use the codes.

FIDO2 would go a long way to help with this issue. There is no code to share over the phone. FIDO2 can also detect the domain making the request, and will not provide the correct code even if the page looks correct to a human.
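
The domain check is the server verifying the origin that the browser (not the user) stamps into clientDataJSON; a minimal sketch of that one verification step:

    import base64
    import json

    def origin_matches(client_data_b64: str, expected_origin: str) -> bool:
        # A look-alike phishing page produces a different origin here,
        # so the server rejects the assertion no matter what the human saw.
        padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
        data = json.loads(base64.urlsafe_b64decode(padded))
        return (data.get("type") == "webauthn.get"
                and data.get("origin") == expected_origin)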

bawolff · 2 years ago
> I'm surprised Google encourages syncing the codes to the cloud... kind of defeats the purpose.

Depends on what you think the purpose is. People talk about TOTP solving all sorts of problems, but in practise the only one it really solves for most setups is people choosing bad passwords or reusing passwords on other insecure sites. Pretty much every other threat model for it is wishful thinking.

While I also think the design decision is questionable, the gain in security from people not constantly losing their phone probably outweighs, for the average person, the loss of security from it all being in a cloud account (as, for most people, their Google account is probably one of their best-secured accounts).

tomatocracy · 2 years ago
It's also a reasonable defence against naive keylogging techniques - including shoulder-surfing either directly or eg via security cameras. In some places this can be a pretty big threat.
wayfinder · 2 years ago
Well, all Google needed to do to make this at least a little harder was encrypt the backup with a password.

The user can still put in an insecure password but uploading all your 2FA tokens to your primary email unencrypted is basically willingly putting all your eggs in one basket.

luma · 2 years ago
TOTP is helpful when you don’t fully trust the input process. If rogue javascript is grabbing creds from your page, or the client has a keylogger they don’t know about, TOTP can help.
cottsak · 2 years ago
great insight:

> in practise the only one it really solves for most setups is people choosing bad passwords or reusing passwords on other insecure sites. Pretty much every other threat model for it is wishful thinking.

Why is no one talking about this?

softfalcon · 2 years ago
>FIDO2 can also detect the domain making the request, and will not provide the correct code even it the page looks correct to a human.

I could not agree more with this sentiment! We need more of this kind of automated checking going on for users. I'm tired of seeing "just check for typos in the URL" or "make sure it's the real site!" advice given to the average user.

People are not able to do this even when they know how to protect themselves. Humans tire easily and are often fallible. We need more tooling like FIDO2 to automate away this problem for us. I hope the adoption of it will go smoothly in years to come.

miki123211 · 2 years ago
The problem with Fido (and other such solutions, including smartphone-based passkeys) is that they make things extremely hard if you're poor / homeless / in an unsafe / violent family situation and therefore change devices often. It's mostly a non-issue for Silicon Valley tech employees working solely on their corporate laptops, and U2F is perfect for that use-case, but these concerns make MFA a non-starter for the wider population. We could neatly sidestep all of these issues with cloud-based fingerprint readers, but the privacy advocates won't ever let that happen.
adamckay · 2 years ago
> I'm surprised Google encourages syncing the codes to the cloud... kind of defeats the purpose

Probably so when you upgrade/lose your phone you don't otherwise lose your MFA tokens. Yes, you're meant to note down some recovery MFA codes when you first set it up, but how many "normal people" do that?

Master_Odin · 2 years ago
A number of sites I've signed up for recently have required TOTP to be set up, but did not provide backup codes at the same time. There's a lot of iffy implementations out there.
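
Which is baffling, because backup codes are cheap to do right; a sketch of the usual pattern (the code format is illustrative):

    import hashlib
    import secrets

    def make_backup_codes(n: int = 10) -> list[str]:
        # One-time recovery codes: show the plaintext exactly once,
        # store only the hashes, and delete each hash after use.
        return [f"{secrets.token_hex(2)}-{secrets.token_hex(2)}" for _ in range(n)]

    def hash_code(code: str) -> str:
        return hashlib.sha256(code.encode()).hexdigest()
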
aeyes · 2 years ago
With Google Authenticator some years ago, it wasn't possible to restore your codes even if you had a local backup of the device. I'm not sure if that's still the case today, but it was a common issue which we saw at our service desk before we switched to a different solution.
duderific · 2 years ago
In my company, such a communication would never come via a text, so that would be a red flag immediately. All such communications come via email, and we have pretty sophisticated vetting in place to ensure that no such "sketchy" emails even arrive in our inboxes in the first place.

Additionally, we have a program in place which periodically "baits" us with fake phishing emails, so we're constantly on the lookout for anything out of the ordinary.

I'm not sure what the punishment is for clicking on one of these links in a fake phishing email, but it's likely that you have to take the security training again, so there's a strong disincentive in place.

rainsford · 2 years ago
After initially thinking it was a good idea, I've come to disagree pretty strongly with the idea of phish-baiting employees. Telling employees not to click suspicious links is fine, but taking it a step further by constantly "testing" them feels like it's placing an unfair burden on the employee. As this attack makes clear, well-done targeted phishing can be pretty effective and hard for every employee to detect (and you need every employee to detect it).

Company security should be based on the assumption that someone will click a phishing link, and should make that a non-catastrophic event, rather than trying to make employees worried to ever click on anything. And as has been pointed out, that seems a likely result of that sort of testing. If I get put in a penalty box for clicking on fake links from HR or IT, I'm probably going to stop clicking on real ones as well, which doesn't seem like a desirable outcome.

Guvante · 2 years ago
On the other hand, having your device die means that, without cloud backup, you either lose access or whoever was relying on that 2FA needs to fall back on something else to authenticate you.

After all, if I can bypass 2FA with my email, whether 2FA is backed up to the cloud doesn't matter from a security standpoint.

Certainly I would agree that an opt-out for providers of codes would be nice, even if it's an auto-populated checkbox based on the QR code.

pushcx · 2 years ago
The workaround I've seen is to issue a user two 2FA keys, one for regular use and one to store securely as a backup. If they lose their primary key, they have the backup until a new backup can be sent to them. Using a backup may prompt partial or total restriction until a security check can be done. If they lose both, yes, there needs to be some kind of reauth. In a workplace context like this it's straightforward to design a high-quality reauth procedure.
andersa · 2 years ago
They could do what Authy does. Codes are backed up to the cloud, so you're not completely fucked if the phone is stolen. But the backup is encrypted, and to access it on a replacement device you must enter the backup password.
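
That pattern (derive a key from the password, encrypt before upload) is a few lines with the cryptography package; a rough sketch of the idea, not Authy's actual scheme:

    import base64
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def _key(password: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                         iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    def encrypt_backup(plaintext: bytes, password: str) -> bytes:
        # Prepend the random salt so the same password can decrypt later.
        salt = os.urandom(16)
        return salt + Fernet(_key(password, salt)).encrypt(plaintext)

    def decrypt_backup(blob: bytes, password: str) -> bytes:
        salt, token = blob[:16], blob[16:]
        return Fernet(_key(password, salt)).decrypt(token)
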
victor106 · 2 years ago
They could’ve just had employees use Okta Verify as opposed to Google Authenticator
rakkhi · 2 years ago
Sophisticated... ok

I mean it's a great reason to use U2F / Webauthn second factor that cannot be entered into a dodgy site

https://rakkhi.substack.com/p/how-to-make-phishing-impossibl...

gmerc · 2 years ago
Not surprised. A team at Google identified this as a vector to juice growth, submitted the metrics which now govern their PSC and didn’t add the necessary counter-metrics to measure negative effects.

That’s normal because that’s how the game is played. All the way up the chain to the org leader, there is no incentive to not do this.

rossjudson · 2 years ago
You live in a funny alternate reality. You should consider what it might be like to live in one where everyone else isn't dumber than you.

I will tell you a truth: People who think they're smarter than everyone else are generally missing important context or information.

halfcat · 2 years ago
> I sync my TOTP between devices using an encrypted backup, even if someone got that file they could not use the codes.

What do you use to accomplish this?

mos_basik · 2 years ago
Not OP, but I store my TOTP secrets along with all my other passwords in a KeePass database and sync the encrypted database to my devices with Dropbox. All the clients I use to open a KeePass database can generate TOTP codes from the secrets at this point, so I don't use a dedicated TOTP app like Google Authenticator or Authy anymore.

Not multifactor anymore, but also not vulnerable to catastrophic phone destruction or Google account banning. It is what it is.

fn-mote · 2 years ago
After the sync, you have exactly two devices that you can use to answer the MFA challenge, instead of one. It's a backup.
hn_throwaway_99 · 2 years ago
> Very sophisticated attack, I would bet most people would fall for this.

No. If you think people at your company would fall for this, then IMO you have bad security training. The simple mantra of "hang up, look up, call back" (https://krebsonsecurity.com/2020/04/when-in-doubt-hang-up-lo...) would have prevented this.

Literally like 99% of social engineering attacks would be prevented this way. Seriously, make a little "hang up, look up, call back" jingle for your company. Test it frequently with phishing tests. It is possible in my opinion to make this an ingrained part of your corporate culture.

Agree that things like security keys should be in use (and given Retool's business I'm pretty shocked that they weren't), but there are other places that the "hang up, look up, call back" mantra is important, e.g. in other cases where finance people have been tricked into sending wires to fraudsters.

pvg · 2 years ago
The ineffectiveness of "security training" is precisely why TOTP is on its way out - you couldn't even train Google employees to avoid getting compromised.
rainsford · 2 years ago
But aside from beating employees over the head with it, how many companies actually operate in a way that encourages and reinforces such an approach? I'd bet it's not many, and honestly if it's a non-zero number I'd be at least a bit surprised.

You can have all the security training in the world, but every time IT or HR or whoever legitimately reaches out to an employee, especially when it's not based on something initiated by the employee, the company is training exactly the opposite behavior Krebs is suggesting. Hanging up and calling back will likely at minimum annoy the caller and inconvenience the employee. Is the company culture accepting of that, or even better are company policies and systems designed to avoid such a scenario? If a C-suite person calls you asking for some information and you hang up and call them back, are they going to congratulate you on how diligently you are following your security training?

You're not wrong that the Krebs advice would help prevent most phishing, but I'd argue it has to be an idea you design your company around, not just a matter of security training. Otherwise you're putting the burden on employees to compensate for an insecure company, often at their own cost.

yesimahuman · 2 years ago
This fails to satisfy one of the core lessons here: trust nothing, not even your own training and culture.
roywiggins · 2 years ago
They just have to catch someone half-awake, or already very stressed out, or otherwise impaired once.
devjab · 2 years ago
I’ve done 6 different versions of “security training” as well as “GDPR training” over the past few years. I think they are mostly tools for draining company money and wasting time. About the only thing I remember from any of it is when I got some GDPR answer wrong because I didn’t realize shoe size was personal information, and it made me laugh that I had failed the whatever-quiz right after being GDPR certified by some other training tool.

If we look at the actual data, we have seen a reduction in employees who fall for phishing emails. Unfortunately we can’t really tell if it’s the training or if it’s the company story about all those millions that got transferred out of the company when someone fell for a CEO phishing scam. I’m inclined to think it’s the latter, considering how many people you can witness having the training videos run without sound (or anyone paying attention) when you walk around on the days of a new video.

The only way to really combat this isn’t with training and awareness, it’s with better security tools. People are going to do stupid things when they are stressed out and it’s Thursday afternoon, so it’s better to make sure they at least need an MFA factor that can’t be hacked as easily as SMS, MFA spamming and so on.

tptacek · 2 years ago
> We use OTPs extensively at Retool: it’s how we authenticate into Google and Okta, how we authenticate into our internal VPN, and how we authenticate into our own internal instances of Retool

They should stop using OTPs. OTPs are obsolete. For the past decade, the industry has been migrating from OTPs to phishing-proof authenticators: U2F, then WebAuthn, and now Passkeys†. The entire motivation for these new 2FA schemes is that OTPs are susceptible to phishing, and it is practically impossible to prevent phishing attacks with real user populations, even (as Google discovered with internal studies) with ultra-technical user bases.

TOTP is dead. SMS is whatever "past dead" is. Whatever your system of record is for authentication (Okta, Google, what have you), it needs to require phishing-resistant authentication.

I'm not high-horsing this; until recently, it would have been complicated to do something other than TOTP with our service as well (though not internally). My only concern is the present tense in this post about OTPs, and the diagnosis of the problem this post reached. The problem here isn't software custody of secrets. It's authenticators that only authenticate one way, from the user to the service. That's the problem hardware keys fixed, and you can fix that same problem in software.

† All three are closely related, and an investment you made in U2F in 2014 would still be paying off today.

corford · 2 years ago
For others new to WebAuthn and Passkeys (like me), worth noting that Passkeys come with important privacy/ease-of-use trade-offs (nice summary here: https://blog.passwordless.id/webauthn-vs-passkeys)

Less of an issue though once more non-platform vendors start supporting them (e.g. Bitwarden https://bitwarden.com/passwordless-passkeys/)

snagg · 2 years ago
Worth noting that implementing FIDO2/Passkeys is more challenging than it looks both from a UX standpoint and from a threat modeling standpoint. We tried to cover some of this in a blog post, in case anybody is interested: https://www.slashid.dev/blog/passkeys-security-implementatio...
unethical_ban · 2 years ago
Are there self-hosted versions of something akin to what Okta does? Push notifications with a validation step that the actual user initiated the authn request?

Knowing how dead simple TOTP is technically, it's blown my mind that more companies don't host their own TOTP authn server.
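
The server side really is tiny; verification is the same HMAC with a small window for clock drift (a sketch, assuming the shared secret was already enrolled):

    import base64
    import hmac
    import struct
    import time

    def _totp_at(key: bytes, step: int, digits: int = 6) -> str:
        digest = hmac.new(key, struct.pack(">Q", step), "sha1").digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    def verify(secret_b32: str, submitted: str, drift: int = 1) -> bool:
        # Accept the current 30s step plus +/- `drift` steps of clock skew.
        key = base64.b32decode(secret_b32.upper())
        now = int(time.time()) // 30
        return any(hmac.compare_digest(_totp_at(key, now + d), submitted)
                   for d in range(-drift, drift + 1))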

akerl_ · 2 years ago
Most places don't host TOTP auth servers because generally you want to bundle up the whole authn/authz package. Since you need your MFA flow to be connected to your primary auth flow, having one provider for one and then self-hosting the other is generally not smooth or easy.

Push notifications are also, in my experience, a massive pain (both in terms of the user flow where you have to pull out your phone, and in terms of running infra that's wired up to send pushes to whatever device types your users have). Notably, now you need a plan for users that picked a weird smartphone (or don't have a smartphone).

The better option is to go for passwordless auth, which you could self-host with something like Authentik or Keycloak, and then it handles the full auth flow.

drx · 2 years ago
What would be your recommendation for replacing TOTP today?
akerl_ · 2 years ago
FIDO2
rahidz · 2 years ago
>The caller claimed to be one of the members of the IT team, and deepfaked our employee’s actual voice. The voice was familiar with the floor plan of the office, coworkers, and internal processes of the company.

Wow that is quite sophisticated.

oldtownroad · 2 years ago
And obviously untrue. If you’re an employee who just caused a security incident of course you’re going to make it seem as sophisticated as possible but considering Retool has hundreds of employees from all over the world, the range of accents is going to be such that any voice will sound like that of at least one employee.

Are you close enough to members of your IT team to recognise their voices but not be close enough to them to make any sort of small talk that the attacker wouldn’t be able to respond to convincingly?

If you’re an attacker who can do a convincing French accent, pick an IT employee from LinkedIn with a French name. No need to do the hard work of tracking down source audio for a deepfake when voices are the least distinguishable part of our identity.

Every story about someone being conned over the phone now includes a line about deepfakes but these exact attacks have been happening for decades.

luma · 2 years ago
Fully agreed, saying a deepfaked voice was involved without hard proof is deflecting blame by way of claiming magic was involved.
bombcar · 2 years ago
Sophisticated enough that I’d just suspect the employee unless there was additional proof.
skeaker · 2 years ago
Highly reminiscent of the sort of social engineering hacks Mitnick would run. In his autobiography he would pull this sort of thing by starting small and simply asking lower ranking employees over the phone for low risk info like their name and things like that so when it came time to call higher ranking ones he could have trustworthy-sounding info to call back to. The attack is clever for sure, but not necessarily any more sophisticated than multiple well-placed calls.
tough · 2 years ago
inside job?
dmazzoni · 2 years ago
Anything's possible, but the simplest explanation (per Occam's razor) is just that the employee was fooled.

Is it plausible that if a good social engineer cold-called a bunch of employees, they'd eventually get one to reveal some info? Yes, it happens quite frequently.

So any suggestion that it was an inside job, or used deep fakes, or something like that would require additional evidence.

Kevin Mitnick's "The Art of Deception" covers this extensively. The first few calls to employees wouldn't be attempts to actually get the secret info, it'd be to get inside lingo so that future calls would sound like they were from the inside.

For example, the article says the caller was familiar with the floor plan of the office.

The first call might be something like "Hey, I'm a new employee. Where are the IT staff, are they on our floor?" - they might learn "What do you mean, everyone's on the 2nd floor, we don't have any other floors. IT are on the other side of the elevators from us."

They hang up, and now with their next call they can pretend to be someone from IT and say something about the floor plan to sound more convincing.

mistrial9 · 2 years ago
how's that Zero Trust architecture working out for everyone ?
batmansmk · 2 years ago
Are the claims of a deepfake and intimate knowledge of procedures based on the sole testimony of the employee who oopsed terribly? This is a novelisation of events.

Retool needs to revise the basic security posture. There is no point in complicated technology if the warden just gives the key away.

hn_throwaway_99 · 2 years ago
> Retool needs to revise the basic security posture.

Couldn't agree more. TBH I thought this post was an exercise in blame shifting, trying to blame Google.

> We use OTPs extensively at Retool: it’s how we authenticate into Google and Okta, how we authenticate into our internal VPN, and how we authenticate into our own internal instances of Retool. The fact that access to a Google account immediately gave access to all MFA tokens held within that account is the major reason why the attacker was able to get into our internal systems.

Google Workspace makes it very easy to set up "Advanced Protection" on accounts, in which case it requires using a hardware key as a second factor, instead of a phishable security code. Given Retool's business of hosting admin apps for lots of other companies, they should have known they'd be a prime target for something like this, and not requiring hardware keys is pretty inexcusable here.

dotty- · 2 years ago
> Google Workspace makes it very easy to set up "Advanced Protection" on accounts, in which case it requires using a hardware key as a second factor, instead of a phishable security code.

This isn't immediately actionable for every company. I agree Retool should have hardware keys given their business, but at my company with 170 users we just haven't gotten around to figuring out the distribution and adoption of hardware keys internationally. We're also a Google Workspace customer. I think it's stupid for a company like Google, the company designing these widely used security apps for millions of users, to allow for cloud syncing without allowing administrators the ability to simply turn off the feature on a managed account. Google Workspace actually lacks a lot of granular security features, something I wish they did better.

What is a company like mine meant to do here to counter this problem?

edit: changed "viable" for "immediately actionable". It's easy for Google to change their apps. Not for every company to change their practices.

dvdhsu · 2 years ago
It is not based on the sole testimony of the employee. (Sorry I can't go into more details.)
dmazzoni · 2 years ago
Employees are only human. Even smart, savvy, well-trained employees can be fooled by good social engineering every once in a while.

The key to good security is layering. Attackers should need to break through multiple layers in order to get access to critical systems.

Compromising one employee's account should have granted them only limited access. The fact that this attack enabled them to get access to all of that employee's MFA tokens sounds like indeed the right thing to focus on.

brunojppb · 2 years ago
Fantastic write-up. Major props for disclosing the details of the attack in a very accessible way.

It is great that this kind of security incident post-mortem is being shared. This will help the community to level up in many ways, especially given that its content is super accessible and not heavily leaning on tech jargon.

hn_throwaway_99 · 2 years ago
I disagree. I appreciate the level of detail, but I don't appreciate Retool trying to shift the blame to Google, and only putting a blurb in the end about using FIDO2. They should have been using hardware keys years ago.
dvdhsu · 2 years ago
Hi, I'm sorry you felt that way. "Shifting blame to Google" is absolutely not our intention, and if you have any recommendations on how to make the blog post more clear, please do let me know. (We're happy to change it so it reads less like that.)

I do agree that we should start using hardware keys (which we started last week).

The goal of this blog post was to make clear to others that Google Authenticator (through the default onboarding flow) syncs MFA codes to the cloud. This is unexpected (hence the title, "When MFA isn't MFA"), and something we think more people should be aware of.

duderific · 2 years ago
It was also a bit weird how they kept emphasizing how their on-prem installations were not affected, as if that lessens the severity somehow. It's like duh, that's the whole point of on-prem deployments.
AYBABTME · 2 years ago
To deepfake the voice of an actual employee, they would need enough recorded content of that employee's voice... and I would think someone doing admin things on their platform isn't also in DevRel with a lot of their voice uploaded online for anyone to use. So it smells like someone with close physical proximity to the company would be involved.
gabereiser · 2 years ago
There’s a lot of ways to get clips of recordings of someone’s voice. You can get that if they ever spoke at a conference or on a video. Numerous other ways I won’t list here.
themagician · 2 years ago
Probably wasn't a "deepfake" just someone decent with impressions and a $99 mixer. After compression this will be more than good enough to fool just about anyone. No deepfake is needed. Just call the person once and record a 30 second phone call. Tell them you are delivering some package and need them to confirm their address.
deepspace · 2 years ago
That is more plausible. But the embellishment about the phisher knowing the layout of the office etc makes me think it was just straight up an inside job, with the employee willingly handing over the OTP and trying to cover their tracks.
V__ · 2 years ago
One possibility would be to just call the employee and record their voice. One could pretend to be a headhunter.
skeaker · 2 years ago
This would almost certainly be it. Calling someone to record them and using their voice later to impersonate them was done even before deep-fake voices were a concept. With the tools available now, even a short call + the grainy connection of a phone voice line would be more than enough to make a simulated voice work.
rlt · 2 years ago
I'm already cautious about answering calls from unknown numbers. This could be a good reason to be even more cautious.