Here's a Reuters report from June 2, which includes a link to a May 14 SEC filing:
> Cryptocurrency exchange Coinbase knew as far back as January about a customer data leak at an outsourcing company connected to a larger breach estimated to cost up to $400 million, six people familiar with the matter told Reuters.
> On May 11, 2025, Coinbase, Inc., a subsidiary of Coinbase Global, Inc. (“Coinbase” or the “Company”), received an email communication from an unknown threat actor claiming to have obtained information about certain Coinbase customer accounts, as well as internal Coinbase documentation, including materials relating to customer-service and account-management systems.
Very interesting... January 7th is when I reported it to them, so that lines up. I suspect I wasn't the very first person; the person I spoke with on the phone had a confidence I wouldn't expect on a first try.
Most likely a business process outsourcing (BPO) firm. They get contracts from every kind of company you've ever heard of, lie about their cybersecurity practices, and then rebrand if they get caught.
I got rung in the UK I think a month ago from someone claiming to be from Coinbase. I told them I only had about £5 of Bitcoin cash in my account (which was true), and they immediately lost interest and said a forthcoming email would handle the matter.
They also asked if I had cold storage. I told them I had a fridge (also true).
The author got a phishing call and reported it. Coinbase likely has a deluge of phishing complaints, as criminals know their customers are vulnerable and target their customers regularly. The caller knowing account details is likely not unique in those complaints; customers accidentally leak those all the time. Some of the details the attacker knew could have been sourced from other data breaches. At the time of complaint, the company probably interpreted the report as yet another customer handling their own data poorly.
Phishing is so pervasive that I wouldn't be surprised if the author was hit by a different attack.
My first thought was that someone tied a blockchain transaction to my name and then traced it backwards. But they also knew my ETH and BTC balances, and the date the account was opened. You might be able to figure out the open date by looking at the blockchain, but I could never figure out how they would know balances for two unrelated cryptos without some kind of Coinbase compromise.
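For what it's worth, the on-chain half of that reasoning is easy to sketch: given a public ledger, anyone can compute an address's first activity date and running balance. A toy illustration in Python (the ledger data and address names here are made up for the example; a real tracer would walk blocks via a node or indexer, but the principle is the same):

```python
from datetime import date

# Toy public ledger: (day, sender, receiver, amount).
LEDGER = [
    (date(2021, 3, 1), "exchange_hot", "addr_a", 0.5),
    (date(2021, 6, 9), "addr_a", "addr_b", 0.2),
    (date(2022, 1, 4), "exchange_hot", "addr_a", 0.1),
]

def profile(address, ledger):
    """First-seen date and net balance for one address, from public data only."""
    first_seen, balance = None, 0.0
    for day, sender, receiver, amount in ledger:
        if address in (sender, receiver):
            first_seen = first_seen or day
        if receiver == address:
            balance += amount
        if sender == address:
            balance -= amount
    return first_seen, round(balance, 8)

print(profile("addr_a", LEDGER))  # (datetime.date(2021, 3, 1), 0.4)
```

Which is exactly the limit of the technique: it works per on-chain address, but balances held inside an exchange's omnibus custody wallet never appear per-customer on chain, so knowing both ETH and BTC account balances points at the exchange side.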
I don't know if I've gotten calls (I frequently don't answer if I don't have the contact saved), but I've gotten Coinbase phishing emails for years. It's certainly not a new thing. The attacker might just be tracking the transactions on chain.
Once did some programming/networking work for a company that handled the networking of an office-sharing building that Coinbase was running out of. Early in my work there I noticed that the company had its admin passwords written on a whiteboard -- visible from the hallway because they had glass walls. So I sent them an email asking that they remove it (I billed them for it).
Their fix was to put a piece of paper over the passwords.
It's sad they call it cryptocurrency when it's just dumb-ass finance with play money that idiots ascribe real value to, and the old saying holds true... the rich get richer and the poor are born without assholes. I'll die happy having never participated.
Not saying it is untrue, but it is definitely true that Coinbase has never lost customer funds while operating in an environment with 0 safety nets and being one of the most lucrative targets.
This leak of customer data suggests that they should treat it with as much obsession as they do their private keys.
That's not actually true; back in the day Coinbase used Bitfinex, and they were using them when Bitfinex got all that BTC stolen. Technically everyone, including Coinbase, lost assets in that hack. Coinbase was large and scary enough at the time to force Bitfinex to keep them whole instead of applying the 36% haircut, but I'd argue that amounts to recovery rather than never losing funds in the first place. [1, 2]
GP is saying that they were already one of Coinbase's vendors (they did the networking/IT setup for Coinbase's office). Whether you'd tolerate that kind of behavior from a vendor is one thing, but for an existing vendor relationship I think adding a few billable hours for "I found this issue in your network and documented and reported it for you" to an existing contract is not particularly unreasonable.
They are lucky they just got a bill and not a terminated contract. Consulting companies I have worked for would have dropped them immediately because we don't want clients with that kind of risk. Massive red flag that signals management is non-existent, incompetent, or checked out. That is egregious negligence.
The "recordings" are of a phisher attempting to get information from the author. It proves nothing about what Coinbase knew.
The author turned the information over to Coinbase, but that doesn't prove Coinbase knew about their breach. The customer could have leaked their account details in some other way.
I sent the phone recording and emails to coinbase, and they acknowledged them saying "This report is super robust and gives us a lot to look into. We are investigating this scammer now."
The recordings don't prove anything about what Coinbase knew.
I stand by my statement that the title is clickbait, as it's misleading on two fronts:
- It's the email, not the call recording that proves what Coinbase knew, but "recordings prove" sounds more sensational
- The email proves that Coinbase was aware of a sophisticated attack against a single user. You didn't have enough information to prove that there was a large scale leak of Coinbase customer data. There are sophisticated attacks against individual Coinbase users all the time due to the value of the accounts there.
It seems like you did a great job collecting info and reporting it. Still, how do you know that the info was obtained via Coinbase? Certainly they are a likely vector but you are too, and maybe there are others.
I don't know if he wrote it via AI, but he repeats himself over and over again. It could have been 1/3 the length and still conveyed the same amount of information.
I'm not trying to be recalcitrant; I am genuinely curious. The reason I ask is that no one talks like an LLM, but LLMs do talk like someone. LLMs learned to mimic human speech patterns, and some unlucky soul(s) out there have had their voice stolen. Earlier versions of LLMs that more closely followed the pattern and structure of a Wikipedia entry were mimicking a style that was based on someone else's style, and given that some wiki users had prolific levels of contributions, much of their naturally generated text would register as highly likely to be "AI" via those bullshit AI detector tools.
So, given what we know of LLMs (transformers at least) at this stage, it seems more likely to me that current speech patterns are again mimicry of someone's style rather than an organically grown/developed thing that is personal to the LLM.
Looks like AI to me too. Em dashes (albeit nonstandard) and the ‘it’s not just x, it’s y’ ending phrases were everywhere. Harder to put into words but there’s a sense of grandiosity in the article too.
Not saying the article is bad, it seems pretty good. Just that there are indications
This blog post isn't human speech, it's typical AI slop. (heh, sorry.)
Way too verbose to get the point across, excessive usage of un/ordered bullets, em dashes, "what i reported / what coinbase got wrong", it all reeks of slop.
Once you notice these micro-patterns, you can't unsee them.
Would you like me to create a cheat sheet for you with these tell tale signs so you have it for future reference?
Sorry but I think you just don't know a lot about LLMs. Why did they start spamming code with emojis? It's not because that is what people actually do, something that is in the training data. It's because someone reinforcement learned the LLM to do it by asking clueless people if they prefer code with emojis.
And so at this point the excessive bullet points and similar filler trash is also just an expression of whatever stupid people think they prefer.
Maybe I'm being too harsh, and it's not that the raters are stupid in this constellation; rather it's the ones thinking you could improve the LLM by asking them to make a few very thin judgements.
Just chiming in here - any time I've written something online that considers things from multiple angles or presents more detailed analysis, the likelihood that someone will ask if I just used ChatGPT goes way up. I worry that people have gotten really used to short, easily digestible replies and conflate that with "human". Because of course it would be crazy for a human to expend "that much effort" on something /s.
EDIT: having said that, many of the other articles on the blog do look like what would come from AI assistance. Stuff like pervasive emojis, overuse of bulleted lists, excessive use of very small sections with headers, art that certainly appears similar in style to AI generated assets that I've seen, etc. If anything, if AI was used in this article, it's way less intrusive than in the other articles on the blog.
I know I shouldn’t pile on with respect to the AI Slop Signature Style, but in the hopes of helping people rein in the AI-trash-filter excesses and avoid reactions like these…
The sentence-level stuff was somewhat improved compared to whatever “jaunty Linked-In Voice” prompt people have been using. You know, the one that calls for clipped repetitive phrases, needless rhetorical questions, dimestore mystery framing, faux-casual tone, and some out-of-proportion “moral of the story.” All of that’s better here.
But there’s a good ways left to go still. The endless bullet lists, the “red flags,” the weirdly toothless faux drama (“The Call That Changed Everything”, “Data Catastrophe: The 2025 Cyber Fallout”), and the Frankensteined purposes (“You can still protect yourself from falling victim to the scams that follow,” “The Timeline That Doesn't Make Sense,” etc.)…
The biggest thing that stands out to me here (besides the essay being five different-but-duplicative prompt/response sessions bolted together) is the assertions/conclusions that would mean something if real people drew them, but that don’t follow from the specifics. Consider:
“The Timeline That Doesn't Make Sense
Here's where the story gets interesting—and troubling:
[they made a report, heard back that it was being investigated, didn’t get individual responses to their follow-ups in the immediate days after, the result of the larger investigation was announced 4 months later]”
Disappointing, sure. And definitely frustrating. But like… “doesn’t make sense?” How not so? Is it really surprising or unreasonable that it takes a large organization time to run a major investigation into a foreign contractor, one with law enforcement and regulatory implications as well as 9-figure customer-facing damages? Doesn’t it make sense (even if it’s disappointing), when stuff that serious and complex happens, that they wait until they’re sure before they say something to an individual customer?
I’m not saying it’s good customer service (they could at least drop a reply with “the investigation is ongoing and we can’t comment til it’s done”). There’s lots of words we could use to capture the suckage besides “doesn’t make sense.” My issue is more that the AI presents it as “interesting—and troubling; doesn’t make sense” when those things don’t really follow directly from the bullet list of facts afterward.
Each big categorical that the AI introduced this way just… doesn’t quite match what it purports to describe. I’m not sure exactly how to pin it down, but it’s as if it’s making its judgments entirely without considering the broader context… which I guess is exactly what it’s doing.
Interesting timeline, but nothing here proves, or even strongly indicates, that Coinbase “knew about the breach” from this one report.
Screenscraping malware is fairly common, and it’s not unreasonable for an analyst to look at a report like this and assume that the customer got popped instead of them.
Customers get popped all the time, and have a tendency to blame the proximate corporation…
That's true, but in this case I got a response from the head of trust and safety after I sent the phone recording, email + email headers, saying "This report is super robust and gives us a lot to look into. We are investigating this scammer now."
Not sure if the op is reading, but I also detected the same Coinbase hack around the same timeline. From what I can tell, literally everything was compromised, because even their Discord channel's API keys were compromised and were finally reset around April or May. This means their central secrets manager was likely compromised too.
> Cryptocurrency exchange Coinbase knew as far back as January about a customer data leak at an outsourcing company connected to a larger breach estimated to cost up to $400 million, six people familiar with the matter told Reuters.
https://www.reuters.com/sustainability/boards-policy-regulat...
> On May 11, 2025, Coinbase, Inc., a subsidiary of Coinbase Global, Inc. (“Coinbase” or the “Company”), received an email communication from an unknown threat actor claiming to have obtained information about certain Coinbase customer accounts, as well as internal Coinbase documentation, including materials relating to customer-service and account-management systems.
https://www.sec.gov/Archives/edgar/data/1679788/000167978825...
From what I've seen, this is going to be a common subheading to a lot of these stories.
> At the time of complaint, the company probably interpreted the report as yet another customer handling their own data poorly.
There's tons of options. Malware, evil maid, shoulder surfing, email compromise, improper disposal of printouts, prior phishing attack, accidental disclosure.
> Their fix was to put a piece of paper over the passwords.
What a time.
Bitcoin, and really fintech as a whole, are beyond reckless.
With Bitcoin you do not get government bailouts like what happened with the beyond reckless banks in 2008.
Even leaving your laptop unlocked for seconds in the office would have someone /pwn it in slack and get flagged by security.
If there’s one thing they took extremely seriously it was data security.
[1] https://www.kalzumeus.com/2019/10/28/tether-and-bitfinex
[2] https://x.com/nathanielpopper/status/933130228175552513
Sending unsolicited bills for unrequested services is a great way to make sure nobody takes your email seriously
The "recordings" are of a phisher attempting to get information from the author. It proves nothing about what Coinbase knew.
The author turned the information over to Coinbase, but that doesn't prove Coinbase knew about their breach. The customer could have leaked their account details in some other way.
I don't know why you think acknowledgement of your report is concrete evidence that Coinbase knew about its breach months before it was disclosed.