AT&T has 110 million customers. Let's be optimistic and assume that each customer only has to spend one minute of extra time managing their account due to the break-in. That is more than 209 years of lost time.
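That back-of-envelope figure checks out; a quick sanity check (the one-minute-per-customer figure is the assumption above, not a measured number):

```python
# 110 million customers, one minute of wasted time each,
# expressed as years of collective lost time.
customers = 110_000_000
minutes_lost = customers * 1  # one minute per customer (assumed)
years_lost = minutes_lost / (60 * 24 * 365)  # minutes -> years
print(round(years_lost, 1))  # ~209.3 years
```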
Laws related to data breaches need to have much sharper teeth. Companies are going to do the bare minimum when it comes to securing data as long as breaches have almost no real consequences. Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible. Maybe have fines that are so massive that company leadership and stockholders face real consequences.
It surprises me that there isn't a single comment pointing out that corporations like AT&T don't collect all that data for fun. It actually costs them a lot of money, but they're legally required to by the government. While everyone is blaming the company, did you not take a second to contemplate how weird it is that you're fine with the government (and now everyone else as well) getting a record of all your phone activity? I'm old; back in my youth we'd have referred to that as a dystopian surveillance state.
There's no federal law requiring AT&T to hold onto this data.
There's possibly a FISA court requirement (too secret to reveal), but AT&T has long been an exceedingly willing part of the gov's spying apparatus. It fed these records and Internet data to the feds without any court order, and only escaped legal trouble when Obama, contrary to his campaign promises, gave AT&T, Verizon and others retroactive immunity.
Being required to do something doesn't justify doing it poorly. AT&T brought in over $3 billion with a B of profit with a P in Q1 2024. They have more than enough money to secure their systems. They're not struggling. In March of this year they bought back 157M of their stock. They could have instead put that money towards security, but they didn't: they put it towards enriching shareholders.
Banks are required to maintain financial transaction records.
Is the argument that governments don't have a good reason to mandate record collection?
Why can't I ask my government to keep me safe from terrorists but also expect that companies will not just be careless with the data they collect as part of that?
Pish posh. They also sell that data at an incredible markup – and without the knowledge of their customers – to anyone who'll pay, including governments and their cutouts.
I've never heard of this, and cursory web searches don't seem to be turning up anything relevant (although that's admittedly not saying much with the state of search lately). Can you explain how the law requires this level of data retention?
"legally required by the government" to keep securely. If you can't keep to the rules don't play the game. I'm sure any other telecom would be glad to get the market share.
That's a good point. Had they valued the citizens' privacy they would have done the opposite, that is, made it illegal for network providers to store customer data that is not essential for providing their services. But I guess creating a dystopian surveillance state is more of a priority.
Sure - pretty well every corporation you purchase a service from is required to store your credit card information as well. But there are stiff penalties from the government and credit card processors for unauthorized access to that information; consequently, it's rarely stolen.
Your address, cell metadata, phone number, email address, and passwords are leaked pretty well constantly though.
It's not that corporations are incompetent. The laws and regulations mean it's not worth the cost to treat your personal information with any real respect.
do yourself a favor and accept that phone records have never not been recorded and the data is mostly available for purchase. the company is to blame because they are complicit or negligent in the bespoke surveillance state, probably both.
welcome to a post 9/11 world. privacy has been dying for a long time. the general population doesn't care anymore. they freely give up everything to big tech anyways.
> how weird it is that you're fine with the government getting a record of all your phone activity
I don't like it, but accept it as the lesser evil. I'm from Europe and I believe the reported numbers of prevented terror attacks. The agencies need data access for that. Not good, but necessary.
But are you aware that Meta, Google, Apple, MS, etc. collect every kind of information about every user of Android, iPhone or WhatsApp, Insta, Facebook, Windows? Phone manufacturers and huge apps like TikTok do as well. The kind and size of that data is crazy beyond imagination. I don't care if the government can get access to my WhatsApp messages when some of the most irresponsible companies collect and use everything to their advantage. Are you really that naive and think that Meta doesn't analyse their gigantic data lake, including billions of WhatsApp messages, to predict the results of elections? That is the real danger to democracy.
Hurting the shareholder is the only option to actually fix anything. Until the C-suite and board are forced to face the music caused by rich people being parted from their money, they'll just continue patting themselves on the back and giving themselves bonuses.
If bankruptcy can clear liabilities then your suggestion won't help. The shareholders are usually gone by the time the bill comes due: it's often cheaper to go bankrupt. And there's a whole private equity industry revolving around taking dirty liabilities and slowly bankrupting a company to squeeze the last dollar out before shutting down.
Look at the same problem with environmental disasters that were created by corporations. The problem with security liabilities is similar: externalities are hard to get shareholders to pay for.
Shareholders can vote and decide the direction of a company. They should also be held liable for any problems the company causes.
If the company is fined, it should come out of company and then shareholder pockets. I might even add that courts should be able to award damages by directly fining shareholders.
If a company does something severely illegal then very large shareholders should risk jail time.
It’s your company after all as a shareholder. You own it.
It’s no different than if your dog bites someone or your child breaks the law. You have to pay the fines.
The people “whose negligence made this possible” are probably just rank-and-file employees. Careful what you wish for. I know I sure wouldn’t want to be legally liable if my software were vulnerable to something I didn’t know about.
Maybe a reasonable first step is third-party standards, audits, and certifications around data security to make privacy- and security-conscious consumers aware of what a company is doing. If consumers really find value in that, then they will preferentially deal with that company, and other companies will follow suit.
> The people “whose negligence made this possible” are probably just rank-and-file employees. Careful what you wish for. I know I sure wouldn’t want to be legally liable if my software were vulnerable to something I didn’t know about.
This isn't what's being suggested.
Higher ups set the incentive structures that result in dwindling security resources.
If their ass is on the line, they will actually listen to the developers and security experts telling them they are vulnerable, instead of brushing them off to divert resources that boost the reports which determine their bonuses.
This reminds me of the story where someone accidentally deletes the database and there are no backups. Who's at fault? The individual IT employee who made a mistake, or the entire organization (especially leaders) who created a situation where one person could delete the database and there are no backups?
My read is that the responsible people are corporate officers and executives--people who actually choose what to work on and are substantially rewarded by the corporation.
this is already an established principle in other engineering fields. If a civil engineer screws up and a building collapses, both that engineer and the engineering firm are liable.
Why should the software industry be any different?
AT&T bought back a ton of shares of its own stock in March. It's likely that shareholders won't feel the effect of this security breach because of those buybacks (over a medium term time window).
How about, instead of even more meaningless standards without teeth that don't affect the people pushing for profits over essentials like security, regulators impose punishments that actually affect the investors who ultimately create these perverse incentives in the first place? Nobody should be profiting off of a company that does wrong by over a hundred million people.
Direct liability to the front line / middle management which is cleared in exchange for defined levels of cooperation with criminal, regulatory, and civil investigations aimed at landing higher-ups would be a useful development.
Nonsense. The people who should hold responsibility are the people who have decision-making power and derive financial benefit from these choices. A rank-and-file employee is a scapegoat given the incentives at play in the system, even if they nominally wrote the vulnerable code.
The law that would have prevented this breach would be to make it illegal for telcos to sell customer data. The reason AT&T was feeding ALL the data to Snowflake was to sell their customers' location and social graph to marketers. It is unconscionable to me that this is not currently the law.
Imagine a world where suffering a data breach meant you could no longer collect, let alone hold or sell that class of data for a decade, and this rule preempted laws that required data gathering.
AT&T would be nearly equivalent to an E2E service overnight.
The lines wouldn’t be encrypted, so the NSA would still tap them, but at least there would be zero mutable storage in the AT&T data centers (except boot drives, SMS message queues, and a mapping between authorized sims and phone numbers).
In this day and age, why do they even maintain call records? They don’t need them for billing purposes, which was the original purpose of keeping them.
Genuinely chonky fines seem to be the answer to this problem, as they align incentives with rewards/penalties (if you’re lax about how your company approaches user data, then you’ll be at financial risk).
Piercing the veil to prosecute those “responsible” seems like it would just incentivise the business to carry on as normal but with employees that are contractually designated (i.e. forced) to be fall guys if anything goes wrong.
If PG&E has taught us anything, it's that utility companies can literally blow up and burn down cities, and no amount of fines or paying for the damages done will matter to them.
Monopolies can always just pass the cost of the fine to their customers.
That is the worst case outcome of penalties, and it carries significant risk of whistleblowing. The default case will be compliance, because compliance is simply a cost of business, something businesses understand well.
Meanwhile, currently businesses are doing shit all about data breaches except handing out the absolutely useless "2 years identity monitoring", so from a consumer view it really can't get much worse.
In general, the idea that penalties make people hide their bad behavior, so we shouldn't penalize bad behavior, is just extremely misguided. Because without penalties, we normalize bad behavior.
Make laws that protect whistleblowers from civil and criminal penalties, and punish those who attempt to illegally hide data breaches, including jail time in the worst cases. That would solve it. Individual employees don't care enough to hide it (they just work there), and leadership wouldn't dare risk a whistleblower who would cause them to face criminal penalties.
So you make it a crime to hide the existence of a data breach for more than X amount of time for the purpose of figuring out exactly what happened. I don't know off the top of my head how long X should be. 30 days? 60?
Which should result in even larger penalties, hopefully those penalties can also be levied against the individuals that were associated with hiding the data breaches. Mid level manager that gets an email from Snowflake saying that there's been unusual activity who then hides that information or doesn't look into it? Fine 'em (and AT&T). Mid level manager tells a random engineer that DOES look into it and finds that they've been hacked but hides it? Fine AT&T and this person even more!
Nothing happened to Experian, and those clowns have breaches every year. The USA has so far proved that we don't care about privacy and don't believe data is real.
The AT&T app and website are so bad it takes way longer than 1 minute to log in to e.g. pay your bill. The United States needs to raise the bar for large-cap negligent operators and fine the company enough to make shareholders listen.
In approximately 100% of cases, if your intuition is to say "this company is too large and should be fined/regulated more," what you should actually say is "this company is too large and should be broken into many smaller entities."
The correct way is to follow what all other engineering and trade (medicine/law) already follow.
Some software engineers are licensed. A company must hire these software engineers, and any changes to what data is saved or how it is saved must be signed off by these engineers. If a breach occurs, an investigation follows, and if these licensed software engineers are found to be negligent, they lose their license. If they are found to be at fault, they face criminal penalties.
This, of course, must be coupled with penalties for management personnel as well.
This kind of system has consistently led to regulatory capture by the licensed industry. Even the mechanism of operation de facto assumes a significant gatekeeping barrier to getting a license, since otherwise companies would just pick the licensee most willing to cut corners to save costs, or pay the license fee to get greenhorns certified because that costs less than adding two years to the development schedule to do it well. Making everything cost quadratically more than it already does is not a good solution.
What you want here is for them not to be holding the data to begin with. The solution to which is to just let customers sue them. Not for $0.30 and "free credit monitoring" but for actual money. Then companies can choose whether they want to mitigate their risk by doing actual security or by not storing the data to begin with, but most likely the second one is their better option.
Up until recently I agreed with this position because I, like you, thought that this was how licensed engineering disciplines worked. I thought that if you sign off on something you put your career on the line, making the potential penalty for signing off on bad designs worse than the one for saying no to a pushy boss.
Then the MAX crashes happened and Boeing is about to negotiate a sweetheart plea deal and there's absolutely zero talk of any of the engineering licenses that were used to sign off on the bad systems getting revoked.
If the licensing system doesn't actually include a threat of career-ending penalties for knowingly signing off on bad designs, or if the system allows executives to bypass engineer signatures, then it seems like the general consensus on here is right: it's useless overhead at best and regulatory capture at worst.
If you're going to do that, you're going to need to get universities to treat computers as an actual applied discipline. Physical engineers at least get some practice working with numbers around real materials.
I've met too many recent university graduates who don't even know you need to sanitize database inputs. Which, not their fault, but the university system as it currently exists in relation to software is not set up to do the thing you're asking.
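For what it's worth, the fix those graduates are missing is small enough to teach in one lecture. A minimal sketch using Python's stdlib sqlite3 as a stand-in for any SQL driver (the table and inputs here are invented for illustration):

```python
# Hedged sketch of the parameterized-query fix; sqlite3 stands in for
# any SQL driver, and the table/data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

user_input = "alice'; DROP TABLE users; --"  # hostile input

# Vulnerable pattern: string interpolation lets the input rewrite the query.
# conn.executescript(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe pattern: the driver binds the value as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] - the hostile string matched nothing, and the table is intact
```

Same idea in every driver and language: placeholders plus a parameter tuple, never string concatenation.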
The alternative is to have a really long exam (or a series of them like actuaries do?). Here are 10 random architectures. Describe the security flaws of each and what you would change to mitigate them.
The other change that needs to be made, is that engineers need to be able to describe the bounds of their software. This happens in the other engineering disciplines. A civil engineer can design a bridge with weight capacity X, maybe a pedestrian bridge. If someone builds it and drives semi-trucks over it, that's kinda their problem (and liability).
We would need some sort of way to say "this code is rated for use on an internal network or local only" and, given that rating, hooking it up to the open internet would be legally hazardous.
You want a P.Eng (or equivalent) to sign off on anything that involves data? That won’t solve the problem but will dramatically slow down the pace of innovation. And all the while, it will funnel money further into regulated professions instead of into actually securing software.
This is precisely how we end up in a world where we’re all running twenty five year old software.
Who is ultimately responsible, though when data is stolen in this fashion? The analyst who ETL'd this to Snowflake without MFA enabled? Or maybe the employee who inadvertently installed a data sniffer that captured usernames and passwords? Really want to send your coworkers to jail for falling for a phishing attack?
If you want corporate-death-sentence level fines, are you willing to work in environment with exceedingly strict regulatory oversight? Will you work from an office where the computing infrastructure is strictly controlled? Where you can't bring personal devices to work? Where you have no privileges to alter your work station without a formal security review?
Why not advocate for more resources to capture and try the actual criminals? Or, as elsewhere in this thread, simply make this kind of data collection illegal?
> If you want corporate-death-sentence level fines, are you willing to work in environment with exceedingly strict regulatory oversight? Will you work from an office where the computing infrastructure is strictly controlled? Where you can't bring personal devices to work? Where you have no privileges to alter your work station without a formal security review?
If it means that privacy and safety are actually respected then yes. Working in an environment with "exceedingly strict" regulatory oversight would be a reassurance that observed violations will be dealt with in a timely fashion instead of put in the backlog and never addressed.
> Why not advocate for more resources to capture and try the actual criminals?
Yes, why not? While we're at it, let's try and capture the easily-spotted criminals who perform the most trivial of attacks to servers. Just open up your SSH server logs and start going after and preventing the fecktons of log spam that hide real attacks.
> Or, as elsewhere in this thread, simply make this kind of data collection illegal?
Making something illegal is great! Unfortunately it doesn't really do anything to help people after it's been stolen a second time (first time was by AT&T if it were illegal).
At&t is up there with defense contractors with how intertwined their businesses are with the DoD. They're basically an extension of the intelligence agencies here in the US. They don't have consequences, much like Boeing.
Personal data cannot be secured. The only way is to not store it. That will (supposedly) cost companies lost revenue for being unable to mine and sell it. Only government can make laws against a company taking your personal information and selling it. Even passwords shouldn't be stored by a company.
The years of lost time argument is disingenuous. Over that number of people, 209 years of lost time from 700 million years of lives is nothing.
There are lots of companies that take security seriously and don’t lose their customers data. Which is good, because there are companies that need to hold customer data.
Companies that don’t take security seriously and lose peoples data should be punished accordingly.
Companies that sell customers data should be identified.
But if we treat them all the same, then we let the bad companies off the hook, and punish the responsible companies unfairly.
I’d take it a step further. If a technology is impossible to secure it shouldn’t be used. Maybe it’s time to rethink all the parts of our lives we’ve handed over to software.
Alternatively, we need sharper teeth around the consequences of this data breach.
Why are we using SMS for 2FA everywhere? Why does AT&T have to have residential addresses and KYC for all of its customers? These are the things that should be banned. The government official that mandated all this crap should be forced to sleep with scorpions for 9 years and stink bugs for 3 more years.
Exactly. There is currently no meaningful penalty when a company fails to protect private data or violates its own privacy policies, so of course they continue to do these things because each either makes them more money or costs them less money.
Prison time being on the table for officers of the corporation is the only thing that will change this behavior.
But hey, in 5-7 years there will be a settlement to the inevitable class action lawsuit and each of these customers (that fills in a form, ensuring only a small fraction actually do) gets a $3.75 credit on their next bill. The lawyers will get 30% of the settlement and each walk away with several million dollars. Justice! chef’s kiss
If we go with the logic of the grandparent comment, where we can measure the harm by adding up a minute of wasted time across millions of people to get a big amortized number, it seems commensurate that each of those people can be compensated for their minute of wasted time with a few dollars.
Everyone says what needs to happen. Every thread has this same exact post. We all know what needs to happen. How _would_ this ever happen? This is a board of innovators -- innovate!
Yeah, you're right. Data breaches are essentially just slaps on the wrist to companies like AT&T. Maybe it's possible to fine them based on the proportion of the userbase that was affected and the profits they generated for a certain time period.
I wonder if this will push companies to stop using external vendors to store and process data. If companies stored all of their info in house, it would prevent the case where compromising one vendor compromises everyone's data. But it would also mean that each individual company needs to do a good job securing their data, which seems like a tall ask.
I propose that the fines should be based on what the data would be sold for on a dark web forum. These breaches should be exponentially more expensive, which would incentivize companies to retain less sensitive data.
The breach here was not against AT&T but against a cloud computing company called Snowflake.
Cloud computing companies, so-called "tech" companies, and the people who work for them, including many HN commenters, advise the public to store data "in the cloud". They encourage the public, whether companies or individuals, to store their data on someone else's computer that is connected to the open internet 24/7 instead of their own, nevermind offline storage media.
Countless times in HN threads readers are assured by commenters that storing data on someone else's computer is a good idea because "cloud" and "_____ as a service". Silicon Valley VC marketing BS.
"Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible."
Piercing the veil refers to piercing limited liability, i.e., financial liability. Piercing the veil for crimes is relatively rare. Contract or tort claims are the most common causes of action where it is permitted.
There is generally no such thing as "criminal negligence" under US law. Negligence is generally a tort.
As for fines, if there were a statute imposing them, how high would they need to be to make Amazon, Google, Microsoft or Apple employees and shareholders face "real consequences"?
Is it negligent for AT&T to decide to give data to a cloud computing company such as Snowflake? HN commenters will relentlessly claim that storing data on someone else's computers that are online 24/7 as a "service", so-called cloud computing, is a sensible choice.
Data centers are an environmental hazard in a time when the environment is becoming less habitable, they are grossly diminishing supplies of clean water when it is becoming scarce, and these so-called "tech" companies are building them anyway.
Data centers are needed so the world can have more data breaches. Enjoy.
>The breach here was not against AT&T but against a cloud computing company called Snowflake.
It wasn't really a Snowflake breach. If it's like the other Snowflake data leaks, AT&T didn't set up MFA for a privileged account and someone got in with a password compromised by other means. For smaller companies I'd be willing to put more blame on Snowflake for not requiring MFA, but AT&T is large enough to have their own security team that should know what they are doing.
This is yet another wakeup call for all companies - passwords are not secure by themselves because there are so many ways for passwords to be leaked. Even though SMS MFA is weak, it's far better than a password alone.
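To illustrate how little machinery even a basic second factor needs, here is a minimal TOTP sketch (RFC 6238) using only the Python stdlib. This is a teaching sketch, not a substitute for a vetted library; the point is that the code rotates every 30 seconds, so a leaked password alone is useless without it:

```python
# Minimal TOTP (RFC 6238, HMAC-SHA1, 30-second steps) - stdlib only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // 30)  # time step
    msg = struct.pack(">Q", counter)                          # 8-byte counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T=59.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # -> 94287082
```

Even SMS delivery of a code like this, weak as it is against SIM swaps, defeats the credential-stuffing attacks that apparently drove the Snowflake compromises.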
If it helps to understand the comment, change the word "breach" to "unintended redistribution of data".
The comment is about the risk created by transferring data to a third party for online storage.
It is not about the specific details of how data is obtained by unauthorised recipients from the third party.
The act of storing data with third parties who keep it online 24/7 creates risk.
Obviously, the third parties will claim there is no risk as long as ["security"] is followed.
If we have a historical record that shows there will always be some deficiency in following ["security"], for whatever reasons,^1 then we can conclude that using the third parties inherently creates risk.
1. HN commenters who focus on the reasons are missing the point of the comment or trying to change the subject.
If customer X gives data to party A because A needs the data to perform what customer has contracted A to do, and then party A gives the data to party B, now customer X needs to worry about both A _and_ B following ["security"]. X should only need to trust A but now X needs to trust B, too. If the data is further transferred to third parties C and D, then there is even more risk. Only A needs the data to perform its obligation to customer X. B, C and D have no obligations to X. To be sure, X may not even know that B, C and D have X's data.
A good analogy is a non-disclosure agreement. If it allows the recipient to share the information with third parties, then the disclosing party needs to be concerned about whether the recipient has a suitable NDA with each third party and will enforce it. Maybe the disclosing party prohibits such sharing or requires that the recipient obtain permission before it can disclose to other parties.^2 If the recipient allows the information to be shared with unknown third parties, then that creates more risk.
2. Would AT&T customers have consented to their call records being shared with Snowflake? The people behind so-called "tech" companies like Snowflake know that AT&T customers have no say in the matter.
> Laws related to data breaches need to have much sharper teeth. Companies are going to do the bare minimum when it comes to securing data as long as breaches have almost no real consequences. Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible. Maybe have fines that are so massive that company leadership and stockholders face real consequences.
I really dislike this attitude.
AT&T were attacked, by criminals. The criminals are the ones who did something wrong, but here you are immediately blaming the victim. You're assuming negligence on the part of AT&T, and to the extent you're right, then I agree that they should be fined in a bigger manner.
But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.
Put another way - you trust a classical bank with your money, to secure it from criminals. But you don't expect it to protect your money in the case of an army attacking it. But that's exactly the situation these organizations are in - anyone on Earth can attack them, very much including basically armies. We cannot expect organizations to be able to defend themselves forever; it is an impossible ask in the long run. This has to be solved by the equivalent of a standing army protecting a country, and by going after the criminals who do these breaches.
No, the root-cause is not AT&T were "attacked, by criminals"; there's a much wider issue involving Snowflake and multiple customers. The full facts are not in yet.
AT&T's data was compromised as one of Snowflake's many customer breaches (Ticketmaster/LiveNation, LendingTree, Advance Auto Parts, Santander Bank, AT&T, probably others [0][1]), which occurred and were notified in 4/2024 (EDIT: some reports say as far back as 10/2023). Supposedly these happened because Snowflake made it impossible to mandate MFA; some customers had credentials stolen by info-stealing malware or obtained from previous data breaches.
Snowflake called it a “targeted campaign directed at users with single-factor authentication”.
The Mandiant report tried to blame unnamed Snowflake employee (solutions engineer) for exposing their credentials.
How much responsibility Snowflake bears, vs its clients, is not clear (for example, it seems they only notified all other customers on May 23, not immediately when they suspected the first compromise). Reducing the analysis to pure "victims" and "criminals" is not accurate. When you say "criminally prosecute those whose negligence made this possible", it wouldn't make sense to prosecute all of Snowflake's clients but not Snowflake too. Or only the cybercriminals but not Snowflake or its clients.
I think the implicit assumption is that the vast majority of these breaches are obviously preventable (basic incompetence like leaving a non-password-protected database connected to the public internet is common).
A better analogy is not a bank defending against an army, but a bank forgetting to install doors, locks, cameras, or guards. _Yes_, the criminals are the root cause, but human nature being what it is it's negligent to leave a giant pile of money and data completely unprotected.
In this analysis, the effort the bank puts towards defending itself is relevant. We wouldn't blame the bank for an army attacking it, but if they left the door unlocked and the neighbours' kids made off with your money you very rightly would feel differently.
If a breach is so inevitable like you say, then it's negligent to store the information in the first place. They're accumulating and organizing data with the inescapable conclusion of handing it out to criminal organizations.
You picked the wrong point to counter with. The real problem is that the corporate decision-makers who bear the most responsibility will never be held accountable. They will always be able to shift blame to someone below them in the corporate hierarchy.
No way. If I were running a small MSP and I was breached and my customers were infected, I'd be sued out of business immediately. The fact that they are a titan means they should be that much more vigilant.
Companies could also stop storing customer information for purposes unrelated to the core product that you are using..... But that's not going to happen because it's still far more profitable to mine customers data even with the risk of theft or breach.
> AT&T were attacked, by criminals. The criminals are the ones who did something wrong, but here you are immediately blaming the victim. You're assuming negligence on the part of AT&T,
I am sure LEOs will do what they are paid to do and catch criminals. In the meantime, I would like to focus on the service provider not being able to provide a reasonable level of privacy.
I am blaming a corporation because, for most of us here, this is an ongoing, recurring pattern that we have recognized and that corporations have effectively codified into a simple deflection strategy.
Do I assume the corporation messed up? Yes. But even if I didn't, there is a fair amount of historical evidence suggesting that security was not a priority.
> Put another way - you trust a classical bank, with your money, to secure your money from criminals.
Honestly, if the average person saw how some of those decisions are made, I don't think a sane person would.
> But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.
Ahh, yes. Poor corporation has become too big of a target. Can you guess my solution to that? Yes, a smaller corporation with a MUCH smaller customer base and footprint, so that even if the criminal element manages to squeeze through those defenses that the corporation made such a high priority (so high), the impact will be sufficiently minimal.
I have argued for this before. We need to make hoarding data a liability. This is the only way to make this insanity stop.
"still-unfolding data breach involving more than 160 customers of the cloud data provider Snowflake.'
So what is Snowflake normally doing with all that AT&T data? Redistributing it to "marketing partners"? Apparently. Snowflake's mission statement, from their web site:
"Our mission is to break down data silos, overcome complexity and enable secure data collaboration between publishers, advertisers and the essential technologies that support them."
So this was not, apparently, a break-in to the operational side of AT&T. Someone unauthorized got hold of data they were already selling to marketers. Is that correct?
This would probably be no different than if someone like Salesforce had a breach and a large customer of theirs was impacted. There are large companies using SaaS services for chunks of their back office stuff.
It's not just a bad password; it was a password that was exposed to an info stealer in some way. It might have been reused or overshared into some system that got exposed. From what I understand, someone got a huge info-stealer dump, started putting two and two together, noticed all these scraped passwords, and tried them on Snowflake.
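The credential-stuffing pattern described above can be sketched in a few lines; every username, password, and salt below is hypothetical, and the fast hash is for illustration only:

```python
import hashlib

def hash_pw(password: str, salt: str) -> str:
    # Illustrative only: a real service should use a slow KDF (scrypt/argon2)
    return hashlib.sha256((salt + password).encode()).hexdigest()

# Hypothetical service account database: username -> (salt, stored hash)
accounts = {
    "alice": ("s1", hash_pw("hunter2", "s1")),
    "bob":   ("s2", hash_pw("correct-horse", "s2")),
}

# Hypothetical info-stealer dump: credentials scraped from OTHER sites
dump = [("alice", "hunter2"), ("bob", "password123")]

# Credential stuffing: replay the dumped credentials against this service
hits = [user for user, pw in dump
        if user in accounts and accounts[user][1] == hash_pw(pw, accounts[user][0])]
print(hits)  # only the reused password works: ['alice']
```

The point is that no break-in of the service itself is needed: one reused password from an unrelated dump is enough, which is why MFA or network restrictions matter so much.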
It's not "internal analytics", because a) 90% of the data was former customers and b) it has location data but timestamps were removed, so it's social-graph information plus location. Start asking yourself what sorts of end-users want to pay for the entire social-graph of 77m, regardless whether those customers never make a phone call again.
"Alternate credit scoring, hyper-targeted marketing and more... an emerging trend of companies building partnerships with telecoms to power use cases across multiple industries." was the blurb for the unit Snowflake specially set up for Telco data in early 2023 touting "location data", but this product is not aimed at the telco's use-case; coincidentally this was also around the time Snowflake was touting integration with GenAI.
(It's not "competitor analysis" either, because if it was they would have obscured the 68m former phone numbers to prevent abuse by direct-marketing.)
Reading the articles about this breach and the nature of the data in this Snowflake lake, I personally wouldn’t consider this breach a “leak” from the customer perspective - to me the leak is upstream of this breach.
Given the nature of the data in the database and the platform it was stored in, it seems extremely likely this data was not meant to be used internally by AT&T but was instead meant to be used externally by either a 3rd party partner (like advertisers and consumer analytics partners) or a government agency.
In other words, if it were my data in this datastore, I’d consider my data as already having been “leaked” when it went into the store - the issue here appears to be that this data was “leaked” to the wrong people from the perspective of AT&T and the FBI.
That's the issue with dragnet data collection and Snowflake-esque databases - it's never safe to enter any personal information on the internet. Given enough time, any and all of it will be "shared" and used for a third party's financial/political gain.
Doesn't matter if it's AT&T, a bank, or the government. Never under any circumstances can you expect anything sensitive to stay private. This used to be taught as gospel when introducing kids to the internet - it's crazy how much things have changed in 20 years.
I wonder how many times Snowflake has openly transmitted CP from ATT customers because they are too hungry to ingest and sell data rather than verify it.
AT&T stock has already bounced back from much of the initial -2.6% drop this morning, so the market thinks AT&T is immune. Meanwhile Snowflake is -3.9% down (they have many other customers than AT&T).
I never got the impression that the market ever cares about data breaches. It seems most companies are rarely held financially responsible for data breaches anyway.
I would bet any effects you’re seeing in stocks is unrelated to this news.
This is precisely why breaches keep happening and will keep happening. It costs money to implement security. There's no cost benefit to spending that time and money since there are no consequences.
Businesses do not spend money unless it will make them money or save them money.
There needs to be a hefty federal fine on a per-affected-user basis for data breaches. Also a federal fine for each day a breach is unreported.
That money should go into a pool which can be accessed by people who have their identity stolen.
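As a toy illustration of how such a fine schedule would add up, using the 110 million customer figure cited upthread and entirely hypothetical statutory amounts:

```python
AFFECTED_USERS = 110_000_000  # customer figure cited upthread
FINE_PER_USER = 100           # hypothetical statutory amount, USD
DAYS_UNREPORTED = 30          # hypothetical reporting delay
FINE_PER_DAY = 1_000_000      # hypothetical per-day amount, USD

total = AFFECTED_USERS * FINE_PER_USER + DAYS_UNREPORTED * FINE_PER_DAY
print(f"${total:,}")  # $11,030,000,000
```

Even a modest per-user amount dwarfs a quarter of profit, which is the point: the per-user term dominates, so the fine scales with the blast radius of the breach.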
They are very much related to the news, that's precisely why I linked to the stock charts: AT&T was flat overnight but opened (9am ET) with a -2.6% spike down, but has been recovering since. Their press release appears to have been Friday 7am ET shortly before market open [https://about.att.com/story/2024/addressing-illegal-download...].
The market correctly does not care because there is no consequence for the current or prior executives and no financial consequence for the company. All they will do is send out some obligatory notices, mention it in their investor relations materials, maybe offer a year of credit score monitoring, and move on.
We need regulations with massive fines, class action lawsuits (a ban on arbitration clauses), and maybe automatic minimum level compensation to those customers.
I think they will care a lot more when it directly impacts them. If all their text conversations were publicly available that would cause some outrage.
> I never got the impression that the market ever cares about data breaches. It seems most companies are rarely held financially responsible for data breaches anyway.
This might also explain why there's little visible effect on other cloud database services either. After all, the attack is pretty simple and potentially affects any cloud database that allows access from the Internet.
I'm certainly not going to defend negligence of data protection but it's extremely difficult to cost as a liability (naively, you might even consider it not a liability at all) without government oversight.
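The "allows access from the Internet" precondition is trivially checkable; here is a minimal sketch, where `is_port_open` is a hypothetical helper built on plain TCP sockets (it only proves reachability, not authentication weakness):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Crude exposure check: can an arbitrary client open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False
```

A database that answers such a probe from anywhere is exactly the kind of target that makes the stolen-credential attack "pretty simple"; IP allowlists or private networking close it off.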
My reading is that the market thinks Snowflake takes the majority of the blame, and the content of the linked article seemed to suggest as much despite having only AT&T in the headline.
Well it’s as if you put your data in Salesforce and Salesforce got breached… maybe you’re bad at picking vendors but the real loss of trust would be on Salesforce.
In this case, Snowflake was also the cause for the Ticketmaster and Lending Tree breaches according to the article so…
It's not an expensive problem, and customers aren't going to go anywhere else.
A class action lawsuit is just going to result in everyone's $2 being given as a free trial of a ringtone add-on from the early 2000s that converts into more recurring revenue.
Over in Europe, this blanket saving of phone records beyond what is necessary to operate would have been illegal in many countries, and is in general incompatible with the European Convention for the Protection of Human Rights and Fundamental Freedoms outside of active threats to national security and temporary measures overseen by a court.[1]
There's really no reason why any service providers should save this stuff in the first place, and it isn't hard to fix with legislation. Just make it illegal to even keep.
> Over in Europe, this blanket saving of phone records beyond what is necessary to operate would have been illegal in many countries,
On the contrary, many European countries have mandatory data retention periods that meet or exceed the 6 months of records that were supposedly included in this breach.
Germany has one of the shorter retention periods at 10 weeks, but they still have to keep those records.
Saying that it would be illegal to collect these records in Europe is patently false, and furthermore the record collection is generally mandated for a period of time that depends on the country.
> There's really no reason why any service providers should save this stuff in the first place,
Billing. You need phone records for billing purposes. You need to keep them for a while longer because people will dispute their bills all the time.
> Germany has one of the shorter retention periods at 10 weeks, but they still have to keep those records.
No they don't, because it's "suspended" by the federal network agency until courts are through with it. In fact they suspended it three days before the law would've come into force and thus it never was. The current state of affairs is this: the retention was ruled incompatible with German and European law in an injunction and it does not look like that is about to change.
There's a similar picture in many EU countries: There's a law on the books, but it can't be enforced/is being challenged/was already invalidated/is being rewritten/repeat.
Also note that to courts location data/phone records is a different issue than retaining information that merely associates an IP address with the subscriber that used it at some time (knowing which subscriber has what phone number is not an issue either, after all). The latter was ruled to be unproblematic by the ECJ just this year, while for the former the latest ruling is what I outlined earlier.
Besides Germany, some other countries that had data retention laws that were ruled unconstitutional are: Belgium, Bulgaria, Czech Republic, Cyprus, Romania, Slovenia, Slovakia.
In many other places that currently do have mandatory retention in force, it is being challenged.
> Saying that it would be illegal to collect these records in Europe is patently false
It is illegal to mandate in such a manner. There's a difference.
> Billing. You need phone records for billing purposes. You need to keep them for a while longer because people will dispute their bills all the time.
You must not have read the part where I said "beyond what is necessary to operate". Telekom, for instance, is doing just fine deleting phone records after 80 days - or within 7 days if you use a flat rate and they're not relevant to billing.
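A minimal retention purge of this kind is not complicated; this sketch uses the 80-day figure from the Telekom example, with a hypothetical record layout:

```python
from datetime import date, timedelta

RETENTION_DAYS = 80  # hypothetical cutoff, mirroring the Telekom figure above

def purge(records: list[dict], today: date) -> list[dict]:
    """Keep only call records still young enough to matter for billing disputes."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["date"] >= cutoff]

today = date(2024, 7, 15)
records = [
    {"callee": "+495550100", "date": date(2024, 7, 1)},  # recent: kept
    {"callee": "+495550123", "date": date(2024, 3, 1)},  # stale: purged
]
print(len(purge(records, today)))  # 1
```

Data deleted on such a schedule simply isn't there to steal, which is the data-minimization argument in a nutshell.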
> There's really no reason why any service providers should save this stuff
There are many reasons! Most of them are simply contrary to how folks think business should operate. Unfortunately the US seems to value "disruption" over "customer protection", so legally protecting data is unpopular on the hill.
I was under the impression that the government wasn't allowed to create a mandate that a telco has to save all phone records like that, but it doesn't stop a telco from doing it themselves. I think that would fall more under GDPR limitations?
I believe you are correct. That's what I was referring to with "illegal in many countries". Most judgements on this issue predate GDPR, but before GDPR, many countries already had similar laws and attitudes. For example article 2* and 10 of the German constitution protect personal data and communication, not just from others, but also from the government. Not unlike the GDPR.
Some service providers in Europe don't even want to save any data. The linked judgement above was the German state suing Telekom, which didn't want to save that data, and losing. Given the state of affairs, the question of "illegal or not" doesn't really come up as much. At least I'm not aware of any high profile judgements.
Besides Telekom, which always tried to minimize the data it keeps to the point of fighting it all the way to Europe's highest courts, most other telcos don't really care and pick whichever middle ground is available between "must" and "must not" - whatever is least likely to get them into trouble. Right now that just happens to mean "save little".
If it wasn't for the courts and a decent de-facto "constitution" (collection of treaties really), governments would absolutely love to expand the amount of data they (police, spy apparatus, etc.) have access to. That they also try to reduce the amount of data companies are allowed to save for themselves is tangential.
The court case I linked is evidence of that. The German state wanted Telekom to save more data, but the telco refused and won in court.
According to the article, the data was being made available to other businesses... From the detail level involved, I imagine the NSA has some sweeter deal with telcos... And they have much richer data.
Every text and phone call, every email and letter sent to your address, along with every utility bill (the list goes on), has been saved from at least 1999/2000 to the present day. People like Bernie went to jail because they pushed back, and it was all because of this...
Consumers are so numb to data breaches that these events now bring very little outrage. I think without that anger from the consumer, there's little incentive for companies to do more to stop data breaches from happening.
Well it's starting to feel like data privacy just doesn't exist anymore. I don't know why administrators for big customer databases even bother setting passwords these days.
My mother was concerned that some of her information, and mine, leaked because she signed up for another bank account from a place she decided she didn't trust. She said she wasn't worried about the money being stolen, but she was worried about our identities being stolen.
My concern was the complete opposite - I assume that my social security number and address are already for sale for a fraction of a cent somewhere, bundled with 10,000 other identities. But if money gets stolen, that's a whole rigamarole, with banks wringing their hands and saying "identity theft" as if that clears them from any responsibility.
After Equifax debacle, I don’t think anyone cares. It’ll only be a big deal if there’s a huge B2B leak and business-critical data gets exposed, other than the usual name, address and phone number.
I'm still upset the government hasn't started work on a new national ID program after the Equifax breach. The SSN is not a suitable ID number in this day and age. We need something better that can withstand these kind of things without screwing people for life. My credit will be frozen for the rest of my life, and everyone else should do the same.
This is it for me tbh. Yeah I don't want my identity stolen and I'm still careful but after Equifax I just assume everyone already has my data so all of these data breaches are meaningless to me at this point. It sucks and it makes me mad but all I can do is shake my fist and wish these companies would be better anyway, so what else can I do but just be ok with it?
I think many companies think they can solve this issue by throwing money at their cyber security teams. It just happens that cyber security teams are often ineffective.
It's hard for a CyberSecurity team to be effective when the Execs keep failing the phishing tests and IT does not have the authority to fire them for it.
Good security researchers easily command a $500,000 compensation package per year (cost to companies higher due to benefits like health insurance). When you show the market comp of good cyber security researchers to execs, suddenly they decide that they only have the budget to hire incompetent people.
Good cyber security people are expensive because they are highly skilled: they typically need to have been a software engineer to understand software architectures and have intuition about them, have spent significant time sharpening their skills at hacking by participating in CTFs, and have probably also spent significant time doing reverse engineering and have a few CVEs attributed to them. (Why are these skills needed? Because they are the skills needed by the red team. Every company that takes cyber security seriously will have a red team.) Now tell me whether these people are worth $500,000 per year.
Maybe this is how it is at some places, but in my experience, it is not the case. I have friends who have worked in cyber-security for Fortune 500 companies and almost all of those companies would short-change (or outright ignore) the recommended spend and suggestions of their cyber-security employees, contractors, and advisors.
Where are you getting your information from? Fixing the levels of security negligence I hear about isn't even a big ask. Huge companies neglect to do basic things like "don't store your passwords in plain text" or "make sure you salt and hash your passwords".
I don't think it's fair to say cyber security teams are failing if companies are blatantly doing the worst and most obviously wrong things on the daily at the highest levels.
I never understood the American secrecy about the SSN... it should be a "username", not a "password"...
In my country you can calculate anyone's national ID (a mix of date of birth, an auto-incrementing number for each birth that day, plus one checksum digit), and if you do/have any kind of personal business, your personal tax number has to be written everywhere, on every receipt you hand out or anything you buy as a business.
Somehow, knowing that the first boy born today will have an ID number of 120702450001X (too lazy to calculate the checksum, but the algorithm is public) doesn't help anyone do anything bad.
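The actual checksum algorithm is country-specific and not given above, so as a clearly hypothetical stand-in, here is the Luhn check digit (a common public check-digit scheme) applied to the example payload:

```python
def luhn_check_digit(digits: str) -> int:
    """Luhn check digit: a stand-in for the (public but unspecified)
    national-ID checksum mentioned above."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:  # double every second digit, starting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

print(luhn_check_digit("120702450001"))  # 9
```

Check digits like this catch typos; they provide no secrecy at all, which is the commenter's point: a guessable, verifiable ID is fine as long as nobody treats it as a credential.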
It's because it happened gradually / naturally / semi-unintentionally:
1) SSN was not intended as a national ID, but it so happened to fit the shape of one, in that almost everyone has one and they're unique.
2) It has never been possible to institute an intentional national ID system in the US for political reasons
That is the recipe for the problem we have now. Strong demand for a national ID from many business purposes, the existence of something that looks a lot like (but is an imperfect form of) a national ID, and the refusal to create a proper national ID have naturally led to a de facto system of abusing the SSN as a national ID, with everyone being a little annoyed and sketched out about it but putting up with it anyway for lack of alternatives.
Incidentally, did you know anyone can generate a valid new EIN (which is a lot like an SSN, and can be used where an SSN can be used for some but not all purposes, specifically filing taxes) at this page https://www.irs.gov/businesses/small-businesses-self-employe... ? This isn't legal advice, I'm not a lawyer, and I don't know in what situations you personally would be legally permitted to use this (it's meant for businesses, absolutely not some kind of personal alias) -- but technologically, it's just the honor system: anyone can certify they need and are entitled to a new EIN and the IRS web site will provide a new unique one. I don't think you even need a legal entity, since you don't need a legal entity to run a business in the US.
> Somehow knowing that first boy born today will have an ID number of 120702450001X
It's even worse. Only post-2011 (IIRC) issuances have a randomized SSN. So everyone over the age of 13 still has an old-fashioned sequential SSN, where XXX-YY-ZZZZ is determined by
1) XXX is the code for the office that issues your card. Can be guessed precisely and accurately by knowing birth location. For example, I can guess what region of the US you were born in (or lived in when you immigrated) by the first digit. 0 or 1 is probably northeast. 4 or 5 is probably near Texas. 7 might be near Arkansas. Etc.
2) YY-ZZZZ is sequential by date! So by knowing just the birth date, it can be guessed to within a range. In practice, this means it's easy to guess YY alone, but harder to get all 4 digits of ZZZZ.
3) For some stupid reason it got popular to print SSNs with all but the last four digits masked. This is horribly bad because those four are ACTUALLY THE MOST SECRET PART! They're the only part that might not be guessable. But since it's common to be more lax with securing them... it is super easy to recover the full SSN if you find a piece of paper that says something like
JOHN SMITH
123 Main St
Alabama City, AL 76543
In ref acct: XXX-XX-1234 (2001-03-14)
Dear Mr Smith,
Your account is overdrawn. Have a nice day.
Thinking of you,
The Bank
It also means that if someone is personally known to me, even vaguely, I may be able to reconstruct their SSN after seeing nothing but a scrap of paper that has just the last four digits, if I can guess approximately where and when they were born or first entered the US. If I'm in a situation where I can try several guesses, it's even easier.
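The guessing math above can be made concrete; the digit-count assumptions are back-of-envelope and ignore real-world allocation quirks:

```python
# Guessing-space sizes for a pre-2011 SSN (XXX-YY-ZZZZ), under the
# structure described above. Pure digit counting, for illustration only.

naive = 10 ** 9                          # all nine digits unknown
area_known = naive // 10 ** 3            # XXX inferred from birthplace
group_known = area_known // 10 ** 2      # YY narrowed from birth date
with_last_four = group_known // 10 ** 4  # ZZZZ read off a masked document

print(group_known, with_last_four)  # 10000 1
```

So knowing birthplace and birth date alone leaves roughly ten thousand candidates, and a document leaking the "masked" last four collapses that to essentially a single guess.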
A lot of financial things in the US are “secured” or anchored by SSN, that’s the only reason why. That and mother’s maiden name and first vacation and other security questions. It’d be less important with MFA now but SSN is also needed when opening new credit, so having it allows you to pretty easily fake someone else’s identity for credit. KYC hasn’t removed it from the equation.
This comment pops up every time someone talks about social security numbers. Yes, they were never supposed to be private, but now they are. So either Congress can do something about it, or big companies can stop leaking them. Clever "well, actually"s didn't stop my identity from being stolen recently after a breach, and they never will.
> I never understood the American secrecy about SSN... it should be a "username" not a "password"...
The problem is banks/financial services do a piss-poor job validating identity when issuing credit/opening accounts. "Oh, you provided an address, a SSN, and [non-random, easily discoverable personal fact]! Sure, here's a CC with a $150k limit!"
It's not the leak that's the problem; it's the ease with which that leaked data is used to either obtain fraudulent credit or access accounts.
I don't have a good answer, because at some point, a financial institution needs to trust people to do business. Customer loses their phone, so MFA doesn't work, ok, now what? I guess the customer needs to have one-time use recovery tokens saved somewhere that can't be lost? How many people do that (not nearly enough)? How many banks even issue those tokens? And what if the token store gets hacked? Now you're really fucked.
SSN is too public for it to be private or secret. Multiple employers, schools, medical institutions, financial institutions all ask for it, so it's not private.
It's also treated as evidence of who you are, but it isn't tied to identification like an ID is. These institutions use it without ever truly validating it.
It's similar to how records fraud can occur - people can record anything to the local registrar office, including fraudulent documents, without any checks. Once it's registered, it becomes evidence against the real owner. It's really messed up.
Even the US gov't gave up on the notion the SSN was not to be used as an identifier. My dad's SS card had a phrase printed on it saying so. My SS card did not have that text.
When I went to college in the late 80s my ssn was automatically used as my student id. When I got my first bank account in 1990, they used my ssn as the account number.
(Payouts are expected to drop in about ten years if no action is taken, but that doesn’t render the SSA irrelevant or cause it to suddenly collapse and shut down, so I assume you mean something else)
> Is the argument that governments don't have a good reason to mandate record collection?
Why can't I ask my government to keep me safe from terrorists but also expect that companies will not just be careless with the data they collect as part of that?
Your address, cell metadata, phone number, email address, and passwords are leaked pretty much constantly, though.
It's not that corporations are incompetent. The laws and regulations mean it's not worth the cost to treat your personal information with any real respect.
I don't like it, but I accept it as the lesser evil. I'm from Europe and I believe the reported numbers of prevented terror attacks. The agencies need data access for that. Not good, but necessary.
But are you aware that Meta, Google, Apple, MS, etc. collect every kind of information about every user of Android, iPhone, WhatsApp, Insta, Facebook, and Windows? Phone manufacturers and huge apps like TikTok do as well. The kind and size of that data is crazy beyond imagination. I don't care if the government can get access to my WhatsApp messages when some of the most irresponsible companies collect and use everything to their advantage. Are you really that naive to think that Meta doesn't analyse its gigantic data lake, including billions of WhatsApp messages, to predict the results of elections? That is the real danger to democracy.
Look at the same problem with environmental disasters that were created by corporations. The problem with security liabilities is similar: externalities are hard to get shareholders to pay for.
The shareholders are mostly the pension funds that will eventually pay your pension and the banks that already do.
Shareholders can vote and decide the direction of a company. They should also be held liable for any problems the company causes.
If the company is fined, it should come out of the company's and then the shareholders' pockets. I might even add that courts should be able to award damages by directly fining shareholders.
If a company does something severely illegal then very large shareholders should risk jail time.
It’s your company after all as a shareholder. You own it.
It’s no different than if your dog bites someone or your child breaks the law. You have to pay the fines.
Maybe a reasonable first step is third-party standards, audits, and certifications around data security to make privacy- and security-conscious consumers aware of what a company is doing. If consumers really find value in that, then they will preferentially deal with that company, and other companies will follow suit.
This isn't what's being suggested.
Higher ups set the incentive structures that result in dwindling security resources.
If their ass is on the line, they will actually listen to the developers and security experts telling them they are vulnerable, instead of brushing them off to divert resources that boost the reports which determine their bonuses.
2. Little to no consequences for the executives.
3. Lawlessness of such events. Very poor consumer protection laws in this country.
4. Cybersecurity illiterate leadership making cybersecurity decisions.
5. Investing as little as possible in cybersecurity, meeting only bare-minimum standards.
6. Or all of the above?
Why should the software industry be any different?
How about instead of even more meaningless standards without teeth that don't affect the people pushing for profits over essentials like security, regulators impose punishments that actually affect the investors that ultimately create these perverse incentives in the first place? Nobody should be profiting off of a company that does wrong by over a hundred million people.
If they stopped retaining these records, AT&T would be nearly equivalent to an E2E service overnight.
The lines wouldn’t be encrypted, so the NSA would still tap them, but at least there would be zero mutable storage in the AT&T data centers (except boot drives, SMS message queues, and a mapping between authorized SIMs and phone numbers).
In this day and age, why do they even maintain call records? They don’t need them for billing purposes, which was the original purpose of keeping them.
Piercing the veil to prosecute those “responsible” seems like it would just incentivise the business to carry on as normal but with employees that are contractually designated (i.e. forced) to be fall guys if anything goes wrong.
Monopolies can always just pass the cost of the fine to their customers.
Meanwhile, currently businesses are doing shit all about data breaches except handing out the absolutely useless "2 years identity monitoring", so from a consumer view it really can't get much worse.
In general, the idea that penalties make people hide their bad behavior, so we shouldn't penalize bad behavior, is just extremely misguided. Because without penalties, we normalize bad behavior.
Oh, but they do: try taking some data that belongs to a corporation and see how quickly law enforcement responds. Aaron Swartz found out the hard way.
It’s only when you steal personal data that nobody cares.
Some software engineers are licensed. A company must hire these software engineers, and any changes to what data is saved or how it is saved must be signed off by these engineers. If a breach occurs, an investigation occurs, and if these licensed software engineers are found to be negligent, they lose their license. If they are found to be at fault, they get criminal penalties.
This, of course, must be coupled with penalties for management personnel as well.
What you want here is for them not to be holding the data to begin with. The solution to which is to just let customers sue them. Not for $0.30 and "free credit monitoring" but for actual money. Then companies can choose whether they want to mitigate their risk by doing actual security or by not storing the data to begin with, but most likely the second one is their better option.
Then the MAX crashes happened and Boeing is about to negotiate a sweetheart plea deal and there's absolutely zero talk of any of the engineering licenses that were used to sign off on the bad systems getting revoked.
If the licensing system doesn't actually include a threat of career-ending penalties for knowingly signing off on bad designs, or if the system allows executives to bypass engineer signatures, then it seems like the general consensus on here is right: it's useless overhead at best and regulatory capture at worst.
I've met too many recent university graduates that don't even know you need to sanitize database inputs. Which is not their fault, but the university system as it currently exists in relation to software is not set up to do the thing you're asking.
The alternative is to have a really long exam (or a series of them like actuaries do?). Here are 10 random architectures. Describe the security flaws of each and what you would change to mitigate them.
The other change that needs to be made, is that engineers need to be able to describe the bounds of their software. This happens in the other engineering disciplines. A civil engineer can design a bridge with weight capacity X, maybe a pedestrian bridge. If someone builds it and drives semi-trucks over it, that's kinda their problem (and liability).
We would need some sort of way to say "this code is rated for use on an internal network or local only" and, given that rating, hooking it up to the open internet would be legally hazardous.
This is precisely how we end up in a world where we’re all running twenty-five-year-old software.
If you want corporate-death-sentence level fines, are you willing to work in environment with exceedingly strict regulatory oversight? Will you work from an office where the computing infrastructure is strictly controlled? Where you can't bring personal devices to work? Where you have no privileges to alter your work station without a formal security review?
Why not advocate for more resources to capture and try the actual criminals? Or, as elsewhere in this thread, simply make this kind of data collection illegal?
If it means that privacy and safety are actually respected, then yes. Working in an environment with "exceedingly strict" regulatory oversight would be a reassurance that observed violations will be dealt with in a timely fashion instead of being put in the backlog and never addressed.
> Why not advocate for more resources to capture and try the actual criminals?
Yes, why not? While we're at it, let's try and capture the easily-spotted criminals who perform the most trivial of attacks to servers. Just open up your SSH server logs and start going after and preventing the fecktons of log spam that hide real attacks.
> Or, as elsewhere in this thread, simply make this kind of data collection illegal?
Making something illegal is great! Unfortunately it doesn't really do anything to help people after their data has been stolen a second time (the first theft being by AT&T itself, if collecting it were illegal).
The years of lost time argument is disingenuous. Over that number of people, 209 years of lost time from 700 million years of lives is nothing.
Companies that don’t take security seriously and lose peoples data should be punished accordingly.
Companies that sell customers’ data should be identified.
But if we treat them all the same, then we let the bad companies off the hook, and punish the responsible companies unfairly.
[0] https://www.tlp.law/2023/08/01/fcc-proposes-20-million-fine-....
Why are we using SMS for 2FA everywhere? Why does AT&T have to have residential addresses and KYC for all of its customers? These are the things that should be banned. The government official that mandated all this crap should be forced to sleep with scorpions for 9 years and stink bugs for 3 more years.
If so the leak would be of much less consequence.
Prison time being on the table for officers of the corporation is the only thing that will change this behavior.
Like, it might be an unending atrocity beyond all human comprehension, but, $666/hr soothes a lot of conscience and quiets a lot of tongues.
"Google Referrer Header Privacy Settlement has sent you $0.11 USD."
I wonder if this will push companies to stop using external vendors to store and process data. If companies stored all of their info in house, it would prevent the case where compromising one vendor compromises everyone's data. But it would also mean that each individual company needs to do a good job securing their data, which seems like a tall ask.
Cloud computing companies, so-called "tech" companies, and the people who work for them, including many HN commenters, advise the public to store data "in the cloud". They encourage the public, whether companies or individuals, to store their data on someone else's computer that is connected to the open internet 24/7 instead of their own, nevermind offline storage media.
Countless times in HN threads readers are assured by commenters that storing data on someone else's computer is a good idea because "cloud" and "_____ as a service". Silicon Valley VC marketing BS.
"Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible."
Piercing the veil refers to piercing limited liability, i.e., financial liability. Piercing the veil for crimes is relatively rare. Contract or tort claims are the most common causes of action where it is permitted.
There is generally no such thing as "criminal negligence" under US law. Negligence is generally a tort.
As for fines, if there were a statute imposing them, how high would they need to be to make Amazon, Google, Microsoft or Apple employees and shareholders face "real consequences"?
Is it negligent for AT&T to decide to give data to a cloud computing company such as Snowflake? HN commenters will relentlessly claim that storing data on someone else's computers that are online 24/7 as a "service", so-called cloud computing, is a sensible choice.
Data centers are an environmental hazard in a time when the environment is becoming less habitable, they are grossly diminishing supplies of clean water when it is becoming scarce, and these so-called "tech" companies are building them anyway.
Data centers are needed so the world can have more data breaches. Enjoy.
It wasn't really a Snowflake breach. If it's like the other Snowflake data leaks, AT&T didn't set up MFA for a privileged account, and someone got in with a password compromised by other means. For smaller companies I'd be willing to put more blame on Snowflake for not requiring MFA, but AT&T is large enough to have its own security team that should know what they are doing.
This is yet another wakeup call for all companies - passwords are not secure by themselves because there are so many ways for passwords to be leaked. Even though SMS MFA is weak, it's far better than a password alone.
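For context on what app-based MFA actually is under the hood (as opposed to SMS codes): most authenticator apps implement TOTP per RFC 6238, which is just an HMAC over a shared secret and the current 30-second time window. A minimal sketch, not any vendor's actual implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

The secret never crosses the network at login time, which is why this resists the credential-stuffing that hit the Snowflake accounts, whereas SMS codes can be intercepted via SIM swapping.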
The comment is about the risk created by transferring data to a third party for online storage.
It is not about the specific details of how data is obtained by unauthorised recipients from the third party.
The act of storing data with third parties who keep it online 24/7 creates risk.
Obviously, the third parties will claim there is no risk as long as ["security"] is followed.
If we have a historical record that shows there will always be some deficiency in following ["security"], for whatever reasons,^1 then we can conclude that using the third parties inherently creates risk.
1. HN commenters who focus on the reasons are missing the point of the comment or trying to change the subject.
If customer X gives data to party A because A needs the data to perform what customer has contracted A to do, and then party A gives the data to party B, now customer X needs to worry about both A _and_ B following ["security"]. X should only need to trust A but now X needs to trust B, too. If the data is further transferred to third parties C and D, then there is even more risk. Only A needs the data to perform its obligation to customer X. B, C and D have no obligations to X. To be sure, X may not even know that B, C and D have X's data.
A good analogy is a non-disclosure agreement. If it allows the recipient to share the information with third parties, then the disclosing party needs to be concerned about whether the recipient has a suitable NDA with each third party and will enforce it. Maybe the disclosing party prohibits such sharing or requires that the recipient obtain permission before it can disclose to other parties.^2 If the recipient allows the information to be shared with unknown third parties, then that creates more risk.
2. Would AT&T customers have consented to their call records being shared with Snowflake? The people behind so-called "tech" companies like Snowflake know that AT&T customers have no say in the matter.
I really dislike this attitude.
AT&T were attacked, by criminals. The criminals are the ones who did something wrong, but here you are immediately blaming the victim. You're assuming negligence on the part of AT&T, and to the extent you're right, I agree that they should face bigger fines.
But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.
Put another way: you trust a classical bank with your money, expecting it to secure that money from criminals. But you don't expect it to protect your money in the case of an army attacking it. And that's exactly the situation these organizations are in: anyone on Earth can attack them, very much including actual armies. We cannot expect organizations to be able to defend themselves forever; it is an impossible ask in the long run. This has to be solved by the equivalent of a standing army protecting a country, and by going after the criminals who do these breaches.
AT&T's data was compromised as one of Snowflake's many customer breaches (Ticketmaster/LiveNation, LendingTree, Advance Auto Parts, Santander Bank, AT&T, probably others [0][1]), which occurred and were disclosed in 4/2024 (EDIT: some reports say as far back as 10/2023). Supposedly these happened because Snowflake made it impossible to mandate MFA; some customers had credentials stolen by info-stealing malware or obtained from previous data breaches. Snowflake called it a “targeted campaign directed at users with single-factor authentication”. The Mandiant report tried to blame an unnamed Snowflake employee (a solutions engineer) for exposing their credentials.
How much responsibility Snowflake had, vs its clients, is not clear (for example, seems they only notified all other customers May 23, not immediately when they suspected the first compromise). Reducing the analysis to pure "victims" and "criminals" is not accurate. When you say "criminally prosecute those whose negligence made this possible", it wouldn't make sense to prosecute all of Snowflake's clients but not Snowflake too. Or only the cybercriminals but not Snowflake or its clients.
[0]: The Ticketmaster Data Breach May Be Just the Beginning (wired.com) https://news.ycombinator.com/item?id=40553163
[1]: 6/24 Snowflake breach snowballs as more victims, perps, come forward (theregister.com) https://news.ycombinator.com/item?id=40780064
A better analogy is not a bank defending against an army, but a bank forgetting to install doors, locks, cameras, or guards. _Yes_, the criminals are the root cause, but human nature being what it is it's negligent to leave a giant pile of money and data completely unprotected.
You picked the wrong point to counter with. The real problem is that the corporate decision-makers who bear the most responsibility will never be held accountable. They will always be able to shift blame to someone below them in the corporate hierarchy.
I am sure LEOs will do what they are paid to do and catch criminals. In the meantime, I would like to focus on the service provider's failure to provide a reasonable level of privacy.
I am blaming a corporation, because for most of us here it is an ongoing, recurring pattern that we have recognized and corporations effectively codified into simple deflection strategy.
Do I assume the corporation messed up? Yes. But even if I didn't, there is a fair amount of historical evidence suggesting that security was not a priority.
<< Put another way: you trust a classical bank with your money, expecting it to secure that money from criminals.
Honestly, if the average person saw how some of those decisions are made, I don't think a sane person would.
<< But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.
Ahh, yes. Poor corporation has become too big of a target. Can you guess my solution to that? Yes, a smaller corporation with a MUCH smaller customer base and footprint, so that even if the criminal element manages to squeeze through those defenses that the corporation made such a high priority (so high), the impact will be sufficiently minimal.
I have argued for this before. We need to make hoarding data a liability. This is the only way to make this insanity stop.
So what is Snowflake normally doing with all that AT&T data? Redistributing it to "marketing partners"? Apparently. Snowflake's mission statement, from their web site:
"Our mission is to break down data silos, overcome complexity and enable secure data collaboration between publishers, advertisers and the essential technologies that support them."
So this was not, apparently, a break-in to the operational side of AT&T. Someone unauthorized got hold of data they were already selling to marketers. Is that correct?
[x] Objective Achieved
"Alternate credit scoring, hyper-targeted marketing and more... an emerging trend of companies building partnerships with telecoms to power use cases across multiple industries." was the blurb for the unit Snowflake specially set up for Telco data in early 2023 touting "location data", but this product is not aimed at the telco's use-case; coincidentally this was also around the time Snowflake was touting integration with GenAI.
(It's not "competitor analysis" either, because if it was they would have obscured the 68m former phone numbers to prevent abuse by direct-marketing.)
[0]: "Unlocking the Value of Telecom Data: Why It’s Time to Act" https://www.snowflake.com/blog/telecom-data-partnerships/
Given the nature of the data in the database and the platform it was stored in, it seems extremely likely this data was not meant to be used internally by AT&T but was instead meant to be used externally by either a 3rd party partner (like advertisers and consumer analytics partners) or a government agency.
In other words, if it were my data in this datastore, I’d consider my data as already having been “leaked” when it went into the store - the issue here appears to be that this data was “leaked” to the wrong people from the perspective of AT&T and the FBI.
Doesn't matter if it's AT&T, a bank, or the government. Never under any circumstances can you expect anything sensitive to stay private. This used to be taught as gospel when introducing kids to the internet - it's crazy how much things have changed in 20 years.
https://www.marketwatch.com/investing/stock/T
https://www.marketwatch.com/investing/stock/SNOW
I would bet any effects you’re seeing in stocks is unrelated to this news.
This is precisely why breaches keep happening and will keep happening. It costs money to implement security, and there's no benefit to spending that time and money since there are no consequences.
Businesses do not spend money unless it will make them money or save them money.
There needs to be a hefty federal fine on a per-affected-user basis for data breaches. Also a federal fine for each day a breach is unreported.
That money should go into a pool which can be accessed by people who have their identity stolen.
https://www.comparitech.com/blog/information-security/data-b...
"Stocks of breached companies on average underperformed the NASDAQ by -3.2% in the six months after a breach disclosure"
That said, it's not clear what the long term impact is on stock price (if there is any).
Also as corroboration here's MarketWatch: "AT&T’s stock slides 3% after company discloses hack of calls and texts" [https://www.marketwatch.com/story/at-ts-stock-slides-2-9-aft...]
We need regulations with massive fines, class action lawsuits (a ban on arbitration clauses), and maybe automatic minimum level compensation to those customers.
This might also explain why there's little visible effect on other cloud database services either. After all, the attack is pretty simple and potentially affects any cloud database that allows access from the Internet.
In this case, Snowflake was also the cause for the Ticketmaster and Lending Tree breaches according to the article so…
real lack of trust in Snowflake now.
Credential rotation, SSO, PrivateLink or IP allowlists all should be used with PII.
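Of the controls listed, an IP allowlist is the simplest to picture. A minimal sketch using Python's stdlib `ipaddress` module, with made-up example networks (the real control would live in the database platform's network policy, not application code):

```python
import ipaddress

# Hypothetical allowed networks: a corporate VPN range and one office egress block.
ALLOWED = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_allowed(client_ip):
    """Reject any connection whose source IP is outside the allowlist."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED)

print(is_allowed("10.1.2.3"))      # True  -- inside the VPN range
print(is_allowed("198.51.100.7"))  # False -- stolen creds alone are useless here
```

Had a control like this been mandatory, a password stolen by info-stealer malware on some random machine would not have been enough to reach the data.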
class action lawsuit just going to result in everyone’s $2 being given as a free trial of a ringtone addon from the early 2000s that converts into more recurring revenue
There's really no reason why any service providers should save this stuff in the first place, and it isn't hard to fix with legislation. Just make it illegal to even keep.
[1] https://curia.europa.eu/juris/document/document.jsf?text=&do...
On the contrary, many European countries have mandatory data retention periods that meet or exceed the 6 months of records that were supposedly included in this breach.
Germany has one of the shorter retention periods at 10 weeks, but they still have to keep those records.
Saying that it would be illegal to collect these records in Europe is patently false, and furthermore the record collection is generally mandated for a period of time that depends on the country.
> There's really no reason why any service providers should save this stuff in the first place,
Billing. You need phone records for billing purposes. You need to keep them for a while longer because people will dispute their bills all the time.
No they don't, because it's "suspended" by the federal network agency until courts are through with it. In fact they suspended it three days before the law would've come into force and thus it never was. The current state of affairs is this: the retention was ruled incompatible with German and European law in an injunction and it does not look like that is about to change.
There's a similar picture in many EU countries: There's a law on the books, but it can't be enforced/is being challenged/was already invalidated/is being rewritten/repeat.
Also note that to courts location data/phone records is a different issue than retaining information that merely associates an IP address with the subscriber that used it at some time (knowing which subscriber has what phone number is not an issue either, after all). The latter was ruled to be unproblematic by the ECJ just this year, while for the former the latest ruling is what I outlined earlier.
Besides Germany, some other countries that had data retention laws that were ruled unconstitutional are: Belgium, Bulgaria, Czech Republic, Cyprus, Romania, Slovenia, Slovakia.
In many other places that currently do have mandatory retention in force, it is being challenged.
> Saying that it would be illegal to collect these records in Europe is patently false
It is illegal to mandate in such a manner. There's a difference.
> Billing. You need phone records for billing purposes. You need to keep them for a while longer because people will dispute their bills all the time.
You must've not read the part where I said "beyond what is necessary to operate". Telekom for instance is doing just fine deleting phone records after 80 days - or within 7 days if you use a flat-rate and they're not relevant to billing.
There are many reasons! Most of them are simply contrary to how folks think business should operate. Unfortunately the US seems to value "disruption" over "customer protection", so legally protecting data is unpopular on the hill.
Some service providers in Europe don't even want to save any data. The linked judgement above was the German state suing Telekom, which didn't want to save that data, and losing. Given the state of affairs, the question of "illegal or not" doesn't really come up as much. At least I'm not aware of any high profile judgements.
Besides Telekom, which has always tried to minimize the data it keeps, to the point of fighting it all the way to Europe's highest courts, most other telcos don't really care and pick whichever middle ground is available between "must" and "must not". Whatever is least likely to get them into trouble. Right now that just happens to mean "save little".
* It's not stated explicitly in article 2, but the German constitutional court decided that it follows from those personal rights: https://en.wikipedia.org/wiki/Informational_self-determinati...
The court case I linked is evidence of that. The German state wanted Telekom to save more data, but the telco refused and won in court.
The NSA’s power is in being boring and unnoticed. This could be a revenue rider.
Just saying.
My concern was the complete opposite - I assume that my social security number and address are already for sale for a fraction of a cent somewhere, bundled with 10,000 other identities. But if money gets stolen, that's a whole rigamarole, with banks wringing their hands and saying "identity theft" as if that clears them from any responsibility.
Start issuing multi billion dollar fines for these breaches and suddenly companies are invested in security.
Unfortunately, with government agencies getting defanged by a recent SCOTUS ruling, that’s likely not possible.
Have to rely on civil court to issue fines now (ie, class action lawsuits).
Good cyber security people are expensive because they are highly skilled: they typically need to have been a software engineer to understand software architectures and have intuition about them, have spent significant time sharpening their skills at hacking by participating in CTFs, and have probably also spent significant time doing reverse engineering and have a few CVEs attributed to them. (Why are these skills needed? Because they are the skills needed by the red team. Every company that takes cyber security seriously will have a red team.) Now tell me whether these people are worth $500,000 per year.
Where are you getting your information from? Fixing the security negligence I hear about isn't even a big ask. Huge companies neglect to do basic things like "don't store your passwords in plain text" or "make sure you salt and hash your passwords".
I don't think it's fair to say cyber security teams are failing if companies are blatantly doing the worst and most obviously wrong things on the daily at the highest levels.
The fact we don't have decent legislation to materially punish incompetent organizations is beyond absurd.
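For reference, the "salt and hash" baseline is a few lines with a modern memory-hard KDF. A minimal sketch using Python's stdlib `hashlib.scrypt` (the function names and cost parameters here are illustrative choices, not a prescription):

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, digest); store both, never the password itself."""
    salt = os.urandom(16)  # unique per user, defeats precomputed rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("letmein", salt, digest))  # False
```

That this is a stdlib one-liner territory, yet still skipped at the highest levels, is exactly the point about negligence versus capability.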
Security is not a concern. There is no real incentive to change the status quo. Make them pay for monitoring indefinitely.
In my country you can calculate our own national ID (a mix of date of birth, an auto-incrementing number for each birth that day, plus one checksum digit), and if you do/have any kind of personal business, your personal tax number has to be written everywhere: on every receipt you hand out or anything you buy as a business.
Somehow, knowing that the first boy born today will have an ID number of 120702450001X (too lazy to calculate the checksum, but the algorithm is public) doesn't help anyone with anything bad.
1) SSN was not intended as a national ID, but it so happened to fit the shape of one, in that almost everyone has one and they're unique.
2) It has never been possible to institute an intentional national ID system in the US for political reasons
That is the recipe for the problem we have now: strong demand for a national ID for many business purposes, the existence of something that looks a lot like (but is an imperfect form of) a national ID, and the refusal to create a proper national ID have naturally led to a de facto system of abusing the SSN as a national ID, with everyone being a little annoyed and sketched out about it but putting up with it anyway for lack of alternatives.
Incidentally, did you know anyone can generate a valid new EIN (which is a lot like an SSN, and can be used where an SSN can be used for some but not all purposes, specifically filing taxes among others) at this page https://www.irs.gov/businesses/small-businesses-self-employe... ? This isn't legal advice, I'm not a lawyer, and I don't know in what situations you personally would be legally permitted to use this (it's meant for businesses, absolutely not some kind of personal alias). But technologically, it's just an honor system: anyone can certify they need and are entitled to a new EIN, and the IRS web site will provide a new unique one. I don't think you even need a legal entity, since you don't need a legal entity to run a business in the US.
It's even worse. Only post-2011 (IIRC) births have a randomized SSN. So everyone over the age of 13 still has an old-fashioned sequential SSN, where XXX-YY-ZZZZ is determined by:
1) XXX is the code for the office that issues your card. Can be guessed precisely and accurately by knowing birth location. For example, I can guess what region of the US you were born in (or lived in when you immigrated) by the first digit. 0 or 1 is probably northeast. 4 or 5 is probably near Texas. 7 might be near Arkansas. Etc.
2) YY-ZZZZ is sequential by date! So by knowing just the birth date, it can be guessed to within a range. In practice, this means it's easy to guess YY alone, but harder to get all 4 digits of ZZZZ.
3) For some stupid reason it got popular to print SSNs with all but the last four digits masked. This is horribly bad because those four are ACTUALLY THE MOST SECRET PART! It's the only part that might not be guessable. But since it's common to be more lax with securing them... it is super easy to recover the full SSN if you find a piece of paper that says something like:
JOHN SMITH
123 Main St
Alabama City, AL 76543
In ref acct: XXX-XX-1234 (2001-03-14)
Dear Mr Smith,
Your account is overdrawn. Have a nice day.
Thinking of you,
The Bank
It also means that if someone is personally known to me, even vaguely, I may be able to reconstruct their social security number from nothing but a scrap of paper that has just the last four digits, if I can guess approximately where and when they were born or first entered the US. If I'm in a situation where I can try several guesses, it's even easier.
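Back-of-the-envelope, the asymmetry the comment describes looks like this (the guess counts below are purely illustrative assumptions, not real SSA statistics):

```python
def candidate_count(area_guesses, group_guesses, serial_guesses):
    """Size of the remaining SSN search space, given what's still unknown."""
    return area_guesses * group_guesses * serial_guesses

# Attacker knows nothing: full 9-digit space (ignoring invalid ranges).
print(candidate_count(1000, 100, 10000))  # 1000000000

# Attacker has the "masked" form XXX-XX-1234 plus rough birth place/date:
# last four known (1 guess), area narrowed to ~5 codes, group to ~3 values.
print(candidate_count(5, 3, 1))  # 15
```

Fifteen candidates is well within the number of tries many verification systems allow, which is the whole problem with masking everything except the hardest-to-guess digits.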
The problem is banks/financial services do a piss-poor job validating identity when issuing credit/opening accounts. "Oh, you provided an address, an SSN, and [non-random, easily discoverable personal fact]! Sure, here's a CC with a $150k limit!"
It's not the leak that's the problem; it's the ease with which that leaked data is used to either obtain fraudulent credit or access accounts.
I don't have a good answer, because at some point, a financial institution needs to trust people to do business. Customer loses their phone, so MFA doesn't work, ok, now what? I guess the customer needs to have one-time use recovery tokens saved somewhere that can't be lost? How many people do that (not nearly enough)? How many banks even issue those tokens? And what if the token store gets hacked? Now you're really fucked.
It's also treated as evidence of who you are, but it isn't tied to identification like an ID is. These institutions use it without ever truly validating it.
It's similar to how records fraud can occur - people can record anything to the local registrar office, including fraudulent documents, without any checks. Once it's registered, it becomes evidence against the real owner. It's really messed up.
What about people who have called suicide helplines, abortion clinics, loan servicing, etc...
With the numbers available, that will be possible to find out...
Go Jackets.
(Payouts are expected to drop in about ten years if no action is taken, but that doesn’t render the SSA irrelevant or cause it to suddenly collapse and shut down, so I assume you mean something else)