Just moved our infra from GCP to AWS. Kubernetes clusters, LB, storage, lambdas, KMS and all of it.
Google runs their tech stack as if it's a startup that exists to build CVs. Everything is immature: tons of hacks, undocumented features. If you are on their k8s there is a constant stream of upcoming versions and features that force you to revisit the key hacks you put in your infra to work around their shortcomings. Our infra team keeps tinkering with our infra and it never ends. It's 50:50. 50% of the time making sure we are prepared for their churn and 50% on our own ambitious infra plans. Good luck with that.
With AWS our bill is 60% of what GCP used to be running 3 k8s clusters.
AWS support is so nice, you can't believe it.
Nah, I don't trust Google with anything. It's a scam. Google's support is horrendous. They refer you to idiots that drag you through calls until your will to live dies. And then you're back at the mercy of some lost engineer who may comment on a GitHub issue you opened 20 days ago. We have a bug reported back in 2020 that got closed recently without any action because it went stale and the API changed so much it doesn't really matter. It's that bad.
The billing day is a monthly reminder you're paying entitled devs to do subpar work other companies do a lot better.
Interesting: if you swap GCP and AWS in your post, that's exactly my experience.
I wonder what makes us different. I work in Europe on video games; AWS's handling of me when I was at Ubisoft left a really sour taste. When I moved to Tencent/Sharkmob I tried really hard to love AWS, as it was the de facto industry standard, and instead I was left with the feeling that most of it is inconsistent garbage papered over with lambda functions. I referred to these weird gotchas as "3am topics": things I don't have the mental capacity to deal with at 3am. I convinced the studio to switch to GCP, which, incidentally, they are still extremely grateful to me for.
It's amazing how people complain about GCP. We run a massive deployment across 100+ regions, cross-cloud on GCP, Azure, and AWS, and oh boy. GCP has good support if you are big enough. Azure, though, which has a much bigger share than GCP, is horrendous. Absolutely garbage all around. Good luck ever getting anyone in engineering even if you are paying for support. AWS on the other hand: amazing. We have Enterprise Support, so those guys are in our Slack channel. The TAMs are amazing. Need to get hold of someone on Route53? No problem, they're on a call this week. Feature request for EKS? OK, talk to the Product Manager this afternoon.
Can you give Azure specifics? As you know, Azure has a massive offering.
My experience has been the opposite though not without issues, Azure has some of the best corporate and security features of any cloud and it's only getting better. The zero trust model fits in so nicely with their identity platforms it's a sight to behold compared to other cloud providers which likely use some form of AAD or AD DS anyway.
Their support is responsive and they seem to know what they're talking about. (AKS)
This reminds me of the fond days of having weekly customer calls. We develop AWS services, and we answer our customer-support calls directly. No middleman, just techies to techies. We made promises to customers on the fly, and customers sometimes project-managed us.
We have an old "quiet part out loud" corporate story. It's about how one arm of Google used our service and wondered why it had so much downtime, only for us to point at their GAE arm and say "when they're down, we're down". They went and talked to GAE and, funny enough, were able to correlate the downtime they observed with GAE downtime.
GAE uptime improved, for a little while. Yeah, we're on AWS now too.
At one point, Google reached out to me to try and tempt us over from AWS. I had bad experiences with Google support in the past, but liked their AI stuff and was keen to give them another go.
We booked a follow-up call in the calendar, and I spent a good amount of time preparing my notes and requirements for the meeting... and then nobody on their side showed up or contacted me again.
Much of the time GCP feels like a science project, not a real business. AWS (and Azure) seem to be driven by customer requests, unlike Google, which feels very engineering-centric.
Which is on brand with Google. They have no problem launching stuff, and no problem killing stuff. But man, then just get out of the cloud business and focus on what you're good at.
It's actually sort of ridiculous. AWS has the best support I have ever interacted with. I mean, our org certainly pays enough for it but it's so completely unusual in tech, or really any sector to get great support even when you're paying for it.
I worked in a digital team 4 years back building voice-channel apps for our customers on both Amazon Alexa and Google Dialogflow. Alexa's NLP engine was less sophisticated: we had to give it hundreds of prompts and intents where Dialogflow's engine required a handful for the same thing. But when it came to integration with backend APIs, and to support, Alexa was far ahead. Despite us having Dialogflow Enterprise, Google support would suggest asking on StackOverflow. Amazon support, on the other hand, was excellent. We needed mTLS to the backend APIs; Amazon supported it, as they understood enterprise. Google just shooed us away; their support wouldn't even escalate this.
I don't know. I like GCP. I have been in an Azure centric corporation for close to two years now and I dearly miss GCP almost every day.
My team has a sort of a sandbox where we can use almost any Azure product we want (our IT is supportive and permissive as far as that sandbox goes, which is a blessing), but even then it's just painful in comparison.
There is no way this is true. Only explanation is you work for AWS :-).
GCP's strength is its cost. Yes, maybe the support could be better. But can you explain what "hacks" you are talking about? And the claim that k8s (from Google) is better on AWS than on GCP is absolutely false.
Our reason for going all in with GCP was the k8s. We've been using GCP for 2+ years.
The trouble we have is with stability, and with so many features being constantly rolled out.
Our experience was that K8s cost more on GCP than AWS.
Just on load balancers alone, there are tons of tricks specific to GCP's implementation. And we needed a few extra load balancers because we couldn't run all the features we wanted on 1-2 per cluster.
For example, we had a 3rd party that required all our requests to always originate from, and receive responses on, a fixed IP address. We could only pick one: not a range, not a list. This was a hard requirement, and the service was important, so we had to do it.
It took our team several days to find out how to do it using online documentation and support. Tech support was useless. One guy on our team spent 2 days on the phone with a paid, local GCP implementation partner trying to get this problem sorted. Nothing came out of it other than being pitched, on our dime, a lot of services and architecture we didn't need. Eventually we figured it out on our own. I don't even remember this coming up when we transitioned to AWS.
Matches my experience. GCP has many services that are better than AWS's, but I am not going to run production workloads with them after 2 years of experience at a previous company. There are so many undocumented quirks that you could often find a better solution from some random person on StackOverflow than from the highest-tier paid support.
That was my experience, too: a couple of things which were better than AWS, but this constant stream of paper cuts from hitting all the problems which weren't cool enough to get someone promoted.
I generally like GCP, however their sales and customer support just aren't any good. And some services like Vertex AI are extremely buggy while it's hard to actually report these bugs.
I think Google Cloud needs someone like Jeff Bezos as their head: Look what your customers actually want and need and understand their requirements. And they usually want good customer support and want a competent key account manager as well.
When we were looking to migrate our analytics database from on-premise to a cloud alternative we were looking at BigQuery and Snowflake. BigQuery is a great product and we were already deeply invested in GCP as well. However the GCP sales team just couldn't sell BigQuery - they just don't know what old corporations want to hear in a sales pitch. So we went with Snowflake in the end. Not because it's the better product but because their sales team is better.
I'm not sure if the cloud business is actually a priority at Google. If it is then I think they don't understand the mistrust Google is facing when it comes to stable long term support of their products.
The horror stories of Google support, across all of their products, is enough for me to never trust GCP. Even if someone told me today "GCP is the exception, they have great support" I probably wouldn't care - they are so organizationally incapable of providing good support that, even if they did so today, I wouldn't believe that it could last.
Support-wise, GCP is a joke run by entitled people. I had an issue some time ago with a VPN, and after doing a lot of troubleshooting and having them agree the problem was on their end (packets would go into their VPN gateway from the VPC, nothing would come out), the solution was for me to update the configuration on my end to work around whatever they did, because "it is how is going to be"...
> According to the Amazon Prime Day blog post, DynamoDB processes 126 million queries per second at peak. Spanner on the other hand processes 3 billion queries per second at peak, which is more than 20x higher, and has more than 12 exabytes of data under management.
This comparison seems not exactly fair? Amazon's 126 million queries per second was purely Amazon's own services serving Prime Day generating that load on DynamoDB, not all of AWS, is my read.
What would perhaps have been a fairer comparison is to share the peak load generated by Google's own services running on Cloud Spanner, not the sum of all Spanner usage across all of GCP and all of Google (Spanner on non-GCP infra).
I will say that it would show a massive vote of confidence to say that Photos, Gmail and Ads heavily rely on GCP infra: that would be brand new information for me! It would add confidence to learn more about how they use it, and whether Cloud Spanner is on the critical path for those services.
What is confusing, however, is how "Cloud Spanner" is used consistently in this article... except when talking about Gmail, Ads and Photos, where it's stated that "Spanner" is used by these products, not "Cloud Spanner". As if they were not using the Cloud Spanner infra, but their own. It would help to know which is the case, and what the load on Cloud Spanner is: not on Spanner running on internal Google infra that is not GCP.
At Amazon, practically every service is built on top of AWS - a proper vote of confidence! - and my impression was that GCP had historically been far less utilised by Google for their own services. Even in this post, I'm still confused and unable to tell if those Google products listed use Cloud Spanner or their own infra running Spanner.
> DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 126 million requests per second.
Amazon was very, very clear on this. For Google to use that number without the caveat is just completely underhanded and dishonest. Whoever wrote this is absolutely lacking in integrity.
I used DynamoDB as part of the job a few years ago and never got single-digit millisecond responses: it was 20ms minimum and 70+ on a cold start, though I can accept that optimising Dynamo's various indexes is a largely opaque process. We had to add hacks like setting the request timeout to 5ms and keeping the cluster warm by submitting a no-op query every 500ms to keep it even remotely stable. We couldn't even use DAX because the Ruby client didn't support it. At the start we only had a couple of thousand rows in the table, so it would legitimately have been faster to scan the entire table and do the rest in memory. Postgres did it in 5ms.
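That keep-warm hack generalizes beyond Ruby. Here is a rough Python sketch of the pattern; the client object and its no-op `query()` are stand-ins for illustration, not a real AWS SDK call:

```python
import threading
import time

class KeepWarm:
    """Fires a cheap no-op query at a fixed interval so the
    connection pool (and any server-side caches) stay warm."""

    def __init__(self, client, interval=0.5):
        self.client = client          # anything with a .query() method
        self.interval = interval      # 500 ms, as in the hack above
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            try:
                self.client.query()   # no-op query; result is discarded
            except Exception:
                pass                  # a warmer must never crash the app
            self._stop.wait(self.interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

With boto3 the no-op would typically be a cheap `get_item` on a known key, with aggressive timeouts configured via `botocore.config.Config`.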
If Amazon said they didn't use DAX that day I would say they were lying.
The average consumer or startup is not going to squeeze out the performance of Dynamo that AWS claims to have achieved.
In fact, it might have been fairer in Ruby if the client didn't hard-code the net client (Net::HTTP). I imagine performance could have been boosted by injecting an alternative.
They very well know that people don't read sh* anymore. Just throw numbers there, PowerPoint them and offer an "unbiased" comparison where Google shines - buy Google.
Worst case scenario, it's Google you're buying, not a random startup etc.
Just as a hand in the air...Be careful about what you're comparing here. # of API calls over a period of time is...largely irrelevant in the face of QPS. I can happily write a DDOS script that massively bombards a service, but if that halts my QPS then it doesn't matter. So sure, trillions of API calls were made (still impressive in the scope of the overall network of services, I'm not downplaying that), but ultimately, for DynamoDB and Spanner, it's the QPS that mattered to us in terms of comparisons of DB scaling and performance.
We shared some details about Gmail's migration to Spanner in this year's developer keynote at Google Cloud Next [0] - to my knowledge, the first time that story has been publicly talked about.
I tried to find it in this video, but failed. Could you please share a time stamp on where to look?
It’s a pretty big deal if Gmail migrated to GCP-provided Spanner (not to an internal Spanner instance), and it sounds like the kind of vote of confidence GCP and Cloud Spanner could benefit from: might I suggest writing about it? A post is easier to digest and harder to miss than an hour-long keynote video with no time stamps.
And so just to confirm: Gmail is on Cloud Spanner for the backend?
Wow, almost content-free presentation! How obnoxious!
This wasn't the first time Gmail has replaced the storage backend in-flight. The last time, around 2011, they didn't hype it up, they called it "a storage software update" in public comms. And that other migration is the origin of the term "spannacle", because during that migration the accounts that resisted moving from [[redacted]] to [[redacted]] we called barnacles.
> I will say that it does show a vote of confidence to say that Photos, Gmail and Ads use GCP infra,
I'm not sure? I guess I'm mostly not sure what "gcp infra" means there. The blog post says
"Spanner is used ubiquitously inside of Google, supporting services such as; Ads, Gmail and Photos."
But there's google-internal spanner, and gcp spanner. A service using spanner at Google isn't necessarily using gcp. (No clue about photos, Gmail, etc)
Granted, from what I gather, there's a lot more similarity between spanner & gcp spanner than e.g. borg and kubernetes.
Surely in a post about Google Cloud Spanner, all examples mentioned use Google Cloud Spanner? It would be moot listing them as examples if they would not: so my assumption is they are all using GCP infra already for Spanner.
I really want to give Google the benefit of the doubt: but it doesn't help that they did not write that eg Gmail is using "Cloud Spanner." They wrote that it uses Spanner.
Internal Spanner and Cloud Spanner are the same stack. Having those services run on internal infra is more about the legacy tooling needed to shift them than anything to do with performance or ability to handle the load.
>This comparison seems to be not exactly fair? Amazon’s 126 million queries per second was purely for Amazon-related services serving Prime Day generating this on DynamoDB, and not all of AWS is my read.
There's no indication that google is talking about ALL of spanner either? The examples they list are all internal google services, and they specifically say "inside google".
I'm also dubious that even with all of the AWS usage accounted for that DynamoDB tops Spanner if Amazon themselves are only at 126 million queries per second on Prime Day.
> At Amazon, practically every service is built on top of AWS - a proper vote of confidence!
Not only this, but practically most, if not all, AWS services use DynamoDB, including for use cases that are usually not for databases, such as multi-tenant job queues (just search "Database as a Queue" to get the sentiment). In fact, it is really, really hard to use any relational DB inside AWS. A team would have to go through CEO approval to get an exception, which says a lot about the robustness of DDB.
Eh, this isn't accurate. Both Redshift and Aurora/RDS are used heavily by a lot of teams internally. If you're talking specifically about the primary data store for live applications, NoSQL was definitely recommended/pushed much harder than SQL, but it by no means required CEO approval to not use DDB
Edit: It's possible you're limiting your statement specifically to AWS teams, which would make it more accurate, but I read the use of "Amazon" in the quote you were replying to as including things like retail as well, etc.
When I was at AWS, towards later part of my tenure, DynamoDB was mandated for control plane. To be fair, it worked, and worked well, but there were times when I wished I could use something else instead.
> What would have perhaps been a more fair comparison is to share the peak load that Google services running on GCP generated on Spanner, and not the sum of their cloud platform.
Not necessarily about volume of transactions, but this is similar to one of my pet peeves with statements that use aggregated numbers of compute power.
"Our system has great performance, handling 5 billion requests per second" means nothing if you don't break down how many RPS per instance of compute unit (e.g. CPU).
Scales of performance are relative, and on a distributed architecture, most systems can scale just by throwing more compute power.
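The normalization being asked for is one division; the numbers below are made up purely for illustration:

```python
def per_unit_qps(total_qps, instances):
    """Aggregate throughput says nothing until you divide by compute."""
    return total_qps / instances

# A "5 billion RPS" system spread over 100,000 independent nodes...
big = per_unit_qps(5_000_000_000, 100_000)   # 50,000 RPS per node

# ...is per-node slower than a "1 million RPS" system on 10 nodes.
small = per_unit_qps(1_000_000, 10)          # 100,000 RPS per node

assert small > big
```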
Yeah, I've seen some pretty sneaky candidates try that on their resumes. They aggregate the RPS for all instances of their services even though those instances don't share any dependencies or infrastructure; they're just independent instances/clusters running the same code. When I dug into those impressive numbers and asked how they managed coordination/consensus, the truth came out.
True, but one would hope that both sides in this case would be putting their best foot forward. Getting peak performance out of right sizing your DB is part of that discussion. I can't imagine AWS would put down "126 million QPS" if they COULD have provided a larger instance that could deliver "200 million QPS", right? We have to assume at some point that both sides are putting their best foot forward given the service.
Put yourself in the shoes of who they're targeting with that.
Probably someone dealing with thousands of requests per second, but who wants to say they're building something that can scale to billions of requests per second to justify their choices, so there they go.
it does depend on what you mean. By 2020/2021, effectively everything was on top of AWS VMs/VPC and perhaps LBs at that point? Most if not all new services were being built in NAWS.
And for many projects, Postgres is still cheaper than both. Having used both, I would much, much rather do the work to fit my project in Postgres/CockroachDB than use either Spanner or DynamoDB, which have WAY more footguns. Not to mention sudden cost spikes, vendor lock in, and god knows what else.
AWS and GCP (and Azure, and Oracle cloud, and bare Kubernetes via an operator, and...) support Postgres really well. Just...use Postgres.
> And for many projects, Postgres is still cheaper than both.
ok? and sqlite3 in memory is even cheaper than postgres!
if you can use (and support correctly) postgres then you should use it. obviously there's no point using a globally scalable P-level database if you can just fit all your data on one machine with postgres.
Except for projects for which NoSQL is a better fit than a RDBMS, no?
If I'm writing a chat app with millions of messages and very little in the way of "relationships", should I use Postgres or some flavor of NoSQL? Honest question.
Postgres. NoSQL databases are specialized databases. They are best-in-class at some things, but that specialization generally came at great cost to everything else. DynamoDB is an amazing key-value store, but severely limited at everything else. Elasticsearch is amazing for search and analytics, but severely limited at everything else. Specialized engines on the SQL side are also great at what they do: Spark, say, is a columnar engine with amazing capabilities for massive datasets where you need lots of cross-joins, but it traded latency for throughput and horizontal scalability, which severely limits its ability to act in other roles and restricts what you can do with it.
The super-power of Postgres is that it supports everything. It's a best-in-class relational database, but it's also a decent key-value store, it's a decent full-text search engine, it's a decent vector database, it's a decent analytics engine. So if there's a chance you want to do something else, Postgres can act as a one-stop-shop and doesn't suck at anything but horizontal scaling. With partitioning improving, you can deal with that pretty well.
If you're writing fresh, there is basically no reason not to use Postgres to start with. It's only when you already know your scale won't work with Postgres that you should reach for a specialized database. And if you think you know because of published wisdom, I'd recommend you set up your own little benchmark, generate the volume of data you want to support, and then query it with Postgres and see if that is fast enough for you. It probably will be.
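The benchmark being suggested really can be tiny. A sketch using Python's built-in sqlite3 as a stand-in (for Postgres you'd swap in psycopg and your real schema; the table and column names here are invented):

```python
import sqlite3
import time

def bench(n_rows, n_queries=1_000):
    """Generate n_rows of fake chat messages, then time point lookups."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE msgs (id INTEGER PRIMARY KEY, chat_id INT, body TEXT)")
    db.executemany(
        "INSERT INTO msgs (chat_id, body) VALUES (?, ?)",
        ((i % 1000, f"message {i}") for i in range(n_rows)),
    )
    db.execute("CREATE INDEX idx_chat ON msgs (chat_id)")

    start = time.perf_counter()
    for i in range(n_queries):
        # The query shape your app will actually run: latest 50 per chat.
        db.execute(
            "SELECT id, body FROM msgs WHERE chat_id = ? ORDER BY id DESC LIMIT 50",
            (i % 1000,),
        ).fetchall()
    return (time.perf_counter() - start) / n_queries  # seconds per query

# e.g. bench(1_000_000) to see per-query latency at the volume you expect
```

If the number that comes back is fast enough for your product, you've just saved yourself a distributed database.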
Golden Rule of data: Use PostgreSQL unless you have an extremely good reason not to.
PostgreSQL is extremely good at append-mostly data, like a chat log, and has powerful partitioning features that allow you to keep said chat logs for quite some time (with some caveats) while keeping queries fast.
Generally speaking though PostgreSQL has powerful features for pretty much every workload, hence the Golden Rule.
Millions is tiny. Toy even. (I work on what could be called a NoSQL database, unfortunately "NoSQL" is a term without specificity. There's many different ways to be a non-relational database!)
My advice to you is to use PostgreSQL or, heck, don't overthink it, SQLite if it helps you get an MVP done sooner. Do NOT prematurely optimize your architecture. Whatever choice results in you spending less time thinking about this now is the right choice.
In the unlikely event you someday have to deal with billions of messages and scaling problems, a great problem to have, there are people like me who are eager to help in exchange for money.
Lots of people like to throw around the term "big data" just like lots of people incorrectly think that just because google or amazon need XYZ solution that they too need XYZ solution. Lots of people are wrong.
If there exists a motherboard that money can buy, where your entire dataset fits in RAM, it's not "big data".
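The back-of-envelope math behind that rule, applied to the chat-app example above (message size and overhead factor are rough assumptions):

```python
def fits_in_ram(n_messages, avg_bytes=512, overhead=2.0, ram_gb=512):
    """overhead covers indexes, row headers, fragmentation (rough guess)."""
    need_gb = n_messages * avg_bytes * overhead / 1024**3
    return need_gb, need_gb <= ram_gb

# 10 million half-KB messages: under 10 GB with overhead. Not big data.
need, fits = fits_in_ram(10_000_000)
assert fits

# Even 1 billion messages is ~1 TB: still buyable as a single box.
need, fits = fits_in_ram(1_000_000_000, ram_gb=2048)
assert fits
```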
I've found it's pretty easy to massage data either way, depending on your preference. The one I'm working on now ultimately went from postgres, to mysql, to dynamo, the latter mainly for cost reasons.
You do have to think about how to model the data in each system, but there are very few cases IMO where one is strictly 'better.'
Either way can work. Getting to millions of messages is going to be the hard part, not storing them.
As with all data storage, the question is usually how do you want to access that data. I don't have experience with Postgres, but a lot of (older) experience with MySQL, and MySQL makes a pretty reasonable key-value storage engine, so I'd expect Postgres to do ok at that too.
I'm a big fan of pushing the messages to the clients, so the server is only holding messages in transit. Each client won't typically have millions of messages or even close, so you have freedom to store things how you want there, and the servers have more of a queue per user than a database --- but you can use a RDBMS as a queue if you want, especially if you have more important things to work on.
This is going to feel like a non-answer: but if you need to ask this question in this format, save yourself some great pain and use Postgres or MongoDB, doesn't really matter which, just something known and simple.
Normally you'd make a decision like this by figuring out what your peak demand is going to look like, what your latency requirements are, how distributed are the parties, how are you handling attachments, what social graph features will you offer, what's acceptable for message dropping, what is historical retention going to look like...[continues for 10 pages]
But if you don't have anything like that, just use something simple and ergonomic, and focus on getting your first few users. There's a long gap between when the simple choice will stop scaling and those first few users.
just migrated off of PG to DDB as the main DB for my application (still copying data to SQL for analytics). Working with distributed functions and code hosted on Lambdas, the connection management for SQL became a nightmare, with dropped requests all over the place.
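For anyone hitting the same wall but staying on SQL: the usual mitigation is to open the connection at module scope so warm Lambda invocations reuse it (and/or put RDS Proxy in front to pool connections). A sketch, with a hypothetical `connect()` standing in for a real driver call:

```python
# Hypothetical connection factory; in real code this would be
# something like psycopg.connect(os.environ["DATABASE_URL"]).
def connect():
    return object()

# Module scope: created once per Lambda container, reused while warm.
_conn = None

def get_conn():
    """Lazily open one connection per container, reuse it afterwards."""
    global _conn
    if _conn is None:
        _conn = connect()
    return _conn

def handler(event, context):
    conn = get_conn()   # no per-invocation connect/disconnect churn
    # ... run queries on conn ...
    return {"ok": True}
```

This avoids exhausting the database's connection limit when hundreds of containers spin up, which is the usual source of those dropped requests.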
But that's kind of a moot point. I mean, if you're even looking at the likes of DynamoDB or Spanner, it's because you need the scale of those engines. PostgreSQL is fantastic, and even working for Google, I 100% agree with you. Just use PG...until you can't. Once you're in the realm of Spanner and DynamoDB, that's where this discussion becomes more of a thing.
Not necessarily true. DynamoDB on demand pricing is actually way cheaper than RDS or EC2 based anything for small workloads, especially when you want it replicated.
Postgres and Spanner do different things, in different ways, with different costs, risks, and implications. You could "just use" anything that is completely different and slightly cheaper. You could use a GitHub repository and just store your records as commits, for free, that's plenty cheap and works for small projects. But not really the same thing, is it?
My point is that I've seen very, very few situations (I can think of two in my entire career so far) where a "hyperscale NoSQL database" was actually the right choice to solve the problem. I find that a lot of folks turn to these databases for imagined scale needs, not actual hard problems that need solving.
It's not purely a matter of cost, right? Say you want or need a highly available, high performance distributed database with externally consistent semantics. Are you going to handle the sharding of your Postgres data yourself? What replication system will you use for each shard? How will you ensure strong consistency? Will you be able to do transactions across shards? These are problems that systems like Spanner, CockroachDB, etc solve for you.
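To make the DIY burden concrete: the first item on that list, routing rows to shards, fits in a few lines, and everything after it doesn't. A hypothetical sketch (the DSNs are invented; resharding, failover, and cross-shard transactions are exactly what it does not solve):

```python
import hashlib

class ShardRouter:
    """Maps a key to one of N Postgres DSNs by stable hash.
    What it does NOT solve: resharding without downtime, per-shard
    replication/failover, or transactions spanning two shards."""

    def __init__(self, dsns):
        self.dsns = list(dsns)

    def shard_for(self, key: str) -> str:
        # sha256 rather than hash() so routing is stable across processes
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.dsns[h % len(self.dsns)]

router = ShardRouter([
    "postgres://db-0.internal/app",   # hypothetical DSNs
    "postgres://db-1.internal/app",
    "postgres://db-2.internal/app",
])

# Same key always lands on the same shard...
assert router.shard_for("user:42") == router.shard_for("user:42")
# ...but add a fourth shard and most keys move. That's where the
# real engineering starts, and what Spanner/CockroachDB sell you.
```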
Just curious, why would distributed be a design requirement? Is individual machine failure likely in AWS/GCP? The only failures I have seen are region-level issues, which Spanner or Dynamo don't help with AFAIK.
It's the best, but it's far from perfect. The default isolation level isn't serializable, and turning on `serializable` mode makes it very slow. Spanner is always ACID... but always slow.
I know the spanner marketing blurb says you can scale down etc. But I think in practice spanner is primarily aimed at use cases where you'd struggle to fit everything in a single postgres instance.
Having said that I guess I broadly agree with your comment. It seems like a lot of people like to plan for massive scale while they have a handful of actual users.
I said this in another comment, but I have seen _two_ applications in my career that actually had a request load that might warrant something like one of these databases. One was an application with double digit million MAU and thousands of RPS on a very shardable data set, which fit Spanner's ideal access pattern and performance profile pretty well, but we paid an absolute arm and a leg for the privilege and ended up implementing a distributed cache in front of Spanner to reduce costs. The other just kept the data set in memory and flushed to disk/S3 backup periodically because in that case liveness was more important than completeness.
In the first case, the database created as many problems as it solved (which is true of any large application running at scale; your data store will _always_ be suboptimal). A fancy, expensive NoSQL database won't save you from solving hard engineering problems. At smaller scales (on the order of tens-hundreds of RPS), it's hard to go wrong with any established SQL (or open source NoSQL if that floats your boat) database, and IMO Postgres is the most stable and best bang for your engineering buck feature wise.
My team bought the "scale down" thing and got bit.
Using Spanner is giving up a lot for the scalability, and if you ever reach the scale where a single node DB doesn't make sense anymore, I don't know if Spanner is still the answer, let alone Spanner with your old design still intact. For one, Postgres has scaling options like Citus. Or maybe you don't need a scalable DB even at scale, cause you shard at a higher layer instead.
I've only kicked the tires, but https://neon.tech is a pure hosted Postgres play. I'd be curious to hear if anyone has used them for a real project, and how that went.
I mean sure, NoSQL gives you more opportunities to screw stuff up because it's doing less for you. But it can be a reasonable tradeoff in some scenarios anyway
Sure, You can compare Cloud SQL vs Cloud Spanner and RDS vs Dynamo, but it makes more sense to just say "Postgres" and assume that the reader can figure out that it means "Whatever managed postgres service you want to use".
The entire point is that every cloud provider has a managed Postgres offering, and there's no vendor lock-in. Technically, Dynamo does have a Docker image you could run in other cloud providers if it came down to that, but you'd get no support for it.
The free tier is completely irrelevant here, though. The very reason someone might use Spanner is its excellent scalability. I don't believe there is any reason to use it for smaller projects other than education. The customers who will use Spanner are those for whom CockroachDB is not enough, for example. For everybody whose databases are not that huge, PostgreSQL will do just fine.
Very true, but most people do not yet know about scale-to-zero, pay-for-what-you-use SQL clouds with a free tier, like CockroachDB and Neon. They think you must pay $5 a month to run a SQL server, which was the case until very recently, so they go with NoSQL options to get the free tier.
Edit: actually, Spanner looks like another CockroachDB: you use SQL to interact with it. In which case I can see many people wanting to use it with a free tier for hobby projects, i.e. in between education and production development.
oh c'mon, this is just 50 QPS. i mean, yeah, obviously someone with as little as 50 QPS is not going to bother with the massive scale and availability of Cloud Spanner.
there are a TON of applications today reaching 100M+ users in just a month. you are not dealing with 50 QPS. oh, and you forgot the crazy byte boundaries in DynamoDB.
go a single byte over 1 KB and a write is charged 2 write units (reads round up at 4 KB per unit)!
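For reference, as I understand DynamoDB's provisioned capacity model, the rounding works like this: writes round up per 1 KB, strongly consistent reads per 4 KB (eventually consistent reads cost half):

```python
import math

def write_units(item_bytes):
    """One WCU per 1 KB of item size, rounded up."""
    return math.ceil(item_bytes / 1024)

def read_units(item_bytes, consistent=True):
    """One RCU per 4 KB strongly consistent; half that eventually consistent."""
    units = math.ceil(item_bytes / 4096)
    return units if consistent else units / 2

assert write_units(1024) == 1
assert write_units(1025) == 2   # one byte over the boundary doubles the cost
assert read_units(4097) == 2
```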
That seems nitpicky. The free tier is a marketing program, not a product.
"Google should offer intro discounts" is IMHO a very valid point (absolutely no idea why this doesn't exist), but it doesn't really speak to whether or not the real product is more or less expensive.
It's a bit different, because the free tier for DynamoDB is not like the other 12-month limited offers; it's marketed as free forever. So it's not just an intro product, it's something you can run a small business off for free.
I wish I could play around with Spanner for personal/side projects, but a production ready instance starts at $65/mo. DynamoDB can run for ~$0.00/month with per-request pricing.
You can! Spanner has a free trial: https://cloud.google.com/spanner/docs/free-trial-instance. Keep in mind that per-request pricing isn't free unless you stay under the free tier, so take a look at what those limits are, because going above them means you're not free anymore.
If you are interested in Spanner, you might take a look at CockroachDB, esp. the production-ready serverless offering, which is pay-for-consumption only. CRDB's architecture is essentially Spanner under the bonnet, GIFEE-style (Google's Infrastructure For Everyone Else).
I'm torn because I have really liked Google offerings in the past (I'm pretty locked in on gmail, I have different things running on GCP already, etc). But I've also been feeling burned a bit by Google suddenly ending services. I had all my domains happily in Google domains until they recently sold it suddenly to Squarespace, who I'm not interested in dealing with. My phone is a Google Pixel and I was using the Google podcast app, but just heard that too is being discontinued and moved to Youtube Music, which is a service I tried and really disliked, so now I need to find a replacement for that too. I didn't personally use some other services, but I know there have been many others ended (such as Stadia for gaming, which made a lot of press at the time).
Those are more minor services in the long run, but it makes me a little nervous to go in again on Google for a critical service. Before I invest my time and effort into using it I have to ask myself "Will Google someday sell off or end the cloud spanner service? Will I be in trouble if they do so?".
As someone in the midst of transitioning an organization to GKE, Google Domains was the first shutdown that truly frightened me - AFAIK the first true B2B IT offering that was unceremoniously shuttered. Domain registration may be a regulatory/reputational minefield - but then so are many of their other cloud offerings, up to and including content distribution. I don't think it's indicative of a larger pattern of shutting down Google Cloud services yet, but it's certainly a yellow flag at least.
> But I've also been feeling burned a bit by Google suddenly ending services.
The products/services that "Google" the search company launches are different from "Google Cloud" products/services. While the discontinuation of Google products is annoying, it has nothing to do with Google Cloud. I don't think Google Cloud abruptly announces discontinued products/services, as they have paying customers.
Regarding Google Domains that is a Google product. The equivalent product from Google is “Google Cloud Domains” which is available to Google Cloud customers.
> Regarding Google Domains that is a Google product. The equivalent product from Google is “Google Cloud Domains” which is available to Google Cloud customers.
Cloud services rely a ton on marketing, B2B relations, and customer support. Google has never exactly been about that stuff, and GCP was suffering. So they pulled in Oracle, MSFT, etc execs, which lame as that sounds was probably the right move for GCP in particular.
And I guess this is the kind of marketing that attracts the customers they want.
Exactly: I have to provision for peak throughput on Spanner. Average throughput is much lower than peak, so I'm doubtful I'd see savings on Spanner.
(But I bet that Spanner is much easier than DynamoDB to develop with...)
You can scale Spanner up and down based on demand, although there is a lag time with it.
I built a system that relies on a high-performance database and tested with both AWS DynamoDB and Google Cloud Spanner (see disclaimer) and was able to scale Google Cloud Spanner much higher than AWS DynamoDB.
DynamoDB is limited to 1,000 WRUs per node, and there isn't an obvious way to get more than 100 nodes per table, so you're limited to 100,000 WRUs per table (= 102,400,000 bytes/sec ≈ 97 MiB/sec ≈ 781 Mib/sec) -- even if you reserve more than 100,000 WRUs in capacity for the table. The obvious workaround would be to shard the data across multiple tables, but that would have made the software more difficult to use.
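The arithmetic behind that ceiling, spelled out. Note the per-node and per-table limits here are this comment's observations, not documented quotas:

```python
WRU_PER_NODE = 1_000        # observed per-node write limit
MAX_NODES_PER_TABLE = 100   # observed per-table node ceiling
BYTES_PER_WRU = 1_024       # one write unit covers up to 1 KiB

max_wru = WRU_PER_NODE * MAX_NODES_PER_TABLE   # 100,000 WRU/s per table
max_bytes_per_sec = max_wru * BYTES_PER_WRU    # 102,400,000 B/s

print(max_bytes_per_sec / 2**20)      # ~97.66 MiB/s
print(max_bytes_per_sec * 8 / 2**20)  # ~781 Mib/s
```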
Google Cloud Spanner was able to do much more than 97 MiB/sec of traffic (though the exact figure isn't yet public), and was also capable of much larger transactions (100 MiB, versus DynamoDB's ~10 MiB: 25 items, now 100, at 400 KiB each), which was a bonus.
Disclaimer: The work was funded by a former Google CEO and I worked with the Google Spanner team on setting it up, while I am a former AWS employee I didn't work with AWS on the DynamoDB part of it, though I did normal quota adjustments.
No, we don't miss them already.
This sounds more like an indictment of the system design than the cloud provider.
What are some of these “3am” topics that made GCP a better choice?
Azure is a dumpster fire from the ground up.
My experience has been the opposite though not without issues, Azure has some of the best corporate and security features of any cloud and it's only getting better. The zero trust model fits in so nicely with their identity platforms it's a sight to behold compared to other cloud providers which likely use some form of AAD or AD DS anyway.
Their support is responsive and they seem to know what they're talking about. (AKS)
Can you share some specifics of your experience?
This reminds me of the fond days of weekly customer calls. We developed AWS services and answered our customer-support calls directly. No middleman, just techies to techies. We made promises to customers on the fly, and customers sometimes project-managed us.
GAE uptime improved, for a little while. Yeah, we're on AWS now too.
This! They even custom-coded their support portal better than off-the-shelf vendors like Zendesk. I say this as a paying Zendesk customer.
GCP, on the other hand, is F-tier in support. It almost feels like I need to beg them to get any level of help.
We booked a follow up call in the calendar, I spent good time preparing my notes and requirements for the meeting... and then nobody on their side showed up or contacted me again.
I'm of the opinion that focused products created by smaller teams are better.
Which is on brand with Google. They have no problem launching stuff, and no problem killing stuff. But man, then just get out of the cloud business and focus on what you're good at.
It's actually sort of ridiculous: AWS has the best support I have ever interacted with. Our org certainly pays enough for it, but it's so completely unusual, in tech or really any sector, to get great support even when you're paying for it.
Every time, I am also proven wrong as someone competent on their side both actually understands my issue and finds a resolution.
One of my pet hates is the (ab)use by repo maintainers of the auto-close-when-stale feature on GitHub.
What useful purpose does it serve beyond making the repo maintainers look good because they have a low number of open issues?
It doesn't actually address the issue. It's the virtual equivalent of brushing it under the carpet.
My team has a sort of a sandbox where we can use almost any Azure product we want (our IT is supportive and permissive as far as that sandbox goes, which is a blessing), but even then it's just painful in comparison.
AWS is probably better though.
Our reason for going all-in with GCP was the k8s. We've been using GCP for 2+ years. The trouble we have is with stability and with so many features being constantly rolled out.
Our experience was that K8s cost more on GCP than AWS.
Just on load balancers alone, there are tons of tricks specific to GCP's implementation. And we needed a few extra because you couldn't run all the features we wanted on 1-2 per cluster. For example, we had a third party that required all our requests to always originate and respond from a fixed IP address. We could pick only one: not a range, not a list. This was a hard requirement, and the service was important, so we had to do it.
It took our team several days to work out how to do it from online documentation and support. Tech support was useless. One guy on our team spent 2 days on the phone with a paid, local GCP implementation partner trying to get the problem sorted. Nothing came out of it other than being pitched, on our dime, a lot of services and architecture we didn't need. Eventually we figured it out on our own. I don't even remember this coming up when we transitioned to AWS.
Recently I was woken up by an alert about DNS resolution issues.
GCP had rolled out a new version of SkyDNS and NodeLocalDNS; SkyDNS was reporting a 99% miss rate, and we had to hack around it quickly.
This is not the "out-of-the-box" experience you want to have.
I think Google Cloud needs someone like Jeff Bezos at its head: look at what your customers actually want and need, and understand their requirements. They usually want good customer support and a competent key account manager as well.
When we were looking to migrate our analytics database from on-premise to the cloud, we evaluated BigQuery and Snowflake. BigQuery is a great product, and we were already deeply invested in GCP. However, the GCP sales team just couldn't sell BigQuery: they don't know what old corporations want to hear in a sales pitch. So we went with Snowflake in the end. Not because it's the better product, but because their sales team is better.
I'm not sure if the cloud business is actually a priority at Google. If it is then I think they don't understand the mistrust Google is facing when it comes to stable long term support of their products.
TL;DR: they broke something and wouldn't fix it.
This comparison seems not exactly fair. Amazon's 126 million queries per second was purely Amazon's own services generating load on DynamoDB during Prime Day, not all of AWS, as I read it.
A fairer comparison would perhaps have been the peak load of Google's own services running on Cloud Spanner, not the sum of all Spanner usage across all of GCP and all of Google (Spanner on non-GCP infra).
I will say that it would show a massive vote of confidence to say that Photos, Gmail and Ads heavily rely on GCP infra: that would be brand-new information for me! It would add confidence to learn more about how they use it, and whether Cloud Spanner is on the critical path for those services.
What is confusing, however, is how "Cloud Spanner" is used consistently in this article... except when talking about Gmail, Ads and Photos, where it's stated that "Spanner" is used by these products, not "Cloud Spanner"! As if they were not using the Cloud Spanner infra, but their own. It would help to know which is the case, and what the load on Cloud Spanner is, as opposed to Spanner running on internal Google infra that is not GCP.
At Amazon, practically every service is built on top of AWS - a proper vote of confidence! - and my impression was that GCP had historically been far less utilised by Google for their own services. Even in this post, I'm still confused and unable to tell if those Google products listed use Cloud Spanner or their own infra running Spanner.
> DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 126 million requests per second.
Amazon was very, very clear on this. For Google to use that number without the caveat is just completely underhanded and dishonest. Whoever wrote this is absolutely lacking in integrity.
If Amazon said they didn't use DAX that day I would say they were lying.
The average consumer or startup is not going to squeeze out the performance of Dynamo that AWS is claiming that they have achieved.
In fact, it might have been fairer in Ruby if they didn't hard-code the net client (Net::HTTP). I imagine performance could have been boosted by injecting an alternative.
Worst case scenario, it's Google you're buying, not a random startup etc.
[0] https://www.youtube.com/watch?v=268jdNwH6AM
It's a pretty big deal if Gmail migrated to GCP-provided Spanner (not to an internal Spanner instance), and it sounds like the kind of vote of confidence GCP and Cloud Spanner could benefit from: might I suggest writing about it? It's easier to digest and harder to miss than an hour-long keynote video with no timestamps.
And so just to confirm: Gmail is on Cloud Spanner for the backend?
link with time-stamp:
https://www.youtube.com/watch?v=268jdNwH6AM&t=3020
This wasn't the first time Gmail has replaced the storage backend in flight. The last time, around 2011, they didn't hype it up; they called it "a storage software update" in public comms. That other migration is the origin of the term "spannacle", because during that migration the accounts that resisted moving from [[redacted]] to [[redacted]] were called barnacles.
I'm not sure? I guess I'm mostly not sure what "gcp infra" means there. The blog post says
"Spanner is used ubiquitously inside of Google, supporting services such as; Ads, Gmail and Photos."
But there's google-internal spanner, and gcp spanner. A service using spanner at Google isn't necessarily using gcp. (No clue about photos, Gmail, etc)
Granted, from what I gather, there's a lot more similarity between spanner & gcp spanner than e.g. borg and kubernetes.
gcp spanner and normal spanner are different deployments of the same code.
I really want to give Google the benefit of the doubt: but it doesn't help that they did not write that eg Gmail is using "Cloud Spanner." They wrote that it uses Spanner.
There's no indication that google is talking about ALL of spanner either? The examples they list are all internal google services, and they specifically say "inside google".
I'm also dubious that even with all of the AWS usage accounted for that DynamoDB tops Spanner if Amazon themselves are only at 126 million queries per second on Prime Day.
Not only this, but practically most, if not all, of the AWS services use DynamoDB, including use cases that are usually not for databases, such as multi-tenant job queues (just search "database as a queue" to get the sentiment). In fact, it is really, really hard to use any relational DB inside AWS; a team would have to go through CEO approval to get an exception, which says a lot about the robustness of DDB.
Edit: It's possible you're limiting your statement specifically to AWS teams, which would make it more accurate, but I read the use of "Amazon" in the quote you were replying to as including things like retail as well, etc.
Not necessarily about volume of transactions, but this is similar to one of my pet-peeves with statements that use aggregated numbers of compute power.
"Our system has great performance, handling 5 billion requests per second" means nothing if you don't break down how many RPS each unit of compute (e.g. each CPU) delivers.
Scales of performance are relative, and on a distributed architecture, most systems can scale just by throwing more compute power.
Probably dealing with thousands of requests per second, but wants to say they're building something that can scale to billions of requests per second to justify their choices, so there they go.
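Normalizing the aggregate number is what makes the comparison meaningful; a hypothetical sketch (all figures invented for illustration):

```python
def rps_per_core(total_rps: float, instances: int, cores_per_instance: int) -> float:
    """Aggregate throughput says nothing without the compute behind it."""
    return total_rps / (instances * cores_per_instance)

# "5 billion RPS" sounds impressive on a fleet of 50,000 32-core boxes...
big = rps_per_core(5e9, instances=50_000, cores_per_instance=32)

# ...but per core it's no better than a 10-machine system doing 312,500 RPS.
small = rps_per_core(312_500, instances=10, cores_per_instance=10)

print(big, small)  # 3125.0 3125.0
```

Same per-core efficiency, wildly different headline numbers.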
There are many engineering directors at Google.
Is that finally true? It sure wasn't in the 2020-2021 timeframe.
AWS and GCP (and Azure, and Oracle cloud, and bare Kubernetes via an operator, and...) support Postgres really well. Just...use Postgres.
ok? and sqlite3 in memory is even cheaper than postgres!
If you can use (and support correctly) Postgres then you should use it; obviously there's no point using a globally scalable, planet-scale database if you can just fit all your data on one machine with Postgres.
If I'm writing a chat app with millions of messages and very little in the way of "relationships", should I use Postgres or some flavor of NoSQL? Honest question.
The super-power of Postgres is that it supports everything. It's a best-in-class relational database, but it's also a decent key-value store, it's a decent full-text search engine, it's a decent vector database, it's a decent analytics engine. So if there's a chance you want to do something else, Postgres can act as a one-stop-shop and doesn't suck at anything but horizontal scaling. With partitioning improving, you can deal with that pretty well.
If you're writing fresh, there is basically no reason not to use Postgres to start with. It's only when you already know your scale won't work with Postgres that you should reach for a specialized database. And if you think you know because of published wisdom, I'd recommend you set up your own little benchmark, generate the volume of data you want to support, and then query it with Postgres and see if that is fast enough for you. It probably will be.
PostgreSQL is extremely good at append-mostly data, i.e. something like a chat log, and has powerful partitioning features that allow you to keep said chat logs for quite some time (with some caveats) while keeping queries fast.
Generally speaking though PostgreSQL has powerful features for pretty much every workload, hence the Golden Rule.
My advice to you is to use PostgreSQL or, heck, don't overthink it, SQLite if it helps you get an MVP done sooner. Do NOT prematurely optimize your architecture. Whatever choice results in you spending less time thinking about this now is the right choice.
In the unlikely event you someday have to deal with billions of messages and scaling problems, a great problem to have, there are people like me who are eager to help in exchange for money.
Lots of people like to throw around the term "big data" just like lots of people incorrectly think that just because google or amazon need XYZ solution that they too need XYZ solution. Lots of people are wrong.
If there exists a motherboard that money can buy, where your entire dataset fits in RAM, it's not "big data".
You do have to think about how to model the data in each system, but there are very few cases IMO where one is strictly 'better.'
As with all data storage, the question is usually how do you want to access that data. I don't have experience with Postgres, but a lot of (older) experience with MySQL, and MySQL makes a pretty reasonable key-value storage engine, so I'd expect Postgres to do ok at that too.
I'm a big fan of pushing the messages to the clients, so the server is only holding messages in transit. Each client won't typically have millions of messages or even close, so you have freedom to store things how you want there, and the servers have more of a queue per user than a database --- but you can use an RDBMS as a queue if you want, especially if you have more important things to work on.
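A minimal sketch of the "RDBMS as a per-user queue" idea, using SQLite so it's self-contained (the same SQL runs on Postgres or MySQL; the table and helper names are mine):

```python
import sqlite3

# Per-user outbox on a relational store: oldest message first, delete on read.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE outbox (
    id      INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id TEXT NOT NULL,
    body    TEXT NOT NULL)""")

def enqueue(user_id: str, body: str) -> None:
    with db:  # transaction per insert
        db.execute("INSERT INTO outbox (user_id, body) VALUES (?, ?)",
                   (user_id, body))

def dequeue(user_id: str):
    """Pop the oldest pending message for one user, or None."""
    with db:
        row = db.execute(
            "SELECT id, body FROM outbox WHERE user_id = ? ORDER BY id LIMIT 1",
            (user_id,)).fetchone()
        if row is None:
            return None
        db.execute("DELETE FROM outbox WHERE id = ?", (row[0],))
        return row[1]

enqueue("alice", "hi")
enqueue("alice", "there")
print(dequeue("alice"))  # hi
print(dequeue("alice"))  # there
print(dequeue("alice"))  # None
```

Not a high-throughput design (the SELECT-then-DELETE needs `FOR UPDATE SKIP LOCKED` on Postgres with concurrent consumers), but it's exactly the "more important things to work on" tradeoff.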
I think the truth is that you should use the simplest, most effective tech possible until you are absolutely certain you need something more niche.
Once you have millions of messages, maybe consider moving the data-intensive parts out of Postgres, if necessary.
The criticism is often that people look for big data solutions, before they have big data.
If you scale out of postgres, you probably have enough users and money that you can fix it :)
But moving to a NoSQL before you have to, might just slow down development velocity -- also you haven't yet learned what patterns users have.
Normally you'd make a decision like this by figuring out what your peak demand is going to look like, what your latency requirements are, how distributed are the parties, how are you handling attachments, what social graph features will you offer, what's acceptable for message dropping, what is historical retention going to look like...[continues for 10 pages]
But if you don't have anything like that, just use something simple and ergonomic, and focus on getting your first few users. There's a long gap between when the simple choice will stop scaling and those first few users.
Having said that I guess I broadly agree with your comment. It seems like a lot of people like to plan for massive scale while they have a handful of actual users.
In the first case, the database created as many problems as it solved (which is true of any large application running at scale; your data store will _always_ be suboptimal). A fancy, expensive NoSQL database won't save you from solving hard engineering problems. At smaller scales (on the order of tens-hundreds of RPS), it's hard to go wrong with any established SQL (or open source NoSQL if that floats your boat) database, and IMO Postgres is the most stable and best bang for your engineering buck feature wise.
Using Spanner is giving up a lot for the scalability, and if you ever reach the scale where a single node DB doesn't make sense anymore, I don't know if Spanner is still the answer, let alone Spanner with your old design still intact. For one, Postgres has scaling options like Citus. Or maybe you don't need a scalable DB even at scale, cause you shard at a higher layer instead.
Also https://www.postgresql.org/support/professional_hosting/
> The entire point is that every cloud provider has a managed postgres offering, and there's no vendor lock-in.
I think they're directly comparable with this context.
vs AWS Free Tier:
"25 GB of data storage ... 2.5 million stream read requests ..."
https://aws.amazon.com/dynamodb/pricing/
So, there's probably somewhere the lines on the graph cross, but Google's headline seems misleading.
Ha. Remember Gary Bernhardt of WAT fame? https://twitter.com/garybernhardt/status/600783770925420546
> Consulting service: you bring your big data problems to me, I say "your data set fits in RAM", you pay me $10,000 for saving you $500,000.
> "Google should offer intro discounts" is IMHO a very valid point (absolutely no idea why this doesn't exist)
https://www.cockroachlabs.com/get-started-cockroachdb/
> it makes me a little nervous to go in again on Google for a critical service
https://steve-yegge.medium.com/dear-google-cloud-your-deprec...
> The equivalent product from Google is "Google Cloud Domains" which is available to Google Cloud customers.
Also sold: https://cloud.google.com/domains/docs/faq
How did Google become like this?