doxcf434 · 10 years ago
We've been doing tests in GCE in the 60-80k core range:

What we like:

- slightly lower latency to end users in USA and Europe than AWS

- faster image builds and deployment times than AWS

- fast machines; live migration blackouts are getting better too

- per-minute billing (after the first 10 minutes), and lower rates for sustained use vs. AWS RIs, where you need to figure out your usage up front

- projects make it easy to track costs w/o having to write scripts to tag everything like in AWS; the downside is that project discovery is hard since there's no master account

What we don't like:

- basic lack of maturity; AWS is far ahead here. E.g. we've had 100s of VMs get rebooted w/o explanation, the op log UI forces you to page through results, log search is slow enough to be unusable, billing doesn't match our records for the number of core hours and they simply can't explain the discrepancy, quota limit increases take nearly a week, support takes close to an hour to get on the phone and they make you hunt down a PIN to call them

- until you buy premier support (aka a TAM), they limit the number of people who can open support cases. This caused us terrible friction since it's so unexpected, esp. when it's their bugs you're trying to report and fixing them is how they'd mature

boulos · 10 years ago
Sorry to hear about your troubles. Are you running with onHostMaintenance set to terminate, or are you losing "regular" VMs? If you want to ping me with your project id (my username at google), I'd like to investigate. 100s of VM failures is well outside of our acceptable range.
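
A quick way to check what a given instance is set to (a rough sketch that shells out to the gcloud CLI from Python; the instance and zone names below are placeholders):

    import subprocess

    # Prints MIGRATE or TERMINATE, the VM's scheduling.onHostMaintenance setting.
    out = subprocess.run(
        ["gcloud", "compute", "instances", "describe", "my-instance",
         "--zone", "us-central1-a",
         "--format", "value(scheduling.onHostMaintenance)"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())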

Also, if it's been a while since your last quota request, we've drastically improved the turnaround time. All I can say is, your complaints were heard and we've tried to fix it. Keep yelling if something is busted! (And yes, I see the irony of the support ticket statement; out of curiosity which support are you on?)

Disclosure: I work on Compute Engine.

jbaptiste · 10 years ago
Maybe there is something special for members of the GCE startup program, but for us quota requests take between 1 minute and 1 hour, whereas the same requests over at AWS took a few days and endless discussions.

Our whole experience with the folks over at Google has been amazing compared to the poor level of service we had with AWS.

Granted, we are at a scale way lower than yours.

zbjornson · 10 years ago
Ditto -- we've had about five quota requests handled within an hour or two. AWS took about a week for each of two requests.
Amir6 · 10 years ago
Thanks for sharing your experience. It's really helpful!
phoboslab · 10 years ago
Can someone explain to me why traffic is still so damn expensive with every cloud provider?

A while back we managed a site that would serve ~700 TB/mo and paid about $2,000 for the servers in total (SQL, web servers and caches, including traffic). At Google's $0.08/GB pricing we would've ended up with a whopping $56,000 for the traffic alone. How's that justifiable?
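
For what it's worth, the arithmetic checks out. A rough sketch, assuming a flat $0.08/GB and decimal units, and ignoring any per-tier or per-destination differences:

    # 700 TB/month at a flat $0.08/GB (decimal units: 1 TB = 1000 GB)
    tb_per_month = 700
    price_per_gb = 0.08
    print(tb_per_month * 1000 * price_per_gb)  # -> 56000.0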

mythz · 10 years ago
Traffic's a luxury tax (along with RAM) that cloud providers assume big companies can afford to pay if they're getting that much traffic.

Outside of the cloud providers, traffic is dirt cheap. Hetzner includes 30TB of traffic with their dedicated server (i7 quad-core Skylake, 64GB DDR4 RAM, 2x250GB SATA 6 Gb/s SSD) for 39 euro/month:

https://www.hetzner.de/us/hosting/produkte_rootserver/ex41ss...

If you don't want to be traffic-shaped after you exceed the 30TB, Hetzner charges €1.17 per additional TB, so 700TB would come to €783.90 in overage (670TB x €1.17) on top of the base price.

Whereas ScaleWay includes unlimited traffic with their bare-metal servers starting from 12 euro/month:

https://blog.scaleway.com/2016/03/08/c2-insanely-affordable-...

FranOntanaya · 10 years ago
It sounds kind of inefficient though, since different business types have extremely different bandwidth needs. So it's going to tax businesses by sector rather than by their ability to sustain it.
nitrogen · 10 years ago
How many people share that Hetzner server for 39euro/month?
developer2 · 10 years ago
I don't get it. Google says they're going after the big fish in the industry by claiming they have amazing pricing. The servers look good, I'm ready to jump on board.

$120-$230 for the first TB of egress bandwidth depending on where it goes. No thanks, I can get 2 TB for < $20 elsewhere.

These bandwidth costs leave small businesses, and individuals like myself, staying with the smaller competition. I suppose their reasoning is that they can chase after that single $400-600 million contract. One major client like that is worth as much as ten million of us little guys paying $50 each. The big cloud providers exist to serve gigantic enterprises. The rest of us are a drop in the bucket and not worth the effort.

erikpukinskis · 10 years ago
When pricing a value-add you want to price it linearly, with a volume discount, but such that after the volume discount the line is still steeper than the base cost curve. That way growing customers feel like they are getting a deal vs small fish, and are incentivized to use as much as they need, but you still drive your margins towards what the market will bear, provided your volume is growing. That curve will eventually squeeze out some of your biggest customers, but you can avoid this by cutting deals for them, e.g. Google with Apple.
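
A toy illustration of that curve, with entirely made-up numbers: the marginal price drops with volume but stays above an assumed unit cost, so the average price declines for big customers while margins hold.

    # Made-up tiers and cost, purely to illustrate the shape of the curve.
    UNIT_COST = 0.01                      # hypothetical provider cost per GB
    TIERS = [(10_000, 0.08),              # first 10k GB at $0.08/GB
             (100_000, 0.06),             # next 90k GB at $0.06/GB
             (float("inf"), 0.04)]        # everything beyond at $0.04/GB
    assert all(price > UNIT_COST for _, price in TIERS)  # line stays steeper than cost

    def egress_bill(gb):
        # Each GB is billed at the rate of the tier it falls into.
        bill, prev = 0.0, 0
        for threshold, price in TIERS:
            if gb <= prev:
                break
            bill += (min(gb, threshold) - prev) * price
            prev = threshold
        return bill

    for usage in (5_000, 50_000, 500_000):
        print(usage, egress_bill(usage), round(egress_bill(usage) / usage, 4))

With these numbers the average price per GB falls from $0.08 to about $0.044 as usage grows, yet the marginal price never dips below the assumed cost.
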
Matt3o12_ · 10 years ago
Traffic is not important for every use case. If you run a store for physical items, how much traffic are you going to use? It's probably going to be less than 5% of your AWS bill, so you don't worry too much about it. If you host heavy images, big JS files (which you shouldn't do anyways) or offer downloads, you should probably use a CDN anyways. For big downloads, latency is not really that important as long as you get proper download speeds, so the CDN is going to be a lot cheaper.

Not everybody wants to run the next Netflix or Dropbox in terms of bandwidth consumption. Even if you did, keep in mind that Netflix does not host the videos in the cloud.

mwfj · 10 years ago
They are pricing themselves out of the market for traffic-intensive small-fish operations that way though.
saganus · 10 years ago
Wow, ~700 TB/mo? That does sound like a lot.

What kind of site would serve that volume of traffic and not have 56k for operating expenses? I mean, I can think of a few examples like Wikipedia maybe, since they are non-commercial and such, but for a commercial business? Maybe 4chan moves that much without a lot of revenue, I would think, or maybe imgur? But I'm not really sure - it would seem like they could cover that amount easily via ads alone.

What was the use case here?

Also, I think that 56k for traffic alone kind of depends on context. I mean, how much does Netflix pay for serving their volume of traffic?

What I'm saying is, isn't 700 TB a month something that would probably be very expensive no matter the context? Just storing 700TB would cost a lot, no?

I'm really curious about your use case here.

phoboslab · 10 years ago
Image hosting community site - notably without shady popup/layer/scam ads, which probably was the reason for the relatively small income. For a two person team that only worked part time on it, it still made good money.

The total dataset was just about 3TB, so storing it was not an issue.

virtuallynathan · 10 years ago
700TB/mo is about 2Gbps - on the open market that should be under $1000/mo. Netflix's total cost is probably below $0.25/Mbps. $56,000/mo would get you over 100Gbps of committed capacity from any major provider (or a mix).
alainv · 10 years ago
Why is it "not have 56k for operating expenses"? Something that can be had for $2k is not something a healthy business spends $56k on. You should be able to find a better use for those $650k that year.
zifnab06 · 10 years ago
1 TB/mo is roughly a constant 3 Mbit/s, so an estimated 2.1 Gbit/s. I recently had a 1 Gbit line from he.net quoted at $500 in Seattle.
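
The conversion behind those rules of thumb (a quick sketch, assuming a 30-day month and decimal TB):

    SECONDS_PER_MONTH = 30 * 24 * 3600

    def tb_per_month_to_mbps(tb):
        # Sustained bit rate needed to move `tb` terabytes in a month.
        return tb * 1e12 * 8 / SECONDS_PER_MONTH / 1e6

    print(tb_per_month_to_mbps(1))    # ~3.09 Mbit/s
    print(tb_per_month_to_mbps(700))  # ~2160 Mbit/s, i.e. roughly 2.1 Gbit/s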

Deleted Comment

bhouston · 10 years ago
One popular high-traffic site I know built their own CDN to serve the large majority of their data by renting dedicated machines at OVH, Hetzner, etc. I can't remember which datacenters they actually use for their own CDN, but it wasn't CloudFront or Google Cloud Platform.

Supposedly this has saved them immense amounts of money.

erichocean · 10 years ago
If your servers are efficient enough (and this is not hard to do these days), it's easy to get bandwidth-limited on a per-server basis, i.e. your server could handle more traffic, but you've maxed out the bandwidth available to that particular server.

If you can load balance at the client, then you can "talk" to any server at the edge and don't need a router or proxy, so the net result is that you are only paying for whatever bandwidth comes with your OVH (or whatever) boxes. Effectively, you're buying bandwidth, and the compute/storage/power/rackspace/etc. that comes with that bandwidth is free.

And yeah, it's ridiculously cheaper than AWS or Google's Cloud Platform to do things this way.
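
A minimal sketch of the client-side load balancing idea above (the hostnames are placeholders; a real client would get the edge list from DNS or a config endpoint): pick an edge host at random and fail over to another on error, so no central proxy sits in the data path.

    import random
    import urllib.request

    EDGE_HOSTS = ["edge1.example.com", "edge2.example.com", "edge3.example.com"]

    def fetch(path, tries=3, timeout=5):
        # Try up to `tries` randomly chosen edge hosts before giving up.
        last_err = None
        for host in random.sample(EDGE_HOSTS, k=min(tries, len(EDGE_HOSTS))):
            try:
                with urllib.request.urlopen(f"https://{host}{path}", timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:
                last_err = err
        raise last_err

    # data = fetch("/images/abc123.jpg")

The provider-side cost is then just whatever bandwidth comes bundled with each box, as described above.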

bsder · 10 years ago
> Can someone explain to me why traffic is still so damn expensive with every cloud provider?

Because The Cloud(tm) IS cheaper--when you start and don't have any real bandwidth or CPU usage.

Whereas every colocation facility I have gotten quotes from wants you to commit to a minimum of $500 for some partial cabinet. So The Cloud(tm) wins the contract and gets to bill in increasing amounts when usage finally goes up.

Finally, how many real system administrators still exist who can provision your systems, configure the network, and understand how to connect everything to the network without getting p0wn3d? If you don't have that person, you can't escape The Cloud(tm) even if you wanted to.

manigandham · 10 years ago
> Finally, how many real system administrators still exist

... a lot? Has there been some shortage of network/infrastructure people lately?

vidarh · 10 years ago
"The cloud" does not mean you don't need real system administrators. I see time and time again companies get bitten by this. Overall devops efforts to run this well on AWS or GCE in my experience tends to be higher than provisioning dedicated systems because you have so many artificial limits imposed on you by the providers that makes things harder.

E.g. your example: understanding how to connect everything to the network without getting hacked is far easier when your private network is physically wired to a separate switch and your public network is physically behind a firewall. There's no configuration mistake in the world you could make that would change that, so the problem space for getting basic levels of security is reduced to configuring the firewalls correctly.

Still plenty of room to shoot yourself in the foot, but in my experience far less so than having people configure their own networking on AWS.

As for pricing, yes, if you want to do colo, the initial costs are higher. But dedicated rented servers with monthly contracts are also typically far cheaper than AWS for anything that stays up for more than ~1/3 or so of the time (it obviously depends on the hosting provider). If you regularly spin up lots of instances for a short period of time, you should use AWS. But the moment you stop spinning them down again, it's time to rent capacity somewhere else.
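
A toy break-even comparison along those lines (all prices hypothetical): a flat monthly dedicated rental vs. an hourly on-demand instance at different utilization levels.

    # Hypothetical prices, just to show where the crossover sits.
    DEDICATED_PER_MONTH = 60.0    # flat monthly rental for a dedicated box
    ON_DEMAND_PER_HOUR = 0.25     # comparable on-demand cloud instance
    HOURS_PER_MONTH = 730

    for utilization in (0.10, 0.25, 0.33, 0.50, 1.00):
        cloud = ON_DEMAND_PER_HOUR * HOURS_PER_MONTH * utilization
        print("util %3.0f%%: cloud $%6.1f vs dedicated $%5.1f" %
              (utilization * 100, cloud, DEDICATED_PER_MONTH))

With these made-up numbers the crossover sits around one-third utilization, which matches the rough rule above.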

ChuckMcM · 10 years ago
Perhaps it is like the gas stations that sell gas for $4.99/gal when others sell it for much less. It's only worth their while to sell it if they make a healthy margin so they only sell to people willing to pay that much.
mrmondo · 10 years ago
Storage is also a lot more expensive from 'cloud' providers; people often forget to look at the performance and redundancy and simply look at 'per GB' costs.
Swannie · 10 years ago
Indeed. The IOPS numbers for the cheaper VMs are not so great.

You need IOPS? Suddenly you are paying for a premium instance type.

You want replication and/or geo-redundancy with that? Now we're talking $$$ :D

baddox · 10 years ago
Seems like the obvious cynical answer is that they do that to encourage you to use more of their services.
mikecb · 10 years ago
Their CDN Interconnect lowers that pricing to ~$0.04/GB (US).
amazon_not · 10 years ago
That's still very expensive. Wholesale rates for bandwidth are a fraction of a penny per GB.
virtuallynathan · 10 years ago
That is still about $13/Mbps, or 26x transit pricing.
rs999gti · 10 years ago
How much do staff salaries and data center rentals add to the cost per server and per GB?
amazon_not · 10 years ago
Why do you assume you need a staffed data center to get cheaper bandwidth?

Just buy dedicated servers or VPSes, no datacenters or staff needed. The hosting provider takes care of the servers, staff and the datacenter.

johansch · 10 years ago
That is a key question I have been pondering myself.

One theory of mine (perhaps uninformed; I'm not really a networking expert) is that because of the dynamically configurable nature of their systems, they need to use routers rather than relatively dumb and cheap switches at almost every level - in order to have flexible networking and still maintain isolation between customers.

This could get quite expensive if you have to pay Cisco/Juniper for this. If this is true Google will have quite an edge with their software defined networking here, I would guess.

mikecb · 10 years ago
No, they use whitebox switches and software-defined networking to control them. See https://www.youtube.com/watch?v=n4gOZrUwWmc [Edit: oops, fixed!]
Spooky23 · 10 years ago
SDN is changing the model here, and Google is way ahead. In an enterprise, you can use VMware to do a lot of the stuff you are blowing big bucks on Cisco/Juniper for, and use higher-density switches.

SDN is going to turn the cost structure on its head -- I wouldn't want to be a network guy now, easily 60% of tasks are getting vaporized in the datacenter.

superbaconman · 10 years ago
Google's really ahead on the networking front, and other cloud providers are following suit. Networking hardware is super cheap now. When you couple that hardware with open-source software, networking gets cheap.
virtuallynathan · 10 years ago
Large networks like Level3, Cogent, Telia, etc all use big-iron routers (Cisco/Juniper) and will sell you traffic for under $1/Mbps.
atm0sphere · 10 years ago
Pretty sure AWS (at least) builds their own network hardware. I remember reading something a while back that said they found it orders of magnitude less expensive than buying enterprise hardware, with better performance, as they went about the affair as scientifically as you'd expect them to.
EdHominem · 10 years ago
> How's that justifiable?

What, morally?

airza · 10 years ago
Sounds like you should start your own cloud hosting service! I bet you could make a killing.
markgavalda · 10 years ago
We are consolidating all of our cloud services at Google Cloud and couldn't be happier. We've had north of a thousand virtual machines scattered across ~6 2nd and 3rd tier providers and switching to gcloud has been a game changer for us.
jacobwcarlson · 10 years ago
> We've had north of a thousand virtual machines scattered across ~6 2nd and 3rd tier providers and switching to gcloud has been a game changer for us.

All of the success stories I've heard about Google Cloud are from companies using significant resources. Why hasn't Google gone after startups? Perhaps I'm missing something, but a turnkey package of computing, analytics, and advertising seems like a no-brainer.

boulos · 10 years ago
We are! We give $100k to vetted startups that aren't already big: https://cloud.google.com/startups
mikecb · 10 years ago
I use it for a bunch of personal projects and get billed between $15 and $30/mo.
Artemis2 · 10 years ago
How is the reliability? I want to like GCP but I have never trusted their services in general.
hasch · 10 years ago
I can't speak for the OP ... but from what I've seen, it's extremely good. Consistent, fast performance, and their proprietary "live migration" really stands out. Besides really good raw machine speed, the inter-networking is also far superior.
kennethh · 10 years ago
How did the change impact you? More control, lower cost?

Deleted Comment

rdl · 10 years ago
They've been a heavy Azure user too. Probably more than AWS.

I'm glad there are now at least 2 and probably 3 competitors for public cloud infrastructure. So many things were at risk, including adoption of public cloud in general, when it was a sole-source monopoly from Amazon (OpenStack/Rackspace/etc. was basically stillborn, VPSes aren't the same thing, and VMware was never really credible for public cloud).

Neither GC nor Azure is as comprehensive as AWS, but between them at least one is usually a viable alternative for any given deal.

tracker1 · 10 years ago
Google has some really interesting features, closer to Docker, so there are better mobility options from private/VPS to Google and back. They seem to have some of the best compute options out there, and tend to perform above the others in a lot of ways.

Azure's services are imho a bit easier to use, at least from my limited experience - mostly VMs, queues, tables and hosted SQL.

AWS has so many options and services it's hard to keep some of them straight... Lambda is really interesting imho, and some of their options for data storage are compelling to say the least.

Joyent's Triton/Docker option is really interesting, but their pricing model just seems too much for what they're offering. I do hope that they have success in terms of selling/setting up private clouds though... there's a lot of big companies that would be much better off with their solutions.

enraged_camel · 10 years ago
>>OpenStack/Rackspace/etc. was basically stillborn

What's wrong with Openstack/Rackspace?

paulryanrogers · 10 years ago
Feature creep if I recall correctly. Though Openshift is an interesting implementation.
redwood · 10 years ago
Yea big news. We all benefit from competition here
pori · 10 years ago
Can someone provide a little context on this exodus from AWS to Google Cloud? I understand in Dropbox's case that they (questionably) need their own infrastructure for cost savings. But then there's Apple and Spotify suddenly changing over. What's the advantage?

I have a fear that this trend among large companies is going to trickle down to smaller ones and independent devs. Considering these "Cloud Wars", I can see stories like this continuing with different providers. Ultimately, a scenario could occur where one year one provider is king, and the next, everyone decides they need to migrate to the next big thing. That would be irritating for us contractors. We would have to learn new interfaces and APIs at the same rate as JS frameworks.

outside1234 · 10 years ago
There is no exodus. There are a lot of companies moving to multi-cloud, which makes sense from a disaster recovery perspective, from a negotiating perspective, and possibly for cherry-picking the best parts of each platform.

This is what Apple is doing. They already use AWS and Azure in large volume. This move adds the #3 cloud vendor to the mix and isn't really a surprise.

pori · 10 years ago
Thanks for the answer. That makes a lot of sense. I guess, to some degree, I did know this. But the media has been portraying these moves as complete migrations, hence the whole "exodus" hype. It bothers me still, because this rhetoric may lead to the scenario I described above for smaller companies.
pbarnes_1 · 10 years ago
Mmm... I think you'll be seeing them push more AWS/Azure stuff onto GCP. :)

Deleted Comment

jmspring · 10 years ago
This.

If you can afford it, multi-cloud makes sense. Reduced risk to outages, etc.

Personally I've seen smaller companies also doing the same.

campers · 10 years ago
It's more catch-up than an exodus, but also overtaking in some ways. Short version: I'd say pricing and data processing (DataFlow, DataProc and especially BigQuery). Their core network infrastructure is more advanced, and live migration is pretty nice too.

Long version: the recent posts on Spotify's and Quizlet's moves to GCP dive deep into their reasons why.

https://cloudplatform.googleblog.com/2016/02/Spotify-chooses...

https://cloudplatform.googleblog.com/2016/03/free-online-lea...

rodgerd · 10 years ago
> That would be irritating for us contractors. We would have to learn new interfaces and apis at the same rate of JS frameworks.

Heaven forbid cloud computing move beyond the current 1960s "You only buy from IBM" model, especially if it's "only" benefiting the customer.

imperialdrive · 10 years ago
I'm so sick of EC2's rogue 'underlying hardware issues' and EBS volumes dropping dead... The AWS Console status will say everything is 'Okay' even when there are major problems - it's a joke... I wonder to myself, is it because I recently migrated over (December 15) and they are starting to buckle? Really a bad experience. At this rate I'll be looking at Google next month, or going back to colo (25 servers, 100TB) - so not much, but still worth doing right.
soccerdave · 10 years ago
I've had ~25-30 instances running for the past 3 years and only had 1 or 2 instances have hardware issues, never had issues with EBS. Running on us-west-2 but it seems like more issues happen in us-east-1.
imperialdrive · 10 years ago
I'm on Virginia zone D... the other day a 15TB EBS volume went down, even with its status showing as good. Their explanation, which took a lot of time/energy to get, was that the 2nd replicated copy had a failure, and when rebuilding from the 1st good replicated copy (the primary) it suffered an unknown error taking down that copy as well... I was upset, to say the least.
mmmBacon · 10 years ago
1 or 2 failures out of 30 is a really high failure rate for HW.
dantiberian · 10 years ago
Would be interesting to know what kind of discounts Apple got on this. It's a massive PR win for Google, the kind I expect they could give $100m for. Apple is also notorious for getting a very sharp price from their suppliers, so the combination suggests there were some steep discounts.
vidarh · 10 years ago
The public cloud prices bear no relation whatsoever to what large customers pay.

I know people spending less than $1m/month that are paying ~25% of the public prices on one of the top three cloud providers. Frankly, I'd be surprised if Apple is paying more than 10%-15% of the public pricing.

The reason is that above that point, you can save massively by going to more traditional dedicated hosting.

massemphasis · 10 years ago
Apple was very happy that Google gave S. Korea a very public smackdown at their own game with the AlphaGo AI software. If only I were kidding.
fidget · 10 years ago
My guess is that it's pretty much just BigQuery. No one else seems to be able to compete, and that's a big deal. The companies moving their analytics stacks to BQ, and thus GCP, probably make up the majority (in terms of revenue) of customers for GCP.
mikecb · 10 years ago
Given how cheap bigquery is, there would have to be a lot more bq-only customers than customers that use other services. And given how seamlessly the different products work with each other, any beachhead product like bq will quickly garner more product usage.
kodablah · 10 years ago
I doubt it. Not only does Apple (maybe?) run one of the largest Cassandra clusters in the world, but surely they wouldn't leverage cloud provider features over open source alternatives for fear of vendor lock-in.
lern_too_spel · 10 years ago
Cassandra and BigQuery are not at all comparable. BigQuery's open source competitors are Impala, Presto, and Drill.