dzdt · 8 years ago
AWS sells optionality. If you build your own data center, you are vulnerable to uncertain needs in a lot of ways.

(1) your business scales at a different rate than you planned -- either faster or slower are problems!

(2) you have traffic spikes, so you have to over-provision. There is then a tradeoff in doing it yourself: do you pay for infrastructure you barely ever use, or do you have reliability problems at peak traffic?

(3) your business plans shift or pivot

A big chunk of the Amazon price should be considered as paying for future flexibility. It isn't fair to compare prices looking backwards, where you already know what your actual needs were and can compare what it would have cost to meet them on AWS vs. in house.

The valid comparison is forward looking: what will it cost to meet needs over an uncertain variety of scenarios by AWS compared to in-house.

The corollary of this is that for a well-established business with predictable needs, going in-house will probably be cheaper. But for a growing, changing, or inherently unpredictable business, the flexibility AWS sells makes more sense!

IgorPartola · 8 years ago
You are right with this analysis. The only thing to add is why Amazon still isn’t the only choice and why sometimes it is in fact better to not use it.

Your comment makes it sound like unless you know your growth pattern, can predict your spikes, don't plan to pivot, and know all of these things perfectly, you will lose. That's not the case. The reason is that there is a quite significant cost difference between AWS and DIY. DIY is cheaper by a large enough margin that you might be able to buy, say, double the capacity you need and still spend less. So even if you misjudged your growth and your spikes by up to 2x, you are still fine.

Even if you are a small operation, you still have the option to lease hardware. Then your response time to add new servers is usually hours, not days like if you were to go the full own-your-hardware route.

As an exercise, you can try to rent and benchmark a $300/month server from e.g. SoftLayer and then compare that against a $300/month EC2 instance. Chances are, you will be blown away by the performance difference.
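One crude way to run that comparison (just a sketch, not a rigorous benchmark; the round count and file size below are arbitrary) is to run the same CPU- and disk-bound script on both boxes and compare the wall-clock times:

    # bench.py -- run identically on both machines and compare timings
    import hashlib, os, time

    def cpu_bench(rounds=2_000_000):
        """Repeatedly hash a small buffer to exercise a single core."""
        start = time.time()
        buf = b"x" * 64
        for _ in range(rounds):
            buf = hashlib.sha256(buf).digest()
        return time.time() - start

    def disk_bench(path="bench.tmp", size_mb=512):
        """Write and fsync a file for a crude sequential-write figure."""
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(size_mb):
                f.write(os.urandom(1024 * 1024))
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)
        return time.time() - start

    if __name__ == "__main__":
        print(f"cpu:  {cpu_bench():.2f}s")
        print(f"disk: {disk_bench():.2f}s")

Crude, but run on both boxes it makes the performance-per-dollar gap visible immediately; tools like sysbench or fio will give you a more serious picture.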

guitarbill · 8 years ago
I don't think anybody will dispute that if you have a specialised workload (CPU heavy, storage heavy, etc.), there are definite cost savings at scale for DIY over cloud.

But the calculation is harder than that. People are terrible at estimating the ops personnel cost for DIY. It turns out it's hard to run with minimal downtime, or to engineer hardware + software to be fault-tolerant. It's hard to put a price on dev tools/APIs/docs/the whole ecosystem.

Especially for that last reason, I have never been "blown away" by Softlayer, even when their instances were way beefier than anything AWS/GCP offered. YMMV.

mylons · 8 years ago
You make a good point, but my anecdotal experience says you really need 10-30x your average capacity for peak times, not double. In an ideal world you'd have your "cheap" datacenter that you use most of the time, and somehow extend into the cloud for the truly rare event when you need 30x. I'm not sure how feasible that is, though.
chiefalchemist · 8 years ago
Put another way: Would Dropbox have gotten off the ground and progressed as it did without AWS?

Probably not.

Outgrowing AWS is a great problem to have.

hashkb · 8 years ago
Not to mention they basically resold a single AWS feature. All they had to do was rebuild S3 (and ec2).
chiph · 8 years ago
> A big chunk of the Amazon price should be considered as providing flexibility in the future.

A recent place I worked at didn't understand this. They were going to the cloud as a strategic move because they didn't want to run a data center any more. Their computing needs were stable and predictable - no "Black Friday" sales that needed triple the servers for a short part of the year. They were going to end up paying far more for the total computing cost than if they had just kept buying blade servers and VMWare licenses.

peteretep · 8 years ago
I've worked somewhere that didn't care. A business SaaS platform that moved to the cloud because maintaining servers was dull and taking up too much developer and devops headspace. The entire yearly spend was about as much as the salary of an ops person.

I'd argue that companies where the hosting costs are the primary, or even a significant cost, are a small minority.

opportune · 8 years ago
I worked somewhere where the resident DevOps/Sysadmin guy would rather repeatedly write me emails about using an extra 10GB of disk for a data science project than just buy more storage. And this was on an instance with like half as much memory as disk available. There are some people in this industry who just have zero business sense.
san_at_weblegit · 8 years ago
This is a pretty common trend in lots of places. These kinds of decisions are driven by things other than computing needs. Maybe to look cool/cloudy/nextgen ...
manigandham · 8 years ago
You can design for and get steady-state discounts on cloud too. It's not only about flexibility but also maintainability and ops overhead. The increased spend on cloud is still usually less than the cost of a sysadmin/IT team and the handling of inevitable hardware and DC issues.
gota · 8 years ago
This is what happens when directors and C-level folks get a reputation bonus from being able to talk about how they "led a migration to the cloud" in their previous company.
Spooky23 · 8 years ago
A lot of the time companies can realize short-term savings because of how they depreciate, or are taxed on, assets like server rooms and equipment.
zimbatm · 8 years ago
It's good to understand the different price dynamics, and it's useful to have some rules of thumb to avoid long cost calculations.

For most startups I would actually advise starting with Heroku, which is even more expensive than AWS (it is built on top of AWS). But you save on devops and on building the CI/CD pipeline. For a small team it can make a big difference in terms of productivity.

For worker workloads like CI, renting dedicated hardware (like Hetzner) is usually cheaper and produces more reliable results. Spot instances also work but are less reliable due to machines cycling. The main factors for keeping everything on AWS would be egress bandwidth pricing, or workload spikes bigger than 2x.

hinkley · 8 years ago
I am still holding my breath for the tools to mature to the point that people can run their own data centers again with less overhead. My lips are getting a little blue but I see some hopeful signs.

For number 2 especially, there have been some cool projects for efficiency gains when different parts of the organization experience different traffic spikes. Like Netflix, where transcoding spikes precede viewing spikes, and they pulled off a load balancing coup to reduce peak instances.

I think the right thing for a tech company to do is to run their own data center in at least one town where they operate, and use cloud services for geographic distribution and load shedding.

The temptation to reduce your truck numbers below long-term sustainable rates is too great, and so is lock-in. The best thing I think you can do for the long-term health of your company these days is to hold cloud infrastructure at arm's length. Participate, but strongly encourage choosing technologies that could be run in your own server rooms or on another cloud provider, like Kafka, Kubernetes, and readily available database solutions. Make teams avoid the siren song of the proprietary solutions. We have already forgotten the vendor lock-in lessons of the 90s.

portent · 8 years ago
A good option can be to use your own data center for base load, and on top of that use AWS for traffic spikes. That way you still have the flexibility to adapt quickly but at a lower cost, once you reach a certain scale.
gaius · 8 years ago
> use your own data center for base load, and on top of that use AWS for traffic spikes

Much easier to go hybrid on Azure, at least until AWS and vSphere integration is ready for prime time

foobiekr · 8 years ago
AWS only provides optionality if you don't get deeply into the AWS services. If you do you find that optionality is absent.

A startup I talked to not too long ago has a revenue of around 1M a month - pretty great. Their Amazon bill is around 1M a month - not so good. Their entire architecture is one AWS service strung into another - no optionality at all.

breischl · 8 years ago
I think you're talking about a different kind of optionality than the parent post. That was talking about option to expand/contract capacity, change infrastructure, or all the usual "elasticity" that they tout.

You're talking about vendor lock-in. Which is a totally valid point and something to be aware of, but basically orthogonal.

AFNobody · 8 years ago
> (1) your business scales at a different rate than you planned -- either faster or slower are problems!

> (2) you have traffic spikes, so you have to over-provision. There is then a tradeoff in doing it yourself: do you pay for infrastructure you barely ever use, or do you have reliability problems at peak traffic?

> (3) your business plans shift or pivot

Anyone at any real scale has a multi-datacenter setup and AWS is effectively just a very elastic datacenter you can tap into. You could just as easily tap into Google Cloud or Azure. You do not need to operate 90% of your business in AWS to use AWS.

> The corollary of this is that for a well-established business with predictable needs, going in-house will probably be cheaper. But for a growing, changing, or inherently unpredictable business, the flexibility AWS sells makes more sense!

It's still cheaper to go in-house with a multi-DC setup in anything but the smallest case: a single-DC AWS setup with very few nodes.

Architecture for a decent-sized business should be set up with 3 DCs in mind /anyway/. You run two primaries, scale <insert cloud provider> as needed, and only leave some DB servers there most of the time.

mancerayder · 8 years ago
Sure, but isn't the REAL other selling point of AWS aside from an elastic virtual datacenter that it automates away sys admin tasks and ultimately people?
ape4 · 8 years ago
So perhaps the best approach is to use your own servers for the guaranteed minimum load you will see and use more expensive AWS for everything else.
tjoff · 8 years ago
Over-provisioning will still be cheaper than the cloud.

Also, in my opinion the know-how and full control of the complete stack is paramount to maintain a quality service.

In addition, you can do both. Tying yourself to one provider is an unnecessary risk and tying yourself to the cloud is not ideal.

alexbanks · 8 years ago
The obviously unanswerable question would be: would Dropbox have been able to succeed as a startup while building their own data centers in-house?

I assume that answer is a no.

FatalBaboon · 8 years ago
I would bet they still burst-out to AWS.

For me the logic is more like: get cheaper machines (be it in-house or from cheaper providers) running Kubernetes, for example, and monitor them with Prometheus.

If you run out of capacity, defined by whatever metric you fancy from Prometheus, start EC2 machines for the burst (roughly as sketched below).

Every month, re-evaluate your base needs.
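A rough sketch of that loop, assuming the Prometheus HTTP API and boto3; the query, threshold, AMI ID and instance type are placeholders you'd swap for your own setup:

    # burst.py -- if cluster CPU use exceeds a threshold, launch an EC2 burst node
    import boto3
    import requests

    PROM_URL = "http://prometheus.internal:9090/api/v1/query"   # hypothetical endpoint
    QUERY = 'avg(1 - rate(node_cpu_seconds_total{mode="idle"}[5m]))'
    THRESHOLD = 0.80

    def cluster_cpu_utilization():
        """Ask Prometheus for average CPU utilization across the fleet."""
        resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        return float(result[0]["value"][1]) if result else 0.0

    def launch_burst_node():
        """Start one pre-baked worker instance in AWS."""
        ec2 = boto3.client("ec2", region_name="us-east-1")
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder: pre-baked worker image
            InstanceType="c5.2xlarge",
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "burst-worker"}],
            }],
        )

    if __name__ == "__main__":
        if cluster_cpu_utilization() > THRESHOLD:
            launch_burst_node()

In practice the new node would still need to join the cluster (cloud-init / user data) and you'd want a matching scale-down path, but the shape of the loop is the same.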

thathappened · 8 years ago
Amazon was a phase. They bridged the gap for dot-com companies and new thinkers not interested in starting a devops team to begin with.

Now DigitalOcean, Vultr, and other options exist to fill the gaps even more.

AWS was never the solution to content distribution or infrastructure replacement; just a company smart enough to notice big gaps and not afraid to fill them to get the ball rolling, maintain and progress, then move on.

blitmap · 8 years ago
I really like this comment. Amazon offers a lot of services (with ridiculous names) but you are too right that you're paying for the flexibility to pivot as your needs change. Dropbox did well to recognize its needs after a time using AWS.
Someone1234 · 8 years ago
There are two good reasons to use a service like AWS:

- You're too small for efficient economies of scale on your own equipment (i.e. AWS is cheaper when considering total cost of ownership).

- You need to scale rapidly to meet demand

The second one is largely a data issue: if you have enough historical data on your customers, their habits, usage, and so on, then scaling becomes predictable; and even when it isn't, you can offload just part of your infrastructure to a cloud vendor.

What's interesting is that several companies that I know which rely on AWS/Azure/et al aren't on it for either of the two above stated "good" reasons.

They are large businesses and do almost no automated scaling. They're on it for what can only be described as internal political limitations, meaning that they are on these services to remove the politics of technology infrastructure: one less manager at the top, a shorter chain of communications, an external party to blame when something does go wrong, and issues like HR/benefits/etc. for infrastructure employees being out of scope.

In effect they view themselves as "not a technology company", so they look to employ as few technology employees as they can. Even in cases where technology is paramount to their success. It is very interesting to watch, and I'm not even claiming they're "wrong" to handle their infrastructure this way, just that it is hard to quantify the exact reasoning for it.

mdasen · 8 years ago
Those are definitely two good reasons, but I think there's another: you are starting up and your core competency isn't object storage.

I'd argue that the core competency of Dropbox is its easy syncing. Dropbox wanted to get that to market quickly. If they had spent the time building out a data storage solution on their own, it would have meant months or years of work before they had a reliable product. Paying AWS means giving Amazon some premium, but it also means that you don't have to build out that item. It's not only about economies of scale and rapid demand. It's also about time to market.

I think it's a reasonable strategy to calculate out something along the lines of "we can pay Amazon $3N to store our data or store it ourselves for $N. However, it will take a year to build a reliable, distributed data store and we don't even know if customers want our product yet. So, let's build it on Amazon and if we get traction, we'll migrate."

S3 is a value-added service and creating your own S3 means sinking time. Even though data storage is very very near to Dropbox's core competency, it's really the syncing that was the selling point of Dropbox. To get that syncing product in front of customers as fast as possible, leveraging S3 made a lot of sense. It gave them a much faster time to market.

As time went on, they had traction, and S3 costs mounted, it made sense for them to start investing in their own data storage.

It's about figuring out what's important (the syncing is the product) and figuring out what will help you go to market fast (S3) and figuring out how to lower costs after you have traction (transitioning to in-house storage).

Yes, a lot of companies use cloud services when they don't need them. However, Google Cloud's compute pricing is reasonably similar to DigitalOcean (with sustained usage discounts) and from what I hear these companies will often negotiate discounts. AWS can seem a bit pricy compared to alternatives, but I'm guessing that Amazon offers just enough discounts to large customers that they look at the cost of running their own stuff and the cost of migration and Amazon doesn't look so bad.

Still, when you're trying to go to market, you don't want to be distracted building pieces that customers don't care about when you can rent it from Amazon for reasonable rates. You haven't even proven that someone wants your product yet and your time is better spent on delivering what the customers want rather than infrastructure that saves costs. As you mature as a company, the calculus can change and Dropbox seems to have hit that transition quite well.

baddox · 8 years ago
A slightly similar other reason could be that your server costs are so low relative to the size of your business that the ROI of moving off of AWS doesn’t make sense for the conceivable future.

I imagine this could be the case for a lot of smaller tech startups, and perhaps even some larger companies that don't have significant web traffic or ongoing real-time compute services.

Something like Gusto might be a good example. I would guess that each of their paying customers (employees of companies using them) leads to only a handful of initial or yearly setup tasks and maybe a handful of web requests per month, but represents solid revenue.

The most obvious counterexamples would be any company with persistent real-time services, like Dropbox or Mixpanel, or companies with a huge number of web requests and a very small rate of conversion to revenue, like an ad network or an ad-supported social network or media site.

candiodari · 8 years ago
> Yes, a lot of companies use cloud services when they don't need them. However, Google Cloud's compute pricing is reasonably similar to DigitalOcean (with sustained usage discounts) and from what I hear these companies will often negotiate discounts. AWS can seem a bit pricy compared to alternatives, but I'm guessing that Amazon offers just enough discounts to large customers that they look at the cost of running their own stuff and the cost of migration and Amazon doesn't look so bad.

Of course, the dollar amounts dropbox saved are compared to those negotiated prices.

In my experience it isn't so much the storage price itself, but the network transfer that makes AWS absurdly expensive.

Most companies don't need something like S3. They can get by perfectly well with one server, maybe using RAID-1, or just with backups. Data corruption mostly happens through logical errors anyway, and nothing in S3 will protect you from that.

jwhitlark · 8 years ago
Agreed. De-risking non core competencies is very rational, especially while iterating business strategies.
bistro17 · 8 years ago
>However, Google Cloud's compute pricing is reasonably similar to DigitalOcean (with sustained usage discounts) and from what I hear these companies will often negotiate discounts.

Given Dropbox's storage use case, what would the percentage savings have been if Dropbox had indeed gone with Google or DigitalOcean?

tyingq · 8 years ago
Google's network egress pricing is much higher than DigitalOcean's.
hinkley · 8 years ago
Because I'm in the industry I've seen software companies run by the legal department or the HR department. But I've also witnessed both software and non-tech companies where the IT department controls everything. (I know that sounds weird, but a company selling on-premises software, for instance, should NOT be kowtowing to the IT dept.)

In every one of those cases the group in charge has been the wrong group and it really makes you wonder who has been asleep at the wheel so long that this has occurred.

Maybe outsourcing to AWS for a couple of years is a good way to reboot the organization. Cheaper than slowly going out of business. When the fad dies down you start hiring people back who are a little more humble and cheaper than AWS.

dalbasal · 8 years ago
Ultimately, technical limitations (i.e., problems that would exist regardless of the people) are not always, or even usually, the limiting factor in whether companies work. Human limitations like having the wrong group in charge are the limiting factor. Ways around that are potentially good.
kev009 · 8 years ago
Well said, but I disagree that AWS is some magical solution; in your context it's just a scapegoat to catalyze change in general, or a mulligan trading brownfield for greenfield. It'd be far more effective to simply hire competent people to fix the organization, but that requires competent people somewhere in the governance of the organization (i.e. a board or major stakeholders).
heisenbit · 8 years ago
I often wonder about the value of outsourcing, and a lot of the deals I see are related to generational change, whether in management, the technology stack, or a lopsided age structure. Not all of them acknowledge these realities, which injects conflicting targets into the execution.
gaius · 8 years ago
> They're on it for what can only be described as internal political limitations, meaning that they are on these services to remove the politics of technology infrastructure

Heh, at one previous company it would take 6-9 months to provision a new VM and 12-18 months for a physical. Those entrenched IT organisations absolutely deserve to get their lunches eaten.

zxcmx · 8 years ago
Was about to make this exact comment. Endless forms to fill out to move a network cable.

Buuuut at my current workplace I am starting to see some slowdowns in doing AWS stuff as "departments" get more involved.

Like the "cloud team" does the account but networks must provision the VPC, and there's a "gateway review board" that gets involved if you define a new network egress or ingress etc etc.

I feel like many of the early advantages of cloud in enterprise are going to get eroded as the paper pushers catch on and "add value by defining process".

bigcostooge · 8 years ago
This is my current life. Testing an idea costs $25M. It’s absurd, and most of my time is spent filling out forms or explaining the basics of the web to idiots.
smooc · 8 years ago
Hiring manager here.

While your reasons are valid you are missing an important one:

Resource scarcity: the engineers I would need to allocate to infrastructure, I would rather have working on user-facing features and improvements. Talent is scarce; being able to outsource infrastructure frees up valuable engineering time.

This is one of the main reasons, for example, that Spotify (I’m not working for them) is moving to google cloud.

vidarh · 8 years ago
I do devops consulting, and typically I end up with more billable hours on AWS setups than on bare metal or managed server setups. The idea that AWS takes less effort to manage is flawed. What tends to happen instead is that more of it gets diffused out through the dev team, who often don't know best practices, and nobody tracks how much of their time gets eaten up by devops work when there's nobody explicitly allocated to do the devops tasks.

There can be advantages in that the developers can often do the tasks passably well, so you can spread the work around, but if it's not accounted for, a lot of the time people are fooling themselves when it comes to the costs.

When it comes to large companies like Spotify, the situation changes substantially in that they're virtually guaranteed to pay a fraction of published prices (at least that's my experience with much smaller companies that have bothered to negotiate).

kev009 · 8 years ago
I would phrase it differently as competence scarcity.

It doesn't take many people to run a soup-to-nuts business: think WhatsApp's 50 engineers, or Netflix's 100-person OCA team (if you don't think OCA is a standalone product, you don't know much about the technology business) doing 40% of the Internet by volume. The vast majority of people who work in technology just aren't very good. Business governance grossly underestimates the effects of mediocre performance.

So the real question is why governors aren't trying to encourage WhatsApp- and OCA-style businesses; it's far more cost efficient. I understand why an organization empire-builds on its own: misaligned incentives.

gaius · 8 years ago
> the engineers I would need to allocate to infrastructure, I would rather have working on user-facing features and improvements

Cloud services still need configuring and managing. You're saving 2-3 days upfront on racking and cabling, on boxes that will last at least 3 years, probably longer. So if this is your only reason, you're making a false economy; eventually the costs of an undermanaged cloud will bite you (e.g. VM sprawl, networking rules that no one understands, possibly-orphaned storage, etc.).

bartread · 8 years ago
How does that work out? In the situations I've worked in where AWS has been used extensively (granted, only a small handful), what ends up happening is that everyone ends up doing "devops". Whatever that might mean in a formal sense, the way I see it play out in reality is that every engineer ends up having to spend time tinkering with the infrastructure, so does it really free up valuable engineering time?

For personal projects, I use AWS and Azure (though I am likely to migrate everything to a single box at OVH because it turns out to be cheaper for better performance - go figure) and it's made a certain amount of sense up to now. At work we use dedicated hardware, because the cloud can't deliver the bang per buck.

noway421 · 8 years ago
Uber does the same. It's the good old "buy it or build it in-house" question. If the total cost of ownership is higher, you want to change.
dalbasal · 8 years ago
Well...first off, I don't think that "human factors" like wanting one less manager, are always that bad.

Those two reasons (for needing AWS) are the technical problems that AWS can solve but you can't solve in-house. That is, they are not even solvable on a boardroom whiteboard, where the board pretends everything is just a matter of resource (money) allocation.

But (imo) most of the things that companies fail on... it's not because it is impossible to do a good job. They fail for less inevitable reasons.

In any case, I actually like the strategy where you try to be good at the things that you're good at but minimize things you need to be good at.

Dropbox knew that AWS was expensive. If the numbers here are real, then in-housing would have been a big efficiency gain (on said boardroom whiteboards) for years. Makes sense when you consider what Dropbox does.

I assume they paid this price because it let them avoid being an infrastructure company. They would have had to be a very good infrastructure company. Why introduce this additional failure point, limiting factor or whatnot?

I (and maybe you too) have seen the kinds of problems that AWS solves be the limiting factor in a bunch of companies. The fact that they're technically solvable is almost academic, at some point.

tldr, sometimes it's good to solve certain problems with money.

dvfjsdhgfv · 8 years ago
Arguably the most popular reason is "everyone else is using it". This is what I hear all the time. It usually goes like: "Why aren't we using AWS?" "Well, because it's a few times more expensive than our current infrastructure." "How can that be? If it's true, why are all the other companies using it?" "They fell for the hype, just like you." Then we sit down, we do some calculations, we check different scenarios, and it always turns out AWS is more expensive than dedicated servers. In the past people also used a very strange argument that with AWS you don't need to pay IT staff anymore, but I no longer hear this argument; I think most companies have already realized how ridiculous it is. The most recent fad is the "serverless revolution", with some people claiming this time for sure the IT staff is unnecessary since the app developer can take care of everything. Good luck with that fantasy.
carlsborg · 8 years ago
IMO there are many many more reasons:

- You need to iterate rapidly with scale and reliability. If you have the right expertise this becomes very quick to set up, and it lets you focus 100% on product iterations.

- You need (predictable) on-demand compute for crunching large amounts of data or running batch jobs. It just doesn't make sense to do this on your own equipment.

- Your median CPU utilization is low, so to save costs you move to a serverless architecture, effectively moving the CPU utilization you pay for to 100%.

- But most importantly, AWS isn't just compute and storage primitives; it has a vast array of abstractions on top of them: managed clusters, machine learning services, virtual desktops, app streaming, CI/CD pipelines, built-in IAM and role-based access control, to name a few!

mooreds · 8 years ago
Don't forget that all of the cloud providers offer robust sets of APIs and SDKs to automate provisioning of all these services. That is valuable apart from the services themselves.
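As a small illustration, standing up a bucket and a managed database is just a few SDK calls (a sketch using boto3; the bucket name, DB identifier, and password are placeholders):

    # provision.py -- spin up a bucket and a small database via the SDK alone
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="example-team-artifacts-2018")    # placeholder bucket name

    rds = boto3.client("rds", region_name="us-east-1")
    rds.create_db_instance(
        DBInstanceIdentifier="example-app-db",     # placeholder identifier
        DBInstanceClass="db.t2.micro",
        Engine="postgres",
        MasterUsername="appuser",
        MasterUserPassword="change-me-please",     # placeholder; keep real secrets elsewhere
        AllocatedStorage=20,
    )

Everything from networking to DNS can be driven the same way, which is what makes infrastructure-as-code tooling possible on top of these APIs.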
rocky1138 · 8 years ago
It's also easy to hire people who have AWS experience. It's getting harder and harder these days to find anyone who has actually seen the inside of a data centre.
Bombthecat · 8 years ago
The lock-in is even worse than people think.

Now more and more companies are locked into Amazon.

And it's hard to find a good old datacenter admin.

smueller1234 · 8 years ago
I'd like to challenge your implicit assertion that if scaling is predictable because you know customer behavior patterns, then (this is the part I'm assuming you're implying) scaling is doable, or even easy.

The counter argument I have is that at different sizes of operations, completely new skills become important, so you and your staff are left behind.

Example: my previous employer became large enough in terms of hardware footprint (~>1M cores) that it started getting difficult to find commercial colo space. How good are software and systems engineers at electrical engineering? :)

vidarh · 8 years ago
The alternatives aren't either AWS or host yourself. You can rent managed servers for a fraction of the price of AWS instances.

Granted, if you need 1M+ cores, you're going to be dealing with humans most places (including AWS) to get the best deal possible, and that also means the cost differences can change fairly substantially (e.g. the instances I know of that are in "ask us" territory are not paying anywhere near published prices)

That said 1M cores is not that much. Depending on your needs it's as little as "just" 500 racks. Plenty of managed providers will be happy to provide customized service to e.g. design and/or manage a setup for something that size.

goatherders · 8 years ago
I've sold and competed against AWS in the last 4 years and you hit the nail on the head. "Someone else to blame" drives a lot of these decisions. It's also important to note, additionally, that AWS has created an excellent marketing machine for their services. I've sold AWS instances to companies that would have been fine with a rack in the closet. But they've heard about the cloud and saw a press release where a competitor was going to "the AWS" and...the checks just write themselves.

As a tech enthusiast I love what's possible with AWS, Azure, GC, etc. As a salesperson I don't mind selling these services (although the margins stink compared to selling VPS or dedicated). But there is a lot of cloud-overkill going on out there.

nasalgoat · 8 years ago
When I did the calculations, the break-even point on AWS versus on-premises was about three servers - at that point it was cheaper to go with your own physical hardware.

The big reason for most people is CAPEX vs. OPEX - even if it doesn't make financial sense in a dollar amount, it does in an accounting sense. Investors don't like to see big CAPEX numbers but seem fine with large OPEX ones.

bonesss · 8 years ago
> Investors don't like to see big CAPEX numbers but seem fine with large OPEX ones.

If things go pear shaped large OPEX numbers resolve themselves as OP-erations get slimmed and shut down. Large CAPEX numbers, in the same situation, resolve themselves through liquidation and tears...

More importantly, OPEX comes from next year's profits, yielding a business I can loan against. CAPEX comes from last year's profits, increasing the amount of loans I need to get it together.

It's the difference between thinking about short term profit margins and thinking about asset growth over time. Throwing a lot of optional cash today at a problem is better business than being forced to throw non-optional cash at a problem whenever the problem is feeling problematic. It's also quite freeing in terms of M&A.

tr0ut · 8 years ago
+1. Throw all other rationale out the window if OPEX is what works. So here we are, migrating to AWS..
kristianc · 8 years ago
They absolutely are a technology company, but compute (EC2) and storage (S3/Glacier) is a utility, like power supply. This wasn't the case back when you needed capacity planning, but today with dynamic provisioning and cheap storage, it is.

No one tries to build their own power station, or make their own laptops. They're better off using engineering resources on higher order stuff, unless, like Dropbox, the margins / TCO you are getting on your storage is an absolutely huge deal.

pbhjpbhj · 8 years ago
Even domestic premises have solar with grid top-up (and they can sell back to the grid often too). Like having baseline server resources on your own hardware and AWS - or whatever - for handling high demand.
Symbiote · 8 years ago
But people do build their own power stations or hardware if they're big enough.

They might outsource most of the work, but then the gap between generating and using power is much more clearly defined.

nraynaud · 8 years ago
There is an opportunity cost to switching too: you don’t know your gain before switching is realized (hence you don’t know over how long to amortize the project), and the switching project could fail altogether. So there is a conservative argument to be made here.

bufferoverflow · 8 years ago
The first one is not a reason. If you're really small, you're much better off renting servers from a cheaper competitor, cloud, or even VPS.
joeevans1000 · 8 years ago
There's another good reason: avoiding AWS sticker shock, which is real. Better to spin up an OpenShift instance and know what's going on with prices. God knows what gets spent on AWS resources that have been forgotten or, more likely, that no one wants to run the risk of turning off. AWS has become very expensive for startups once you're past their short freebie intro phase.
viraptor · 8 years ago
Why would it be different with OpenShift? Either you know what you're running or you don't. Tagging on AWS gives you a per-dept/team/app cost split. If you don't use something like that, you'll be lost in OpenShift as well...
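For what it's worth, once resources carry an activated cost-allocation tag, that split is a single Cost Explorer query (a sketch; the "team" tag key and the dates are made up):

    # cost_by_team.py -- monthly spend grouped by a cost-allocation tag (sketch)
    import boto3

    ce = boto3.client("ce", region_name="us-east-1")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2018-01-01", "End": "2018-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "team"}],   # assumes a "team" tag on resources
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])

Note that the tag has to be activated as a cost-allocation tag in billing before it shows up in these reports.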
avip · 8 years ago
Every startup I know got $100-500K in AWS credits, so the "short freebie phase" is actually very long in practice.
ChuckMcM · 8 years ago
This isn't surprising. At Blekko, where I ran operations, we did the math not once but twice (once when we got our series C, to show that it made sense, and once when we got acquired by IBM, to show why moving it into SoftLayer was going to be too expensive). If you can build reasonable management tools and you need more than about 300 servers and a bunch of storage, cost-wise it's a win to do it on your own.

The best win is to build out in a data center with a 10G pipe into Amazon's network so that you can spin up AWS only on peaks or while you are waiting to bring your own stuff up. That gives you the best of both worlds.

fizwhiz · 8 years ago
So, rent the spike a la hybrid cloud?
vidarh · 8 years ago
Yes. The beauty of it is that building in the capability makes hosting your own even more cost competitive. Because while you might have to plan for 2x or even 3x your average to handle normal peaks if you don't have anywhere else to send traffic, if you can spin up cloud instances to handle spikes you can plan for far higher utilization rates for your own equipment.
sah2ed · 8 years ago
From reviewing the thread's discussion, it looks like businesses turn to cloud-based infrastructure for a number of reasons:

1. To outsource non-core activities to experts and reduce risk, for firms that see IT as a cost center. A cost-cutting measure.

2. To provide dynamic capacity for mature businesses that experience anticipated workloads that are short-lived (seasonal or computational needs). A cost saving measure.

3. To provide dynamic capacity for new ventures growing in popularity i.e. fluctuating capacity requirements. This saves on large upfront infrastructure costs when long-term viability hasn't been established. A risk management measure.

4. To be described as "innovative" because peers are doing it, for firms that see IT as a revenue center (in industries that view such investments as a source of differentiation). A form of virtue signalling.

5. To make the accounts look good to investors by accounting for it as OPEX instead of CAPEX [0]. A seemingly irrational but valid reason. High OPEX numbers are easier to justify and more importantly, can be pared down with less friction than CAPEX, if things go south from intense competition for instance. Another risk management measure.

[0] https://news.ycombinator.com/item?id=16458863

jeffwilcox · 8 years ago
Data centers aren't cheap, so unless you have the economies of scale to offer R&D investment and stock-based compensation to your employees to build a modern cloud DC, good luck with that... done right you can save operating expenses, but it'll take a huge investment that would not scale for others.
pathorn · 8 years ago
I strongly disagree. Datacenters are super cheap compared to EC2. (I'm not talking about building your own: start by leasing space from existing datacenters.) There are a surprising number of places where you can go and lease a rack, or ten, or a whole room, and be up and running in a couple of months.

I make the case that colocating pays off at just about any scale, assuming you have $10k in the bank, have a use for at least 40 cores and are able to pay upfront to handle anticipated scale.

Hurricane Electric has prices online of $300/mo for a rack. On AWS, a single full c4 machine (36 threads) costs $1.591 per hour x 24 x 30 = $1,145/mo -- this is more than the cost of running a whole rack with 40 machines. Decent internet can be had for a few hundred per month.

OK, so how about buying your own machines? An E5-2630 with 20 threads is $700, x 2 = $1,400; motherboard + disk + SSD brings it to a few thousand, so it will pay off in at most 6 months, and we're not even talking about bandwidth or storage costs. Depending on the application you could be looking at a payoff after 2-3 months.
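A back-of-envelope version of that payback math, using the figures above (the build cost and the whole-rack-to-one-box attribution are rough assumptions):

    # payback.py -- months until an owned server beats on-demand EC2 (rough sketch)
    EC2_HOURLY = 1.591                   # full c4 instance (36 threads) quoted above
    EC2_MONTHLY = EC2_HOURLY * 24 * 30   # ~ $1,145/mo

    RACK_MONTHLY = 300                   # Hurricane Electric list price for a rack
    SERVER_BUILD = 3000                  # assumed: 2x E5-2630 + board, RAM, disks, SSD

    # Worst case: charge the entire rack cost against this single server.
    monthly_saving = EC2_MONTHLY - RACK_MONTHLY
    payback_months = SERVER_BUILD / monthly_saving
    print(f"payback in about {payback_months:.1f} months")   # ~3.5 with these numbers

Bandwidth, power, and the ops time discussed elsewhere in the thread aren't in there, so treat it as a lower bound on the real payback period.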

Worried about installation or remote management? IPMI, iDRAC, etc., included with basically every server, make this a piece of cake.

The only good cases for cloud are if you may suddenly scale 10x and can't predict it, don't have $10k in the bank, or don't have 1-2 months to order machines and sign a contract for rack space.

viraptor · 8 years ago
What you're missing in that comparison is an extra engineer (or two) who can deal with power needs, network hardware config, firmware updates, plans for rolling hardware, a stock of replacement drives, seamless base system updates, dealing with the platform (virt or containers) itself, and other things that the managed cloud gives you included in the price. Ah, and they need to be available on call. Hardware may be cheap; people aren't.
wahnfrieden · 8 years ago
This is a good comparison with EC2, but doesn't directly address the comparison with many other AWS products.
dekobon · 8 years ago
Nowadays, you don't need to do that yourself to run a cloud because you have companies like Joyent offering fully managed private clouds at a fraction of the cost of AWS. They will rack and stack, manage the cloud software, provide escape plans so you aren't locked into them as a vendor and provide architectural guidance to your app teams.

DISCLOSURE - I am a Joyent employee.

godzillabrennus · 8 years ago
Even Facebook leverages colo facilities for some of their POPs. Still, I think it's common that somewhere near the mothership these big guys build a facility from the ground up.
toomuchtodo · 8 years ago
If the revenue is there, it’s a no brainer to build out your own data center or acquire prebuilt space. Cloud provider margins become your cost savings (clearly, as evidenced by this article). You’re going to need infrastructure people regardless, whether it’s on-prem expertise or AWS/Azure.

It’s disingenuous to inflate the compensation for datacenter employees as a boogie man, or to wave away a “modern cloud DC” as a Herculean undertaking. There rarely is any R&D required, and stock compensation isn’t always necessary.

jeffwilcox · 8 years ago
Fair points, appreciate the insight. I think living in a tech bubble city clouds my thinking a bit much.

The only real example I have good knowledge of (1 of 1 cases) is a tech friend who worked for a firm acquired by a company wanting to build out data centers, and in the end everyone left after their acquisition stock awards vested, before really adding value to the corporation and delivering on the promise, because the company did not have a "tech r&d" comp plan in place to keep the subject matter experts employed. In the end the project and the DC expansion fell short.

bluedino · 8 years ago
So it only takes how many exabytes of storage for it to be cheaper than S3, by only < $40M?

For all this talk about how expensive S3 would be for a filesystem, and how poorly suited AWS is for this kind of thing, Dropbox seems to have made it work just fine.

_pdp_ · 8 years ago
Well, we removed all our servers from AWS and replaced them with Lambda functions and DynamoDB tables, which resulted in a 4.5x reduction in cost and increased performance by multiple factors. I suppose it all depends on what you are building and how you are building it. If you run servers, I think it is no secret that AWS is not the cheapest option around.
smt88 · 8 years ago
Did the switch to DynamoDB make a big cost difference? I've never really thought about the cost of Dynamo vs. RDS as being huge, but honestly I don't know.

For most people (including every commercial project I've ever worked on), the time-saving and safety benefits of relational schemas were far greater than any theoretical infrastructure savings.

guitarbill · 8 years ago
> Dynamo vs. RDS

This is so dangerous even to type out :) In some, limited cases, Dynamo can replace tables in an RDS, and completely outperform it, too. I'm a big fan, but it's fundamentally different from an RDS, and you can get burned, badly.

Oversimplified, DynamoDB is a key-value store that supports somewhat complex values. If you have large values, use S3 instead [0] - I think that's a good way to think of it, a faster S3 for loads of small records (with a nicer interface for partially reading and updating those records).

If you need to look things up by anything but the primary key, be careful: costs can get out of control from having to provision extra indices. If your primary keys aren't basically random, you'll run into scaling issues because of the way DynamoDB partitions. If you need to look at all the data in the table, DynamoDB probably isn't the right technology (it can do it, but scans absolutely tank performance = $$$).

[0] https://docs.aws.amazon.com/amazondynamodb/latest/developerg...
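To make the access-pattern point concrete, this is the shape of usage it's good at versus the shape that hurts (a sketch; the table name, keys, and attributes are hypothetical, and it assumes boto3):

    # dynamo_kv.py -- DynamoDB is fast by primary key, expensive for everything else
    import boto3
    from boto3.dynamodb.conditions import Attr

    table = boto3.resource("dynamodb", region_name="us-east-1").Table("sessions")

    # Cheap and fast: write/read a single item by its primary key.
    table.put_item(Item={"session_id": "abc123", "user_id": "u42", "ttl": 1520000000})
    item = table.get_item(Key={"session_id": "abc123"}).get("Item")

    # Expensive: anything that isn't a key lookup walks the whole table.
    # It works, but consumes read capacity proportional to table size ($$$).
    matches = table.scan(FilterExpression=Attr("user_id").eq("u42"))["Items"]

Queries against a well-chosen partition/sort key cover most of the middle ground; scans and piles of extra global secondary indexes are where the bill surprises people.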

bonesss · 8 years ago
I think looking at something like DynamoDb as a wholesale replacement for anything that needs a relational schema is... well begging the question a little.

Using DynamoDB as a hugely scalable, limited-scope data source in your app is likely where you'll find the optimal cost/scalability point. By using Dynamo for scalable read-heavy activity you let the rest of the app be barebones and "KISS", preserve competencies & legacy code, and retain the benefits of relational schemas. Dynamo's scaling then becomes your cloud-cost tuning mechanism.

By way of example(s): if your editors all use RDS and you publish articles to DynamoDB, you could be serving tens of millions of articles a day off a highly non-scalable CMS. If all your reporting functions pull from DynamoDB, you could be serving a huge enterprise post-merger while using the same payroll system as pre-merger. Shipping tracking posted to and grabbed from Dynamo, purchasing logic on the best Perl code you could buy in 1999 ;)

_pdp_ · 8 years ago
The biggest part of the bill is the DynamoDB indexes. The Lambdas are a tiny fraction in comparison. That being said, you can avoid using as many indexes as we do. We did it because we wanted our Lambda functions to be as pure and as micro as possible.

If you are going to do joins and that sort of thing, forget about DynamoDB. It cannot do that, and for good reason. That being said, our architecture is mostly SPA, so the lack of joins is solved at the client; there are just more calls to services client-side, but the effect is still a cheaper and faster product to run and maintain.

ransom1538 · 8 years ago
This is the future. Server code just gets "hooked" into an infrastructure. I am looking at Fargate by AWS. I think this would basically end all devops [Puppet, servers, etc.]. It is basically a simple automated hosted Kubernetes, and development is easier than Lambda since you just run a Dockerfile. I avoid DynamoDB and use RDS (MySQL) though, since I can get reports out quicker.
jbverschoor · 8 years ago
You mean it's the future from 20 years ago? EJB was exactly this
fefb · 8 years ago
Do you have a solution for HTTP triggers? Like your own gateway? Because I found AWS API Gateway expensive for triggering Lambda via HTTP events.
encoderer · 8 years ago
Last year I wrote about the AWS spend of my SaaS business, Cronitor [1]. I couldn't imagine building a service like this without modern cloud providers, but it is no wonder to me why AWS generates all of Amazon's profit.

Essentially our migration over the years looks like:

1) Moving to EC2 from a VPS: the EC2 instance with the same specs is notably slower, and you need to add a second instance where one worked before.

2) Moving to a managed service like RDS after running the database on EC2: the managed service with the same specs is notably slower, and you need a second instance where one worked before.

In the end, it's worth it, in the RDS example you're getting millisecond replication times, point-in-time database recovery, hot failover, etc. But still, it would be great just once if you got the same performance from an RDS 2xl as you'd get running your own DB on a 2xl of your own.

[1] https://blog.cronitor.io/the-aws-spend-of-a-saas-side-busine...

betterworldb · 8 years ago
> it is no wonder to me why AWS generates all of Amazon's profit.

Going out on a limb here but I'm guessing that Amazon makes a good amount of profit from their online retail store.