nodesocket · 10 years ago
Very nice. A direct competitor to the entry-level DigitalOcean 512MB servers which cost $5 a month. The great thing is that as you grow, you won't outgrow AWS, which is not necessarily true with DO.
_hyn3 · 10 years ago
Only if you're not actually using that instance!

You can't even log into the instance at all for that price. You also have to pay for bandwidth (very expensive) and EBS (not bad, unless you want it to be as fast as DO's local SSD).

Setting aside that the DO droplets are far more bang for the buck in terms of consistent CPU usage, and also (to be fair) that AWS offers far more scalability options, bandwidth alone is shockingly expensive at AWS.

At 9 cents per GB, you'd be looking at $90/mo for bandwidth alone for what DO gives you included in the $5/mo.

The $80/mo Digital Ocean server w/ 5TB of transfer would cost $450 at EC2 in bandwidth ALONE!

And, at DO, if you exceed that bandwidth allocation that's included for free, it's $20/TB vs $90/TB at EC2 (both charge outgoing only).
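A quick sanity check of those numbers (a sketch using only the prices quoted above, outbound traffic only):

```python
# Rough comparison of outbound bandwidth cost, using the figures quoted
# above: EC2 at ~$0.09/GB, DO overage at ~$20/TB, 5 TB included with
# DO's $80/mo droplet. These are the commenter's prices, not current ones.
ec2_per_gb = 0.09
do_overage_per_tb = 20.0

transfer_tb = 5  # transfer included with DO's $80/mo droplet
ec2_cost = transfer_tb * 1000 * ec2_per_gb  # cost of the same 5 TB at EC2

print(f"EC2 bandwidth cost for {transfer_tb} TB: ${ec2_cost:.0f}")
print(f"Overage per extra TB: DO ${do_overage_per_tb:.0f} vs EC2 ${ec2_per_gb * 1000:.0f}")
```

Which reproduces the $450 figure above, and the $20/TB vs $90/TB overage gap.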

These are costs that most people don't consider, but they really amount to poorly explained fine print. (Others include Glacier restore pricing, intra-region bandwidth, etc.)

(disclaimer: I'm an AWS certified SA and my SSH key manager startup, Userify[1], is an AWS partner, but even so we are still forced to use DO for a large part of our infrastructure -- especially where bandwidth is concerned.)

1. https://userify.com (cloud ssh key management)

copperx · 10 years ago
> You can't even log into the instance at all for that price.

What? Could you please explain how you would manage the instance if you can't log in to it?

tw04 · 10 years ago
There's a reason they are posting the profit margins they have been. AWS != cheap.
lqdc13 · 10 years ago
Why not just get two dedicated servers and have one be used as a backup?

For $60/mo you can get 4TB of storage, basically unlimited bandwidth, and a permanent 4XL-equivalent instance, which makes scaling up to that point extremely easy with no need to configure autoscaling, S3, inter-service communication, CloudFormation, or complicated fail-over strategies. Unlike AWS instances, which die frequently, a dedicated server is much less likely to malfunction.

manyxcxi · 10 years ago
Totally unrelated to this post, but THANK YOU for Userify. I don't know how you guys are making any money yet (or how you plan to), but you make my life so much effing easier!
jsmthrowaway · 10 years ago
And to use 5TB in a month you have to sustain over 15 megabit/sec 24 hours a day. DO is betting on you not doing that (and in aggregate nobody does), because in any other context sustaining 15 megabit is about as expensive as Amazon. Quote a 100 megabit drop some time if you don't believe me.

If every VPS on Digital Ocean and Linode actually used their quota, they wouldn't have the uplink capacity to support it and the network would fail. That's overselling. Numbers that high are extensive overselling. Linode has 40 Gbit links (at least they used to) and gives 2 TB (6 Mbps) to each small Linode, meaning about 7,000 Linodes actually using their quota would saturate the link. They have a few more than that. Do the math.
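The math above, sketched out (link capacity and quota figures are the ones claimed in this comment, not independently verified):

```python
# Sustained rate needed to burn a monthly transfer quota, and how many
# such tenants would saturate an uplink. Figures are from the comment
# above: a 40 Gbit/s link and a 2 TB/month quota per small Linode.
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59M seconds in a 30-day month

def sustained_mbps(quota_tb):
    """Average Mbit/s needed to use quota_tb terabytes in a month."""
    bits = quota_tb * 1e12 * 8
    return bits / SECONDS_PER_MONTH / 1e6

rate = sustained_mbps(2)    # ~6.2 Mbit/s to burn a 2 TB quota
rate5 = sustained_mbps(5)   # ~15.4 Mbit/s for 5 TB, matching the "over 15 megabit" claim
tenants = 40_000 / rate     # 40 Gbit/s uplink divided by per-tenant rate

print(f"{rate:.1f} Mbit/s sustained; ~{tenants:.0f} tenants saturate the link")
```

The result lands in the same ballpark as the "about 7,000 Linodes" figure above.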

The bandwidth quotas are sales stuff so that you will say exactly this in threads like these, and it's amazing how well it works.

jsmthrowaway · 10 years ago
No real reason to use Linode any more, either. The AWS skill set is valuable in the industry and using it for your personal stuff gives you a leg up careerwise.

My AWS skills (VPC architecture, Direct Connect, boto, etc.) have been a big hiring plus for me in the past. Since nobody relevant uses Linode for production any more due to security and other issues, probably time to move personal blogs and accrue transferable skills.

People also compare the big holy wow bandwidth/SSD/CPU offered by DO and Linode without accounting for the fact that you could almost never use it all at the price point without hitting (a) capacities of the instance and (b) annoyed employees. Jeff's data on how successful their CPU quota accounting is backs this up. If you're pegging a core in your workload you should probably own the core.

Seriously, think about it. DO offers you a terabyte on the low-end plan. You have to sustain 3 megabit/sec every second of every day to hit that. Maybe in some scenarios you are, but almost nobody running personal gear is doing that. The higher tiers are even more ridiculous. But the sales pitch works: people in this thread are worried about the mere potential to use bandwidth, rather than paying for actual usage.

copperx · 10 years ago
> nobody relevant uses Linode for production any more due to security and other issues

I'm not in the industry (I'm in higher education), but this is news to me.

Are there any use cases where Digital Ocean/Linode would be better than AWS? A small blog, website, perhaps?

nly · 10 years ago
How is less than half the storage and 1/500th the bandwidth competitive?

The problem with dropping a care-free VPS (what DO is for, and what you're comparing against) on AWS has always been transfer. Even my least used personal VPS eats 25-30GB/month in outbound. How much does that cost at AWS? Another $5? and then you're constantly worried about your usage month on month. I'd rather give $10-20/month to DO/Vultr/Linode to begin with and get more memory as a bonus.

AznHisoka · 10 years ago
I'd rather pay slightly more for an OVH server and not have to worry about the nickel-and-diming... and whether my system will crash overnight because it needs more than 1 gig of RAM.
brianwawok · 10 years ago
Is it? You can peg the DO CPU. You can only use 5% of the CPU here.

I think this is cool don't get me wrong. Lots of uses for a cheap low use server. But this is pretty niche.

yid · 10 years ago
From TFA:

> The t2.nano offers the full performance of a high frequency Intel CPU core if your workload utilizes less than 5% of the core on average over 24 hours.

This means you get the full CPU if you have a bursty workload, and really is no different from what DO's policy is:

> We do not set a cap on CPU usage by default but we do monitor for droplets doing a consistent 100% CPU and may CPU limit droplets displaying this behavior.

In the EC2 case, the CPU throttling policy is just explicit.

[1] https://www.digitalocean.com/community/questions/cpu-usage-a...
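That 5% baseline works out to a simple credit bucket. A toy model, using the t2.nano figures AWS published at launch (3 credits earned per hour, capped at 24 hours of accrual, 1 credit = 1 minute of a full core); treat the exact numbers as an illustration:

```python
# Toy model of the t2 CPU-credit bucket: a t2.nano earns 3 credits/hour
# (5% of a core) and can bank at most 24 hours' worth, i.e. 72 credits,
# which is 72 minutes of full-core burst.
EARN_PER_HOUR = 3          # credits/hour; 1 credit = 1 core-minute
CAP = EARN_PER_HOUR * 24   # 72 credits

def burst_minutes(idle_hours):
    """Full-core burst minutes available after idling for idle_hours."""
    return min(idle_hours * EARN_PER_HOUR, CAP)

print(burst_minutes(24))   # a day idle buys 72 minutes at 100%
print(burst_minutes(48))   # still 72: credits stop accruing at the cap
```

The cap is why a bursty workload gets full CPU but a sustained one gets throttled, and it's the same 72-minute limit discussed further down the thread.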

msravi · 10 years ago
I'm actually contemplating moving out of AWS to DO. I signed up for a 1-year m3.medium reserved instance with AWS (Singapore region) sometime in May when the reserved instance cost was ~$50/month (it's now $35/month but I'm stuck with their old rate).

This doesn't include bandwidth and IO requests.

I might understand charging separately for bandwidth, but IO requests? I end up making between 75M and 95M IO requests, and it adds a good $8 or so to my bill. Plus, there's EBS, which for 60GB (not SSD!) adds another $6.

So all in all, for ~$65 I'm getting a single core, 3.75GB RAM, 60GB storage, and am stuck with their old pricing for a year, whereas with DO for $40 I can get 2 cores, 4GB RAM, 60GB SSD and no "IO request" costs!
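Adding up those line items as stated (a sketch of the commenter's own figures, not re-derived from AWS price sheets):

```python
# Summing the line items from the comment above (Singapore region,
# old reserved-instance rate; figures as stated by the commenter).
aws = {
    "m3.medium reserved": 50,   # $/mo at the old locked-in rate
    "EBS 60GB magnetic":  6,
    "I/O requests":       8,    # ~75-95M requests/mo
}
do = {"2-core 4GB droplet w/ 60GB SSD": 40}

aws_total = sum(aws.values())
do_total = sum(do.values())
print(f"AWS ~${aws_total}/mo vs DO ${do_total}/mo "
      f"({aws_total - do_total:+d} difference, before bandwidth)")
```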

It's DO as soon as my 1-year is done.

jamiesonbecker · 10 years ago
EBS w/ SSD doesn't have I/O request fees attached. DO is still a better deal overall.

For crazy dynamic workloads with moderate bandwidth usage, AWS makes good financial sense. Also, S3 and SQS are phenomenal. (As are DynamoDB, Redshift, Lambda, and Kinesis, but these can get expensive fast.)

AWS really has simply awesome technology, and they can afford to charge what they want for it, and I do believe they strike a pretty good balance most of the time. Also, the Free Tier is really very nice and fair. I'd just like to see the cost model get a bit more transparent and a bit lower in fees, especially on the low end.

ersoft · 10 years ago
Did you check out Google Cloud? I moved all my company's instances to them and it ended up cheaper than Vultr (similar pricing to DO), while also benefiting from more features (storage, MySQL, etc.) and better uptime.

For $28/month you can get a similar instance to Amazon's (1 CPU and 3.75GB RAM), and 60GB SSD is another $10. 60GB standard storage is $2.40.

I'm impressed that Google Cloud is adding a couple of new features every month, and I can see them becoming a strong AWS competitor soon.

brianwawok · 10 years ago
I recently went from DO to GCE when my app graduated to big-boy status, and have been happy. Maybe the grass is greener ;)

If you only need like 1 server though, DO is great. In bigger systems you have to roll your own security and some other things (e.g. no block storage, the private IP ain't really private, etc.).

josegonzalez · 10 years ago
IO requests on ebs-backed volumes go across the network, so I guess they are charging for internal bandwidth in some sense.


asselinpaul · 10 years ago
Curious as to how one would outgrow DO.
nodesocket · 10 years ago
No centralized firewall (security groups), no ELBs, no EBS, no autoscaling, no ElastiCache, no RDS... Do I need to continue?
frik · 10 years ago
Are there any dedicated server hosts that can compete with AWS/DO on pricing and a flexible web interface? (I've heard good things about Hetzner, but I would need something in the US.)
sagargv · 10 years ago
How does Amazon implement instances like t2.nano? If there are 40 t2.nano instances on a quad core physical machine, what happens when all the users want 100% CPU, even if it's only for 10 minutes? Are instances automatically migrated to a different physical machine if this happens?
fulafel · 10 years ago
EC2 can't do migration, that's one of the things Google Compute has over EC2.
phamilton · 10 years ago
Why not, I wonder? Xen has supported migration for close to a decade now. I assume they don't use a compatible storage layer?
supersan · 10 years ago
I've tweeted at you twice and asked your customer support about this too, but I have never gotten a reply. So I'm asking you here: when are you gonna allow reserved instances for Indian customers?

Right now I cannot purchase reserved instances and so my bills are much much more than what others are paying.

P.S. Here is the screenshot when I try to purchase. There has been no update for 1 year now.

http://i.imgur.com/oZAHMt5.jpg

Zombieball · 10 years ago
In all honesty I don't think it's in Jeff Barr's authority to release roadmap plans or definitive answers to questions like this. My experience in the past has been he has always been helpful in connecting you to individuals who may be able to answer these sorts of questions. So I suppose it doesn't hurt to ask.

Just don't get your hopes up :)

supersan · 10 years ago
Whom should I contact then? Their customer support just replies with a canned reply :(
msravi · 10 years ago
Huh? How is it that I'm able to buy reserved instances? I'm in India and I've bought reserved instances. It's been a pretty bad decision to buy (https://news.ycombinator.com/item?id=10741966), but I can. I can't sell them though, so I'm stuck.
supersan · 10 years ago
I'm not sure. See the screenshot above. The confusion around this is the most difficult part of it, and Amazon support just replies with a canned response.

That is why I was hoping someone like jeffbarr could shed some light on this. Looks like it was just another vain attempt to get an answer from them :(

matt4077 · 10 years ago
This would be excellent if you could accrue more than the 72 minutes' worth of CPU credits. At least my use case is 'low traffic, with the occasional link from a high-traffic site'. These happen every few months, not every three days. But they also last 36 hours or so, not 72 minutes. Total CPU usage is similar, but it's distributed differently.
cperciva · 10 years ago
There's a trade-off between how much they allow you to spike and how long they allow you to spike for. They could have 20 t2.nano instances packed onto one CPU; the longer you can spike for, the more likely it is that another instance is going to end up spiking at the same time as you. I'm sure Amazon has looked at the CPU-usage behaviour of millions of EC2 instances -- quite likely across hundreds of billions of data points -- and picked this as a reasonable tradeoff.

For your use case, I think autoscaling is probably the answer. Keep one t2.nano running continuously but tell EC2 to spin up a new t2.large if you get a burst of traffic.

matt4077 · 10 years ago
Yeah, they obviously put more thought into this than I ever will. Although they offer instances with 40 (v)CPUs, so I'd assume they could technically run 800 nanos on one of those and efficiently deal with such spikes? Either way, I suspect their reasoning is more economic than technical, in that they know full well that I can afford more than $5/mo.

(which I'm taking to linode for now, but oy vey, AWS, we'll always have Paris)

lotyrin · 10 years ago
It still is excellent, because it reduces both the granularity and baseline cost of an autoscaling setup which is probably what you want for your use case.
jdub · 10 years ago
That's what CloudFront[1] is for. :-)

[1] ... and an async web server without hitting a dynamic backend on every page load...

siscia · 10 years ago
Since we are talking about cloud providers, any experience with https://www.scaleway.com/ ?

Their prices are so much cheaper and they give you bare-metal ARM, but can I count on them?

Aissen · 10 years ago
Been using them for a while, and it's great. I'm running a non-mining bitcoin client (basically just syncing the blockchain), and it runs well. Honestly a better proposition than Amazon's, IMHO.
sagargv · 10 years ago
They seem to have run out of IPs/servers and aren't accepting new users. So I wouldn't host anything serious with them just yet.
amock · 10 years ago
I've been running a server for a few months without any issues. The CPU is really slow, but the network seems ok.
siscia · 10 years ago
Nice. What do you mean by "slow"? Just to get an idea, what is your workload? Serving and rendering HTML?
7ewis · 10 years ago
Looks like the t2.nano isn't included as part of the Free Tier, seems a bit strange?
yeukhon · 10 years ago
For my use case, t2.nano would be great but I need better network throughput.
imperialdrive · 10 years ago
Wow, I must be doing something wrong... I have a single c4.4xlarge running a single WordPress site at max 200 active users and am still running into a CPU bottleneck... $1K+/mo, sheesh... I use t2.mediums as DCs for 5 users and 5 servers, lol... please let me know what a nano is good for?

Not to mention, I need multiple 10+TB volumes, and magnetic volumes only go to 1TB, so I need to span them, and spanning breaks down past 4 drives, so now I'm on SSD, and that's costing me $1K/mo for each copy, and I need many. Sigh.

I miss buying my own Supermicro systems at ~$10k each, hosting a full rack at a colo for $1k/mo, and then just setting things up correctly and checking in once a month.

Now Amazon is getting $15k/mo from me, but I must say, my back thanks them for 0 lbs of equipment to lift, so it's probably worth it to avoid a hernia surgery.

yeukhon · 10 years ago
My use case is that I need to download lots of files, each partitioned at around 5MB (for now, but I'm planning to increase the size and measure a good chunk size). But I am constantly downloading files, so I need stable and consistent throughput.
payamb · 10 years ago
Try putting Varnish in front and using Redis for DB object caching; it takes a good amount of pressure off the server.
amazon_not · 10 years ago
Sounds like you need a few dedicated servers. More performance, lower costs.
nullspace · 10 years ago
Woah, c4.4xlarge does sound heavy for 200 users!
keehun · 10 years ago
Do you cache at all?
tszming · 10 years ago
Always add $49/month (the basic support plan) if you want to compare AWS with DO/Linode; don't just look at the $5/month instance charge.
ranman · 10 years ago
? meaning DO/Linode support for a $5/month customer is good? I've not had positive experiences with either... but then again my experiences with AWS (non-business) support haven't been so positive either.
nickjj · 10 years ago
I've had pretty good support on DO's $10/month plan and I imagine their $5/month plan would be the same.

1-5 hour turnaround times on initial responses, and I've gotten them to enable things like the recovery partition when an instance ran out of disk space.

Sometimes it takes a few responses to get a resolution but at that price point I can't really complain because they've even helped resolve issues that were my fault.

tszming · 10 years ago
Good or bad is very subjective, but on AWS without a support plan there is no way to contact them except by creating a thread in a public forum asking for help. If you're lucky and your post isn't ignored, someone may reply within a day (and then the back-and-forth might cost you another day or two to get the issue resolved). On DO/Linode you can create a support ticket and they usually respond within a few minutes; that's a huge difference.