Also gives you 224 physical CPU cores (448 logical cores).
Requires a 3-year reservation, and "the effective hourly rate for the All Upfront 3-Year Reservation for a u-12tb1.metal Dedicated Host in the US East (N. Virginia) Region is $30.539 per hour."
Works out to $267,512.88 per year, or $802,538.64 for the 3-year term. I wonder how that compares to building your own on-premise host with that much RAM (obviously, there's operational costs to consider as well).
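The arithmetic can be checked directly from the quoted hourly rate (assuming a non-leap 8,760-hour year; the totals above round a few dollars differently, presumably from a slightly different rate or rounding step):

```python
# Sanity-check the reservation math from the quoted effective rate.
# Small cent-level discrepancies vs. the figures above are expected.
hourly = 30.539            # quoted effective rate, USD/hour
hours_per_year = 24 * 365  # 8,760 (non-leap year)

annual = hourly * hours_per_year
term = annual * 3

print(f"per year: ${annual:,.2f}")  # ~$267,521.64
print(f"3-year:   ${term:,.2f}")    # ~$802,564.92
```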
Also, don't miss the last line: "We’re not stopping at 12 TiB, and are planning to launch instances with 18 TiB and 24 TiB of memory in 2019."
> $802,538.64 for the 3-year term. I wonder how that compares to building your own on-premise host with that much RAM
This was discussed a couple days ago in a different thread [1], when 4TB was the EC2 limit.
$400k for the current-gen (224 cores) or, as a sibling comment [2] notes, $250k for the previous CPU generation (which can use twice as many DIMMs of half the density, but has only 192 cores).
> obviously, there's operational costs to consider as well
We can estimate an upper bound on this, given that the current-gen system has N+2 redundancy across five 1600W, 96%-efficient PSUs, so ~5kW max draw. If you're paying a colo $0.50/kWh, that's another $66k over 3 years, worst case.
Realistically, though, with the CPUs at a max TDP of 1640W and the DIMMs at (generously) 700W, there's plenty of room for fans, SSDs, and other overhead before getting to 3kW, or $40k, and that's still assuming running full-bore the whole time.
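The power figures above work out as follows (the $0.50/kWh colo rate is the worst-case assumption stated, not a typical one):

```python
# Upper bound: N+2 redundancy across 5 PSUs means 3 carry the load.
psu_watts = 1600
active_psus = 5 - 2                          # two are redundant spares
max_draw_kw = psu_watts * active_psus / 1000  # 4.8 kW, call it 5

rate = 0.50                  # assumed worst-case colo price, $/kWh
hours_3yr = 24 * 365 * 3

worst_case = 5 * hours_3yr * rate  # ceiling draw the whole term
realistic = 3 * hours_3yr * rate   # ~3 kW sustained, per the estimate above
print(round(worst_case), round(realistic))  # 65700 39420
```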
There are obviously also ancillary costs to AWS, such as data transfer and EBS.
If you're already running your own hardware, it sure does look more attractive to pay $400k now and $40k over 3 years than to pay AWS $800k All Up Front (now). If not, perhaps it's more attractive to hand-wave away that $360k as saving the "hassle" of hiring someone who knows how to run equipment in a datacenter.
One thing missing from this analysis is the networking cost. The EC2 instance comes with 14 Gbps of dedicated EBS bandwidth and 25 Gbps of network bandwidth. That implies a 40 Gbps NIC, which will require a 40 Gbps top-of-rack switch and enough cross-sectional bandwidth in the datacenter fabric to handle however many of these machines they plan for (at whatever oversubscription ratio, if any).
That's a pretty non-trivial amount of expense to set up a network with enough capacity to support a machine like this if you want to do it yourself.
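As a rough sketch of that sizing (the hosts-per-rack count, uplink speed, and oversubscription ratio here are illustrative assumptions, not anything AWS has published):

```python
# Rough ToR sizing: each host's NIC must carry EBS + network traffic,
# and the switch needs enough uplink for however many hosts share it.
ebs_gbps, net_gbps = 14, 25
nic_gbps = ebs_gbps + net_gbps   # 39 -> provision a 40G (or 2x25G) NIC

hosts_per_rack = 8               # illustrative assumption
oversub = 2                      # 2:1 oversubscription, if tolerated
uplink_gbps = hosts_per_rack * nic_gbps / oversub
print(nic_gbps, uplink_gbps)     # 39 156.0 -> e.g. a pair of 100G uplinks
```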
I know you're joking, but that operation is often disk-I/O bound, not CPU- or memory-bound... and since the disk is still EBS, I bet you won't get much better performance than on an m5.xlarge, unless you use a grep regex that needs to gobble lots of data into memory as part of the search.
SAP HANA is an in-memory database that can serve both OLTP and OLAP workloads with sub-second response times. The cost of the server is dwarfed by the license cost.
One of the neat things about instances with this scale of memory is that we can once again start avoiding the perils of distributed computation. Because you have immense data locality, a lot of computations can be parallelized much more efficiently if they involve large sequences of trivially parallelizable, short-lived tasks.
Interesting that you can't simply spin one up on your own; you have to contact AWS to get the process started. Maybe it's simply because of the amount of money you're committing to spend, but I find the possibility that they're now offering an instance type that requires them to physically provision it for you intriguing.
If you're at the level of spend where you're paying almost a million dollars for an instance, contacting them isn't too hard. You usually have the email addresses and phone numbers of a bunch of people who can get it done in a minute or two. And they don't usually provision it -- they just flip a bit to allow you to use the API to provision it yourself.
I think they want the phone call in case you want 10 of them.
Sadly, it just used 6TB as a fixed cutoff, so keeping it up-to-date would be a manual task.
Today, only a few years later, that number is 24TB. (Even more sadly, that's for the last generation of Intel CPU, whereas the current 8S generation tops out at 12TB and isn't even scheduled to have models that would get to 24TB until next year).
I would like to see whether you could build such a monster yourself, and how much it would cost. Did Amazon design their own motherboards or use off-the-shelf boards?
The processors they're using appear to be well in excess of $10,000 each. To get 224 physical cores, you'd need to run them in an 8-CPU configuration, so you've got ~$100,000 invested in the processors alone.
Memory is absurdly expensive right now, and I'd be shocked if 12TB of it cost any less than another $100,000.
You have to sign a multi-year reservation, and with a cost of $267,512.88 per year (as calculated from another comment in the thread), I assume their profit margins for the first year are nearly non-existent. However, over the course of the remaining two years of your reservation, they're making a great deal of money on each reservation.
So yea, I imagine someone could build something like this, but such an individual would need to have very deep pockets.
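Putting that comment's rough numbers together (every figure here is an estimate from this thread, not a real bill of materials):

```python
# Back-of-envelope build cost vs. the 3-year reservation, using only
# the rough figures from this thread -- not a real BOM.
cpus = 8 * 10_000      # eight sockets at ~$10k each (estimate above)
memory = 100_000       # ~12 TB of RAM (estimate above)
build_estimate = cpus + memory   # ~$180k before board, chassis, NICs, SSDs

reservation = 267_512.88 * 3     # All Upfront 3-year term, per the thread
print(round(build_estimate), round(reservation))  # 180000 802539
```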
> cost of $267,512.88 per year (as calculated from another comment in the thread), I assume their profit margins for the first year are nearly non-existent
NB that this was calculated based on the "All Up Front" pricing, so, for that situation, it's safer to say that their profit for the first year is $300k+ [1] and merely non-existent or negative for the second and third years.
12TB is only about $7,000 in chips. I buy them for my own products. There's a lot of ancillary things you need, but raw DIMMs aren't anywhere near that expensive.
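The gap between that figure and the ~$100k estimate above works out to roughly an order of magnitude in implied price per gigabyte (raw DRAM chips vs. finished server DIMMs), using only the two numbers claimed in this thread:

```python
# Implied $/GB from the two estimates in this thread
gb = 12 * 1024            # 12 TiB expressed in GiB
chips = 7_000 / gb        # raw DRAM chips, per the claim above
dimms = 100_000 / gb      # finished server DIMMs, per the earlier estimate
print(f"${chips:.2f}/GB vs ${dimms:.2f}/GB")  # $0.57/GB vs $8.14/GB
```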
I suspect they're building their own hardware for this, but that's speculation. It's based on the fact that AWS is building datacenters for the intelligence community and government, is working to accommodate where that's heading, and is addressing how those customers would want control over such hardware.
[1] https://news.ycombinator.com/item?id=18041486
[2] https://news.ycombinator.com/item?id=18089267
[1] https://www.thinkmate.com/system/superserver-7088b-tr4ft
>> SAP HANA in Minutes
> The EC2 High Memory instances are certified by SAP for ⟨…⟩
[SAP HANA keeps everything in RAM.]
I don't have any experience with it but it's the one single example that's always used in every vendor post about high-memory machines.
[0]: https://news.ycombinator.com/item?id=9581862
Maybe their data was in memory, and the machine powered off.
[1] Assuming $400k purchase cost as per https://news.ycombinator.com/item?id=18090058