I’m Jonathan from TensorDock. After 7 months in beta, we’re finally launching Core Cloud, our platform to deploy GPU virtual machines in as little as 45 seconds! https://www.tensordock.com/product-core
Why? Training machine learning models at the large clouds can be extremely expensive, which left us wondering: how did the cloud ever become more expensive than on-prem? I’ve seen too many ML startups buy their own hardware. Cheaper dedicated servers with NVIDIA GPUs are not too hard to find, but they lack the functionality and scalability of the big clouds.
We thought to ourselves: what if we built a platform that combines the functionality of the large clouds at a price somewhere between a dedicated server and the large clouds? That’s exactly what we’ve done.
Built to make engineers more productive. We have 3 machine learning images so you can start training ML models in 2 minutes, not 2 hours. We provide a REST API so you can integrate your code directly with ours. And there’s a community CLI you can use to manage your servers directly from the command line.
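If it helps, here's a rough sketch of what driving the API from Python might look like. The endpoint and field names below are illustrative placeholders only, not our actual API surface; see the Postman documentation linked at the end of this post for the real calls.

    # Hypothetical sketch of deploying a VM via the REST API from Python.
    # The endpoint and payload fields are placeholders, not the real API;
    # consult the Postman docs linked below for actual parameters.
    import requests

    API_BASE = "https://console.tensordock.com/api"  # placeholder base URL
    AUTH = {"api_key": "YOUR_KEY", "api_token": "YOUR_TOKEN"}  # placeholder auth fields

    def deploy_vm(gpu_model: str, gpu_count: int = 1, storage_gb: int = 100) -> dict:
        """Request a new GPU VM (illustrative payload only)."""
        payload = {
            **AUTH,
            "gpu_model": gpu_model,
            "gpu_count": gpu_count,
            "storage": storage_gb,
            "operating_system": "Ubuntu 20.04 LTS",
        }
        resp = requests.post(f"{API_BASE}/deploy", data=payload, timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(deploy_vm("A4000"))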
We provide a feature set that only the largest clouds surpass. We offer storage-only billing when a VM is stopped (just $0.073/GB/month), so you aren't paying for compute when you don't need it. We also let you edit virtual machines after they’re created to reduce costs: if you provision an NVIDIA A6000 and find out you’re only using 50% of it, stop the VM, modify it to an NVIDIA A5000, and you’ll be billed the lower hourly rate without needing to recreate your server and migrate data over! Our infrastructure is built on 3x-replicated NVMe-based network storage, 10 Gbps networking, and 3 locations (New York, Chicago, Las Vegas), with more coming soon!
- CPU-only servers from $0.027/hour
- NVIDIA Quadro RTX 4000s from $0.29/hour
- NVIDIA Tesla V100s from $0.52/hour
- and 8 other GPU types that let you truly right-size workloads so that you’re never paying for more than you actually need
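To make the storage-only billing and resizing concrete, here's a rough back-of-the-envelope sketch using prices from this thread (the 100 GB disk and 730-hour month are just example numbers):

    # Rough cost sketch; prices are the ones quoted in this thread.
    STORAGE_PER_GB_MONTH = 0.073   # $/GB/month while a VM is stopped
    A6000_HOURLY = 1.28            # $/hr
    A5000_HOURLY = 0.77            # $/hr
    HOURS_PER_MONTH = 730

    disk_gb = 100

    # Leave a 100 GB VM stopped for a month: you pay only for storage.
    stopped_cost = disk_gb * STORAGE_PER_GB_MONTH                      # ~= $7.30
    # Downsize an under-utilized A6000 to an A5000 instead of recreating it.
    resize_savings = (A6000_HOURLY - A5000_HOURLY) * HOURS_PER_MONTH   # ~= $372/month

    print(f"Stopped VM storage: ${stopped_cost:.2f}/month")
    print(f"A6000 -> A5000 savings: ${resize_savings:.2f}/month")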
We're starting off with $1 in free credits! Yes, we sound cheap… but $1 is all you need to get started with us! That’s more than 3 hours of compute time on our cheapest configuration! Use code HACKERNEWS_1 on the billing page to redeem this free credit :)
TensorDock: https://www.tensordock.com/
Product page: https://www.tensordock.com/product-core
API: https://documenter.getpostman.com/view/10732984/UVC3j7Kz
Community CLI: https://github.com/caguiclajmg/tensordock-cli
Deploy a GPU: https://console.tensordock.com/deploy
I'm here to answer your questions, so post them below! Or, email me directly at jonathan@tensordock.com :)
[A100 PCI]
Lambda Labs: $1.10/hr
TensorDock: $2.06/hr
Coreweave: $2.46/hr
Paperspace: $3.09/hr
[A100 SXM]
Lambda Labs: $1.25/hr
TensorDock: $2.06/hr
Coreweave: N/A (I think PCI only)
Paperspace: N/A (I think PCI only)
[A40]
TensorDock: $1.28/hr
Coreweave: $1.68/hr
Paperspace: N/A
Lambda Labs: N/A
[A6000]
Lambda Labs: $0.80/hr
TensorDock: $1.28/hr
Paperspace: $1.89/hr
Coreweave: $1.68/hr
[V100 SXM4]
Lambda Labs: $0.55/hr
TensorDock: $0.80/hr
Coreweave: $1.00/hr
Paperspace: $2.30/hr
[A5000]
TensorDock: $0.77/hr
Coreweave: $1.01/hr
Paperspace: $1.38/hr
Lambda Labs: N/A
Jonathan thanks for the post. A question: it sounds like TensorDock partners with 3rd-parties who bought these servers and TensorDock doesn't actually own any of the servers you rent out. If that's the case, how do you ensure security? If not, please ignore.
[References]
https://www.paperspace.com/pricing
https://lambdalabs.com/service/gpu-cloud#pricing
https://coreweave.com/pricing
https://www.tensordock.com/product-core
[1] https://cloudoptimizer.io
Of course, you can set up data checkpointing to save your data, but overall, it is a bit of an extra hassle to run on spot/interruptible instances, and if you do get interrupted, you are wasting valuable time waiting for stock to free up again.
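For anyone going the spot route anyway, a minimal periodic-checkpointing setup looks roughly like this (PyTorch assumed; the tiny model and random data are stand-ins for a real training job):

    # Minimal periodic-checkpointing sketch so an interrupted run can resume.
    # PyTorch assumed; the model and data here are placeholders.
    import os
    import torch
    import torch.nn as nn

    CKPT_PATH = "checkpoint.pt"  # in practice, put this on durable/network storage
    NUM_EPOCHS = 10

    model = nn.Linear(16, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    start_epoch = 0
    if os.path.exists(CKPT_PATH):
        ckpt = torch.load(CKPT_PATH)
        model.load_state_dict(ckpt["model"])
        optimizer.load_state_dict(ckpt["optimizer"])
        start_epoch = ckpt["epoch"] + 1  # resume where the interrupted run stopped

    for epoch in range(start_epoch, NUM_EPOCHS):
        x, y = torch.randn(32, 16), torch.randn(32, 1)  # stand-in for real data
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        torch.save(
            {"model": model.state_dict(),
             "optimizer": optimizer.state_dict(),
             "epoch": epoch},
            CKPT_PATH,
        )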
The Alibaba price you cited is for interruptible/spot. For on-demand (uninterruptible) V100s, Oracle is the least expensive on that list at $1.275/hr.
Vaguely recall reading they're using a mixed model, but OP will hopefully confirm
https://puzl.ee/cloud-kubernetes/configurator
I'm quite happy with them, but storage is limited to 1GBit/s and you should know basic kubectl to use it fully.
Long answer: We own a substantial amount of compute ourselves, as far away as Singapore, where we have fully-owned hardware at Equinix SG1. We started with our own crypto mining operation just outside of Boston, but as wholesale customers approached us in 2020 due to pandemic surges, we added business internet and power backups. Suddenly we were operating a rudimentary "office data center." Two large reseller sites sell on our fully-owned hardware. Boston's electricity costs are very high ($0.23/kWh), so we're gradually moving more hardware to tier 3/4 data centers that are cheaper on a per-unit basis.
But we partner with 3rd parties too (four large-scale operators, each running 1000+ GPUs, to be exact) to resell their compute. This is also how we'll enter Europe... we're working closely with an existing supplier that colocates servers at Hydro66 in Sweden and another at Scaleway Paris. We provide the software, they provide the hardware, and we pay them a special rate based on the volume we're doing. Partnering with others is the only way we can handle large scale without insanely high capex (that said, we do get preferential pricing as an NVIDIA Inception Program member, which we take advantage of for our own fully-owned hardware).
We have a doc on it here: https://docs.tensordock.com/infrastructure/reservable-instan...
We're also working on a marketplace (client site: https://www.tensordock.com/product-marketplace, host site: https://www.tensordock.com/host). We expect a beta version to be up and running in the next ~2 weeks. With this, we'll have a script that hosts will use to install a small version of OpenStack. Then, they set prices, and customers can deploy directly on that hardware. By aggregating all these hosts together on the same marketplace, we hope we can slash the price of compute.
So far, owning our own hardware has allowed us to negotiate better rates and enter markets where previous services don't exist (namely Singapore, where we sell subscription servers with GeForce 1070 cards for $150/month, unheard-of pricing for an APAC city). Eventually, we hope there'll be suppliers in every city selling on our marketplace or core cloud product so that we can really become the #1 place for ML startups to provision compute. In a way, we want to be the Amazon of cloud computing: Amazon created a global marketplace where they sell their own products alongside others', and by doing so, you know you're getting a good deal on whatever you buy. We want to end up being the same thing for compute, but that's still a few years off :)
TL;DR - we own a lot of hardware, and we resell a lot of hardware. But in the future, we want to focus on the reselling aspect to truly be able to nail the user experience and handle demand surges while maintaining low costs.
Most of my ML training is done using personal data or sensitive documents, and I have not found a cheap provider yet that I can use.
https://lambdalabs.com/service/gpu-cloud
Unless I'm missing something?
I haven't used them, so please fact check me, but it seems like the machines come with directly attached storage. So, if you're using an 8x V100 and want to switch to a 1x RTX 6000, you'd have to spin up a new server and manually migrate your data over.
We built our platform with networked storage. You can spin up a CPU-only instance for $0.027/hour (<$20/month), upload your data, convert it into a GPU instance to train your models, and then convert it back. We frequently see users converting servers from 8x A100s (to train workloads) back to 1x RTX 4000s (to run inference). This kind of flexibility saves people time, which equates to money given how expensive ML developers are now.
(Our networked storage model also enables people to shut off their VMs and save money)
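To put rough numbers on that conversion workflow (prices from this thread; the 730-hour month is just an approximation):

    # Rough illustration of the convert-instead-of-recreate workflow.
    HOURS_PER_MONTH = 730
    CPU_ONLY_HOURLY = 0.027    # $/hr, CPU-only instance holding your data
    RTX_4000_HOURLY = 0.29     # $/hr, single RTX 4000 for inference
    A100_HOURLY = 2.06         # $/hr per A100 (8x while training)

    parking_cost = CPU_ONLY_HOURLY * HOURS_PER_MONTH   # ~= $19.71/month, i.e. <$20
    training_rate = 8 * A100_HOURLY                    # $16.48/hr while training
    inference_rate = RTX_4000_HOURLY                   # $0.29/hr after converting back

    print(f"CPU-only 'parking' instance: ${parking_cost:.2f}/month")
    print(f"Training (8x A100): ${training_rate:.2f}/hr -> inference (1x RTX 4000): ${inference_rate:.2f}/hr")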
I'm sure Lambda Labs is working on something similar, but it seems they are doing dedicated servers based on how they advertise.
I think we also have a higher variety of GPUs (10 SKUs with us vs 4 SKUs). This lets people switch between, say, an NVIDIA A6000, A5000, and A4000 to truly "right-size" their compute so they don't pay for anything they don't need.
Cost-wise, we also have better long-term pricing, like GeForce 1070s for $100/month in Boston or $150/month at Singapore Equinix SG1, which is really good pricing for an APAC city in my opinion (https://console.tensordock.com/order_subscription). We're also working on a marketplace that lets compute suppliers list their hardware on our platform, getting us closer to the cheapest option for those who really care about cost (https://www.tensordock.com/product-marketplace).
Vast has RTX 3090 GPUs for $.32/hr on-demand or $.18/hr for interruptible. You can see live available offers on the website right now.
[References]
https://news.ycombinator.com/item?id=30736459
https://vast.ai/console/create/
We are actually working on something very similar to vast.ai (https://www.tensordock.com/product-marketplace), set to launch into a soft beta within the next two weeks and probably a real "Show HN" by the end of August. We'll have a few dozen GPUs scheduled to come online during launch week at prices similar to Vast.ai's. It will use full virtualization, which we think is better than Docker containers because customers can run Windows VMs and do cloud gaming/rendering, generating more income for hosts. We might also add VM disk encryption later on, which would be more secure. Still, Vast is very large, so it'll be a long road uphill, but we're working on something similar.
Also, if I remember correctly, with Vast (as a former user myself), an issue can arise when your VM is in the stored state but someone else has claimed the GPUs for an on-demand workload, which prevents you from pulling your data out. Because our VMs all boot from network storage and can be rescheduled to other compute nodes, you won't face that issue on our core cloud product here :)
Some feedback on your landing page: your "Frequently asked billing questions" on the bottom of the pricing page[0] are extremely aggressive sounding even if they are meant to be tongue-in-cheek. For example:
- "I'm a bad customer and want to chargeback" - This doesn't seem like a very common billing question so it seems out of place here.
- "So, don't chargeback or dispute a charge from us, ever" - This is a very aggressive ultimatum that sounds almost threatening. Are you so risk averse that you need to threaten potential customers?
- "We cannot afford to provide services where there is a risk that we could not be paid" - This is an axiom for any business model. Why state it here?
- " Money-back guarantee? Sorry, no. Instead, start off with $5 and scale from there. If you are displeased, we can close your account and refund the $5, nothing more." - This is a fair approach, but you also advertise a $10k/month tier that would receive special treatment. As a business owner, this type of attitude makes me reconsider any kind of partnership when comparing similar services. For example, AWS is well-known for their billing forgiveness in the event of a mistake or other reasonable situations.
- "Credit cards that do not support 3DS are automatically assigned a higher fraud score and are at a higher risk of being placed in a 'fraud check' mode." - It's smart as a startup to try and counteract the risk of fraud in your revenue stream. However, this typically happens in the background and is generally assumed/hoped for by someone typing their credit card number into a website for payment. Combined with your aggressive stance on displeased customers, this smacks of the issues caused by automated punitive measures implemented by YouTube, Twitter, GMail, etc. that often make the front page of Hacker News.
Although this copy isn't necessarily front and center on your landing page, I think TensorDock would benefit from some time spent editing and adding questions that are more likely to be asked, such as, "what payment methods do you support?"
[0]: https://www.tensordock.com/pricing
We offer Quadro RTX 4000s (which perform up to 2x faster than Tesla T4s) with 8GB of VRAM starting at $0.29 an hour, which is much cheaper than what AWS and Oracle charge for their 16GB GPUs (Tesla T4s, P100s, and V100s). We also have RTX 5000s and A4000s with 16GB of VRAM; they're slightly more expensive than AWS and Oracle, but their much better performance lowers the time to train a new machine learning model, thereby lowering the cost. If you're looking at long-term inference workloads, we have subscription servers starting at $100 a month.
https://console.tensordock.com/order_subscription
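As a rough illustration of why a faster GPU at a similar or higher hourly price can still net out cheaper (the T4 price and the 10-hour job length below are made-up placeholders; the 2x figure is the "up to 2x faster" claim above):

    # Effective training cost = hourly rate * wall-clock hours.
    # The T4 price and job length are placeholder assumptions for illustration.
    T4_HOURLY = 0.53         # placeholder on-demand T4 price at a large cloud
    RTX_4000_HOURLY = 0.29   # from this post
    SPEEDUP = 2.0            # "up to 2x faster than a T4" claim from this post

    t4_hours = 10.0
    rtx_hours = t4_hours / SPEEDUP

    print(f"T4:       {t4_hours:.1f} h x ${T4_HOURLY}/h = ${t4_hours * T4_HOURLY:.2f}")
    print(f"RTX 4000: {rtx_hours:.1f} h x ${RTX_4000_HOURLY}/h = ${rtx_hours * RTX_4000_HOURLY:.2f}")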
Feel free to ask any more questions by emailing me at ryan@tensordock.com
Is it due to the difficulty of containerizing GPUs, or is the API surface of k8s too big and difficult for a small cloud provider to implement?
Maybe the workloads completely don’t make sense together. I’m just curious what others think about it.