We offer Quadro RTX 4000s (up to 2x faster than Tesla T4s) with 8GB of VRAM starting at $0.29/hour, which is much cheaper than what AWS and Oracle charge for their 16GB GPUs (Tesla T4s, P100s, and V100s). We also have RTX 5000s and A4000s with 16GB of VRAM. These are slightly more expensive than AWS and Oracle, but their much better performance shortens training time for a new machine learning model, thereby lowering the overall cost. If you're looking at long-term inference workloads, we have subscription servers starting at $100 a month.
https://console.tensordock.com/order_subscription
Feel free to ask any more questions by emailing me at ryan@tensordock.com.
Ryan @ TensorDock