Enterprise-ready. Tried and tested API. 1k+ GPUs. Easy to use. Servers in 45 seconds, not 7 minutes.
Pricing that encourages scale. Per-minute billing. Modify servers to right-size compute.
Start with $5 and launch a server in 45 seconds.
When VMs are off, you are only billed for storage.
Save up to 70% by using TensorDock.
Customize resource allocation to optimize costs.
Need compute for a month or longer? Save up to an additional 50% with a Subscription Server.
Our most affordable GPU, with up to 2x the performance of an NVIDIA Tesla T4.
$0.29/hr: 2 CPU, 4 GB RAM
$0.33/hr: 4 CPU, 8 GB RAM
$0.41/hr: 8 CPU, 16 GB RAM
The standard for machine learning, with excellent FP64 performance.
$0.52/hr: 2 CPU, 4 GB RAM
$0.60/hr: 4 CPU, 16 GB RAM
$0.88/hr: 8 CPU, 64 GB RAM
10 GPU models available. Choose the one that is right for your workload.
Customize a server
Perfect for running a test workload or storing some data at our data centers.
$0.027/hr: 1 CPU, 4 GB RAM, 20 GB NVMe SSD
$0.060/hr: 2 CPU, 8 GB RAM, 100 GB NVMe SSD
Right-size compute. Try each of our 10+ GPU options to see which works best with your workload. Never pay for more than you actually need.
You can deploy a CPU-only server, upload your data, convert it to a GPU server to run an ML workload, and
then convert it back to a CPU-only server.
When VMs are off, you are billed a very low rate for storage. For continuous workloads, we
offer monthly or longer servers at even
lower pricing.
Deploy our machine learning image and get Jupyter Notebook/Lab out of the box.
Once you switch, you'll never look back.
You can get a V100 server from us for $0.52/hour, cheaper than any other cloud.
Well-documented and well-maintained so that you can start automating deployments.
Manage your servers from the command line with our CLI.
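As a rough illustration of what automating deployments against an HTTP API can look like, the sketch below assembles a deployment request body. Note that the base URL, endpoint path, and field names here are hypothetical placeholders for illustration only, not TensorDock's actual API schema; consult the API documentation for the real endpoints.

```python
import json

# Hypothetical placeholder base URL -- NOT TensorDock's real API endpoint.
API_BASE = "https://example.com/api"


def build_deploy_request(gpu_model: str, vcpus: int, ram_gb: int,
                         storage_gb: int) -> dict:
    """Assemble the URL and JSON body for a hypothetical deployment call.

    All field names are illustrative assumptions; the real API may differ.
    """
    return {
        "url": f"{API_BASE}/deploy",
        "body": {
            "gpu_model": gpu_model,
            "vcpus": vcpus,
            "ram_gb": ram_gb,
            "storage_gb": storage_gb,
        },
    }


# Example: a V100 configuration like the $0.60/hr tier on this page.
request = build_deploy_request("V100", vcpus=4, ram_gb=16, storage_gb=100)
print(json.dumps(request["body"]))
```

In practice you would send this body with any HTTP client and poll until the server reports ready, which typically takes about 45 seconds per the figures above.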
GPU stock numbers available in real time.
3 fleets of partner GPUs on request.
Instantly-deployable VMs in 3 US-based locations, or order a long-term subscription in 11 global locations.
Eliminate your #1 cost at other clouds. Just contact us if you anticipate needing 100TB+ of monthly traffic.
airgpu uses TensorDock's API to deploy Windows virtual machines for cloud gamers. TensorDock's abundant GPU stock enables airgpu to scale during weekend peaks without worrying about compute availability.
ELBO uses TensorDock's reliable and secure GPU cloud to generate art. TensorDock's high-performance servers run their workloads faster on the same GPU types than the big clouds.
Researcher Skyler Liang from Florida State University uses TensorDock's A40s and A6000s to work with GANs. TensorDock's low pricing enables FSU researchers to do more with their limited university compute budgets.
Creavite renders logo animations on TensorDock's Windows GPU servers using Adobe software, and also runs CPU-only servers. Having both server types lets Creavite tightly integrate their workflows.
Our goal is to enable you, whether you are a solo developer, funded startup, or enterprise, to build groundbreaking innovations more easily at industry-leading prices. Learn about our story.
We're always receptive to feedback, so just contact us!
You can deploy instant servers in New York, Chicago, and Las Vegas, or order long-term servers from 11 data centers. Learn more.
To keep prices low, we can't afford to give away free credits to everyone. If you anticipate a scale of >$5k/month, schedule a call with sales.
Full KVM virtualization with root access and a dedicated GPU passed through. You get to use the full compute power of your GPU without resource contention.
We operate on a pre-paid model: you deposit money and then provision a server. Once your balance nears $0, the server is automatically deleted. We also offer subscription servers where we automatically charge your credit card every term for servers that are meant to run long-term.
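The pre-paid, per-minute model above can be sketched in a few lines. This is a minimal illustration of the mechanics, not TensorDock's actual billing code; the rate comes from the $0.52/hr V100 configuration listed on this page.

```python
def run_server(balance: float, hourly_rate: float, minutes: int) -> tuple[float, bool]:
    """Drain a pre-paid balance minute by minute.

    Returns (remaining_balance, deleted), where deleted is True if the
    balance was exhausted before the requested runtime, mirroring the
    automatic deletion described above. Illustrative sketch only.
    """
    per_minute = hourly_rate / 60  # per-minute billing
    for _ in range(minutes):
        if balance < per_minute:
            return balance, True   # balance near $0: server is deleted
        balance -= per_minute
    return balance, False          # server still running

# Start with the $5 minimum deposit and the $0.52/hr V100 configuration,
# then run for two hours.
balance, deleted = run_server(5.00, 0.52, minutes=120)
print(round(balance, 2), deleted)  # prints: 3.96 False
```

Two hours at $0.52/hr costs $1.04, leaving $3.96 of the original $5 deposit; a subscription server instead charges your card each term, so it never hits this deletion path.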
Go ahead and build the future on TensorDock. Cloud-based machine learning and rendering have never been easier or cheaper.
Deploy a GPU Server