Rent Cloud GPUs Instantly

RTX 4090 to H200. Per-hour pricing. Buy token packs and rent. Built for humans and AI agents via HTTP 402.

12 GPU types · From $0.34/hr · 402 MPP ready · Instant deploy
| Tier | GPU | VRAM | Description | Price |
|------|-----|------|-------------|-------|
| 🐽 Piglet | RTX 3090 (Best Value) | 24GB | Budget 24GB option for inference workloads | 34 tokens/hr · $0.34/hr |
| 🐽 Piglet | RTX 4090 | 24GB | Best value for AI inference, image gen, fine-tuning | 49 tokens/hr · $0.49/hr |
| 🐖 Hog | RTX A6000 | 48GB | 48GB pro GPU for large models and batch processing | 49 tokens/hr · $0.49/hr |
| 🐖 Hog | L40 | 48GB | Data center 48GB GPU for heavy inference | 99 tokens/hr · $0.99/hr |
| 🐽 Piglet | RTX 5090 | 32GB | Newest Blackwell gaming GPU | 99 tokens/hr · $0.99/hr |
| 🐖 Hog | L40S | 48GB | Latest 48GB data center GPU with fast FP8 | 109 tokens/hr · $1.09/hr |
| 🐗 Mega Hog | A100 80GB (Pro Pick) | 80GB | 80GB enterprise GPU for training and large-model inference | 169 tokens/hr · $1.69/hr |
| 🐗 Mega Hog | A100 SXM 80GB | 80GB | Fastest A100 variant with SXM interconnect | 189 tokens/hr · $1.89/hr |
| 🐗 Mega Hog | H100 PCIe | 80GB | Hopper GPU for maximum training speed | 269 tokens/hr · $2.69/hr |
| 🐗 Mega Hog | H100 SXM | 80GB | Top-tier H100 with NVLink for multi-GPU scaling | 349 tokens/hr · $3.49/hr |
| 🐗 Mega Hog | H200 SXM | 141GB | 141GB next-gen Hopper for massive models | 449 tokens/hr · $4.49/hr |
| 🐗 Mega Hog | B200 | 180GB | 180GB Blackwell for frontier model training | 649 tokens/hr · $6.49/hr |
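Every listed rate implies 1 token = $0.01/hr (e.g. 34 tokens/hr = $0.34/hr). Assuming that mapping holds, the cost arithmetic for a rental is a simple multiply; here is a sketch for a 2-hour RTX 4090 rental:

```shell
#!/bin/sh
# Assumption: 1 token = 1 US cent, per the rates in the table above.
rate_tokens=49   # RTX 4090: 49 tokens/hr
hours=2

total_tokens=$((rate_tokens * hours))
# Format cents as dollars for display.
printf 'Total: %d tokens ($%d.%02d)\n' "$total_tokens" \
  $((total_tokens / 100)) $((total_tokens % 100))
# → Total: 98 tokens ($0.98)
```

This matches the `total_usd` of 0.98 returned by the example API call below for the same order.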

Built for Agents — MPP

MPP (Machine Payable Protocol) runs over HTTP 402 Payment Required. Your bot can rent a GPU in three API calls:

# 1. List available GPUs
curl https://gpuhog.com/api/gpus

# 2. Request a GPU (returns 402 with payment URL)
curl -X POST https://gpuhog.com/api/rent \
  -H "Content-Type: application/json" \
  -d '{"gpu_id":"RTX 4090","hours":2}'

# Response: 402 Payment Required
# { "status": 402, "checkout_url": "...", "total_usd": "0.98" }

# 3. After payment, pod deploys automatically
curl https://gpuhog.com/api/order/ord_abc123
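A minimal sketch of the agent-side handling for step 2, assuming the response shape shown above (`status`, `checkout_url`, `total_usd`); the sed-based JSON extraction is a dependency-free stand-in for a real JSON parser, and the URL below is a placeholder:

```shell
#!/bin/sh
# Hypothetical 402 response, shaped like the example above.
response='{ "status": 402, "checkout_url": "https://example.com/pay/xyz", "total_usd": "0.98" }'

# Extract the status code (a proper agent would use a JSON parser such as jq).
status=$(printf '%s' "$response" | sed -n 's/.*"status": *\([0-9]*\).*/\1/p')

if [ "$status" = "402" ]; then
  # Payment required: pull out the checkout URL and hand it to the payment step.
  checkout_url=$(printf '%s' "$response" | sed -n 's/.*"checkout_url": *"\([^"]*\)".*/\1/p')
  echo "Payment required: $checkout_url"
fi
```

Once payment clears, step 3 (polling the order endpoint) reports the deployed pod.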
Setup Guide