Waitlist
Queue for unavailable GPU types and get notified when capacity becomes available.
The Waitlist API lets you reserve your place in line for GPU cluster types that are currently at capacity. When a requested GPU type becomes available, you'll be notified via email and webhook — or you can opt into automatic deployment.
Base URL: https://gpu.huddleapis.com/api/v1
How the Waitlist Works
- You try to deploy a GPU cluster type, but get a `409 Conflict` (no capacity)
- Create a waitlist request for that cluster type
- System monitors availability as GPU inventory is synced
- You get notified when capacity opens up (email + optional webhook)
- Deploy manually or let auto-deploy handle it for you
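The deploy-or-waitlist fallback at the start of that flow can be sketched as a small client helper. This is illustrative only: `post` stands in for any HTTP client call (e.g. `requests.post` against the base URL), and only the endpoint paths come from this page.

```python
def deploy_or_waitlist(post, cluster_type: str, location: str) -> dict:
    """Try to deploy; on 409 Conflict (no capacity), join the waitlist.

    `post(path, body)` is a stand-in for an HTTP client call that
    returns a (status_code, parsed_json) tuple.
    """
    status, body = post("/deployments", {"cluster_type": cluster_type,
                                         "location": location})
    if status == 409:  # no capacity for this cluster type right now
        status, body = post("/deployments/requests",
                            {"cluster_type": cluster_type,
                             "location": location})
    return body
```

From here the request sits in the queue until you are notified or auto-deploy fires.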
Create a Waitlist Request
```bash
curl -X POST -H "X-API-Key: mk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "cluster_type": "8H200.141S.176V",
    "location": "FIN-01"
  }' \
  https://gpu.huddleapis.com/api/v1/deployments/requests
```

Response:

```json
{
  "code": "OK",
  "data": {
    "id": "wr_abc123",
    "cluster_type": "8H200.141S.176V",
    "location": "FIN-01",
    "status": "waiting",
    "auto_deploy": false,
    "created_at": "2026-04-03T10:00:00Z",
    "updated_at": "2026-04-03T10:00:00Z"
  }
}
```

Leave `location` empty to be notified when the GPU type is available in any region.
Auto-Deploy
If you want to automatically deploy when capacity opens up, pass auto_deploy: true along with a full deployment configuration:
```bash
curl -X POST -H "X-API-Key: mk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "cluster_type": "8H200.141S.176V",
    "location": "FIN-01",
    "auto_deploy": true,
    "config": {
      "hostname": "my-training-cluster",
      "image": "ubuntu-22.04-cuda-12.4-cluster",
      "ssh_key_ids": ["key-id-1"],
      "location": "FIN-01"
    }
  }' \
  https://gpu.huddleapis.com/api/v1/deployments/requests
```

When the GPU becomes available, the system will automatically create the deployment using your saved config.
Auto-deploy requires sufficient credits
Make sure you have at least $20 in GPU credits at all times while waiting. If your balance is too low when a GPU opens up, auto-deploy will fail.
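A client can guard against that failure mode by checking the balance before enabling auto-deploy. A sketch: the $20 threshold comes from the note above, but fetching the balance (and the exact billing endpoint) is left to your own code.

```python
MIN_AUTO_DEPLOY_CREDITS = 20.0  # minimum GPU credit balance noted above


def build_auto_deploy_payload(cluster_type: str, location: str,
                              config: dict, balance: float) -> dict:
    """Build an auto-deploy waitlist request, refusing if credits are low."""
    if balance < MIN_AUTO_DEPLOY_CREDITS:
        raise ValueError(
            f"need at least ${MIN_AUTO_DEPLOY_CREDITS:.0f} in GPU credits "
            "for auto-deploy to succeed"
        )
    return {
        "cluster_type": cluster_type,
        "location": location,
        "auto_deploy": True,
        "config": config,
    }
```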
Waitlist Statuses
| Status | Description |
|---|---|
| `waiting` | Request is in the queue. System is monitoring for availability. |
| `notified` | GPU capacity opened up. Notification sent via email and webhook. |
| `deployed` | Auto-deploy was triggered and a deployment was created. |
| `cancelled` | Request was cancelled by the user. |
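When polling your requests, the statuses above map naturally to client-side actions. A sketch; the action strings are illustrative, not part of the API:

```python
def next_action(status: str) -> str:
    """Map a waitlist request status to a suggested client action."""
    actions = {
        "waiting":   "keep waiting; the system is monitoring availability",
        "notified":  "capacity is available; deploy now or wait for auto-deploy",
        "deployed":  "a deployment was created; check your deployments list",
        "cancelled": "request is closed; create a new one if still needed",
    }
    try:
        return actions[status]
    except KeyError:
        raise ValueError(f"unknown waitlist status: {status!r}")
```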
List Your Requests
```bash
curl -H "X-API-Key: mk_your_key" \
  https://gpu.huddleapis.com/api/v1/deployments/requests
```

Cancel a Request
```bash
curl -X DELETE -H "X-API-Key: mk_your_key" \
  https://gpu.huddleapis.com/api/v1/deployments/requests/wr_abc123
```

Webhook Integration
Subscribe to the gpu.available event in Webhooks to receive real-time notifications when your waitlisted GPU type opens up. This is useful for triggering custom automation pipelines.
Next Steps
- Learn about Offers to see what's currently available
- Set up Webhooks to get real-time `gpu.available` notifications
- Review Deployments for the full deployment workflow