Quick Start
Get up and running with EagleServers in under 5 minutes.
i. Install the CLI
npm install -g @eagleservers/cli
# or
curl -sSL https://get.eagleservers.com | bash
ii. Authenticate
eagleservers auth login
# Enter your API key when prompted
iii. Deploy Your First Instance
eagleservers deploy --name my-first-app \
  --type gpu-rtx4090 \
  --region us-west-1 \
  --image ubuntu:22.04
# Pro Tip: use "eagleservers deploy --help" to see all options.
Installation
Choose your preferred installation method:
Node.js
# Install the SDK
npm install @eagleservers/sdk

// Usage (CommonJS has no top-level await, so wrap calls in an async function)
const EagleServers = require('@eagleservers/sdk');

const client = new EagleServers({
  apiKey: 'your-api-key'
});

async function main() {
  // Create an instance
  const instance = await client.instances.create({
    name: 'gpu-worker',
    type: 'gpu-rtx4090',
    region: 'us-west-1'
  });
}

main();
Authentication
All API requests require authentication using your API key.
Getting Your API Key
- Sign in to your dashboard
- Go to Settings → API Keys
- Click Generate New Key
- Store your key securely
Using Your API Key
# Environment variable
export EAGLESERVERS_API_KEY="your-api-key"
# CLI flag
eagleservers --api-key="your-api-key" instances list
# Config file (create the directory first if it does not exist)
mkdir -p ~/.eagleservers
echo "api_key: your-api-key" > ~/.eagleservers/config.yaml
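When more than one of these sources is set, a client typically honors the most explicit one first. A minimal Python sketch of that resolution, assuming a flag > environment variable > config file order (the CLI's actual precedence is not documented here, so treat this ordering as an assumption):

```python
import os
from pathlib import Path

def resolve_api_key(cli_key=None, config_path="~/.eagleservers/config.yaml"):
    """Return the first API key found: CLI flag, then env var, then config file."""
    if cli_key:
        return cli_key
    env_key = os.environ.get("EAGLESERVERS_API_KEY")
    if env_key:
        return env_key
    path = Path(config_path).expanduser()
    if path.exists():
        for line in path.read_text().splitlines():
            if line.startswith("api_key:"):
                return line.split(":", 1)[1].strip()
    return None
```

The same precedence logic would apply regardless of which method you choose day to day.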
Security Note: Never commit your API keys to version control.
GPU Computing Guide
Optimize your GPU workloads for maximum performance.
Available GPU Types
| GPU Model | VRAM | CUDA Cores | Best For               | Price/hr |
|-----------|------|------------|------------------------|----------|
| RTX 4090  | 24GB | 16,384     | Large models, training | $1.79    |
| A100      | 80GB | 6,912      | Enterprise ML          | $2.79    |
| T4        | 16GB | 2,560      | Inference              | $0.55    |
| V100      | 32GB | 5,120      | Research               | $1.29    |
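For budgeting, hourly price times runtime gives a quick estimate. A small Python helper based on the prices in the table above (the dictionary keys are illustrative shorthand, not official instance type names):

```python
# Hourly prices from the GPU table above (USD).
PRICE_PER_HOUR = {"rtx4090": 1.79, "a100": 2.79, "t4": 0.55, "v100": 1.29}

def estimate_cost(gpu, hours, count=1):
    """Estimated cost in USD for `count` GPUs of a given type over `hours`."""
    return round(PRICE_PER_HOUR[gpu] * hours * count, 2)
```

For example, two A100s for a 24-hour training run come to `estimate_cost("a100", 24, count=2)`.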
CUDA Setup
import torch
import torch.nn as nn

print(f"CUDA Available: {torch.cuda.is_available()}")
print(f"GPU Count: {torch.cuda.device_count()}")

# Move your model to the GPU (YourModel is a placeholder for your own nn.Module)
model = YourModel().cuda()

# Spread work across all attached GPUs
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
Base URL
https://api.eagleservers.com/v1
Authentication Header
Authorization: Bearer YOUR_API_KEY
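Any HTTP client works once this header is set. A minimal GET sketch using only the Python standard library; the `/instances` path matches the endpoint documented below, and error handling is omitted for brevity:

```python
import json
import urllib.request

BASE_URL = "https://api.eagleservers.com/v1"

def api_get(path, api_key):
    """GET a JSON resource from the API with a bearer-token header."""
    req = urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `api_get("/instances", api_key)` would return the parsed JSON listing shown below.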
Instances API
GET /instances — List all instances
{
"instances": [
{
"id": "i-1234567890",
"name": "gpu-worker-1",
"type": "gpu-rtx4090",
"status": "running",
"ip": "192.168.1.100",
"created_at": "2024-01-15T10:30:00Z"
}
],
"total": 1
}
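The response is plain JSON, so filtering it client-side is straightforward. For example, extracting the IDs of running instances from the sample payload above:

```python
import json

# Sample response from GET /instances, as shown above.
payload = '''{
  "instances": [
    {"id": "i-1234567890", "name": "gpu-worker-1", "type": "gpu-rtx4090",
     "status": "running", "ip": "192.168.1.100",
     "created_at": "2024-01-15T10:30:00Z"}
  ],
  "total": 1
}'''

data = json.loads(payload)
# Keep only instances whose status is "running".
running = [i["id"] for i in data["instances"] if i["status"] == "running"]
```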
POST /instances — Create new instance
{
"name": "my-gpu-instance",
"type": "gpu-rtx4090",
"region": "us-west-1",
"image": "ubuntu:22.04",
"ssh_keys": ["ssh-rsa AAAAB3..."]
}
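The same request can be made without the SDK. A sketch using Python's standard library; the body mirrors the example above, and the helper function is illustrative, not part of an official client:

```python
import json
import urllib.request

def create_instance(api_key, **body):
    """POST /instances with a JSON body and bearer-token auth."""
    req = urllib.request.Request(
        "https://api.eagleservers.com/v1/instances",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example: `create_instance(key, name="my-gpu-instance", type="gpu-rtx4090", region="us-west-1", image="ubuntu:22.04")`.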
Response
{
"instance": {
"id": "i-9876543210",
"name": "my-gpu-instance",
"type": "gpu-rtx4090",
"status": "provisioning",
"created_at": "2024-01-15T11:00:00Z"
}
}
DELETE /instances/{id} — Delete instance
{
"success": true,
"message": "Instance deleted successfully"
}
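Deletion follows the same pattern. A minimal sketch that issues the DELETE and returns the `success` flag from the response shown above (again an illustrative helper, not an official client):

```python
import json
import urllib.request

def delete_instance(api_key, instance_id):
    """DELETE /instances/{id}; returns True on success."""
    req = urllib.request.Request(
        f"https://api.eagleservers.com/v1/instances/{instance_id}",
        headers={"Authorization": f"Bearer {api_key}"},
        method="DELETE",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read()).get("success", False)
```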