Documentation

Everything you need to build with EagleServers

Quick Start

Get up and running with EagleServers in under 5 minutes.

i. Install the CLI

npm install -g @eagleservers/cli
# or
curl -sSL https://get.eagleservers.com | bash

ii. Authenticate

eagleservers auth login
# Enter your API key when prompted

iii. Deploy Your First Instance

eagleservers deploy --name my-first-app \
  --type gpu-rtx4090 \
  --region us-west-1 \
  --image ubuntu:22.04

Pro Tip: Use eagleservers deploy --help to see all available options.

Installation

Choose your preferred installation method: Node.js, Python, or Docker. The Node.js SDK is shown below.

// Install SDK
npm install @eagleservers/sdk

// Usage
const EagleServers = require('@eagleservers/sdk');

const client = new EagleServers({
  apiKey: 'your-api-key'
});

// Create instance (top-level await is unavailable in CommonJS,
// so wrap the call in an async function)
async function main() {
  const instance = await client.instances.create({
    name: 'gpu-worker',
    type: 'gpu-rtx4090',
    region: 'us-west-1'
  });
  console.log(instance);
}

main();

Authentication

All API requests require authentication using your API key.

i. Getting Your API Key

  • Sign in to your dashboard
  • Navigate to Settings → API Keys
  • Click "Generate New Key"
  • Store your key securely

ii. Using Your API Key

# Via environment variable
export EAGLESERVERS_API_KEY="your-api-key"

# Via CLI flag
eagleservers --api-key="your-api-key" instances list

# Via config file (create the directory first)
mkdir -p ~/.eagleservers
echo "api_key: your-api-key" > ~/.eagleservers/config.yaml
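The three mechanisms above can be combined in one client. As an illustration, here is a minimal Python sketch of one plausible resolution order (explicit flag, then environment variable, then config file). The actual precedence the EagleServers CLI applies is not documented here, so treat the order, and the `resolve_api_key` helper name, as assumptions:

```python
import os
from pathlib import Path

def resolve_api_key(cli_key=None):
    """Resolve an API key: explicit value, then env var, then config file.

    The precedence shown is an assumption, not the documented CLI behavior.
    """
    if cli_key:
        return cli_key
    env_key = os.environ.get("EAGLESERVERS_API_KEY")
    if env_key:
        return env_key
    config = Path.home() / ".eagleservers" / "config.yaml"
    if config.exists():
        # Minimal parse of the one-line config written above.
        for line in config.read_text().splitlines():
            if line.startswith("api_key:"):
                return line.split(":", 1)[1].strip()
    return None
```

Whichever source wins, avoid committing keys to version control; the environment-variable form is the usual choice for CI systems.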

GPU Computing Guide

Optimize your GPU workloads for maximum performance.

Available GPU Types

GPU Model   VRAM   CUDA Cores   Best For                 Price/hr
RTX 4090    24GB   16,384       Large models, training   $1.79
A100        80GB   6,912        Enterprise ML            $2.79
T4          16GB   2,560        Inference                $0.55
V100        32GB   5,120        Research                 $1.29
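With the hourly rates above, estimating a job's cost is simple multiplication: price per hour × hours × GPU count. A small helper (the price table is copied from above; the `estimate_cost` name is illustrative, not part of the SDK):

```python
# Hourly prices (USD) from the GPU table above
GPU_PRICE_PER_HR = {
    "rtx-4090": 1.79,
    "a100": 2.79,
    "t4": 0.55,
    "v100": 1.29,
}

def estimate_cost(gpu, hours, count=1):
    """Estimate the cost of running `count` GPUs of the given type."""
    return round(GPU_PRICE_PER_HR[gpu] * hours * count, 2)

print(estimate_cost("rtx-4090", 10))       # 17.9  (one RTX 4090 for 10 hours)
print(estimate_cost("t4", 24, count=4))    # 52.8  (four T4s for a day)
```

Note that four T4s for a day cost less than one day of a single A100; for inference workloads the cheaper card is often the better fit.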

CUDA Setup

import torch
import torch.nn as nn
import eagleservers

# Check CUDA availability
print(f"CUDA Available: {torch.cuda.is_available()}")
print(f"GPU Count: {torch.cuda.device_count()}")

# Initialize model on GPU (YourModel is a placeholder for your own nn.Module)
model = YourModel().cuda()

# Multi-GPU data parallelism on a single node
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
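Hardcoding `.cuda()` fails on CPU-only machines, so a device-agnostic pattern is more portable for code that must also run locally. A minimal sketch (the tiny `nn.Linear` model is just a stand-in for your own module):

```python
import torch
import torch.nn as nn

# Pick the best available device; falls back to CPU when no GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model and the data to the same device before the forward pass.
model = nn.Linear(16, 4).to(device)
batch = torch.randn(8, 16, device=device)
output = model(batch)

print(output.shape)  # torch.Size([8, 4])
```

The same script then runs unchanged on an EagleServers GPU instance and on a laptop, which keeps local debugging cheap.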