Prerequisites: NVIDIA GPU(s), Docker with the NVIDIA Container Toolkit

On GPU cloud platforms like Vast.ai and RunPod, you just need to create a custom template with the image ghcr.io/eduardoslonski/telescope:latest; they handle the rest. On Lambda, CoreWeave, and similar VM-based platforms, Docker comes preinstalled, so you can pull and run the image directly.
Pull the image
docker pull ghcr.io/eduardoslonski/telescope:latest
Start the container
docker run --rm --gpus all --ipc=host --shm-size=16g --network=host \
--ulimit memlock=-1 --ulimit stack=67108864 --ulimit nofile=65536:65536 \
-it ghcr.io/eduardoslonski/telescope:latest /bin/bash
--ipc=host and --shm-size=16g are required for NCCL shared memory across GPUs. --ulimit memlock=-1 unlocks GPU memory pinning for efficient transfers. Platforms like Vast.ai and RunPod handle these flags automatically when using their template system.
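As a sanity check, assuming a Linux container where /dev/shm is the shared-memory mount, a short Python snippet can confirm the --shm-size setting took effect (with --shm-size=16g it should report roughly 16 GiB):

```python
import os

def shm_size_bytes(path="/dev/shm"):
    # Total size of the filesystem backing `path`, in bytes.
    # For /dev/shm this is the shared-memory budget NCCL can use.
    stats = os.statvfs(path)
    return stats.f_frsize * stats.f_blocks

if __name__ == "__main__":
    print(f"/dev/shm size: {shm_size_bytes() / 2**30:.1f} GiB")
```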
The Telescope source code is located at /root/telescope inside the container.

Set up Weights & Biases
Telescope logs training data to Weights & Biases, which the UI Visualization uses to display metrics and rollouts. Log in before starting training:

wandb login

Run training
Inside the container, run training with any of the example configs:

uv run train.py --config configs/examples/example_countdown.yaml
Prerequisites: NVIDIA GPU(s), Python 3.11+, uv

Install uv (if not already installed):

curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/eduardoslonski/telescope.git
cd telescope
uv venv --python 3.11
source .venv/bin/activate
uv sync
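Since the venv above is pinned to Python 3.11, a minimal sketch to confirm the active interpreter meets that floor:

```python
import sys

def python_ok(version_info=sys.version_info) -> bool:
    # Telescope's source setup pins Python 3.11, so require at least that.
    return tuple(version_info[:2]) >= (3, 11)

if __name__ == "__main__":
    status = "OK" if python_ok() else "too old for Telescope"
    print(f"Python {sys.version.split()[0]}: {status}")
```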
Set up Weights & Biases
Telescope logs training data to Weights & Biases, which the UI Visualization uses to display metrics and rollouts. Log in before starting training:

wandb login

Run training
Run training with any of the example configs:

uv run train.py --config configs/examples/example_countdown.yaml
Installing from source does not include some performance libraries (Transformer Engine, NVIDIA Apex) that are pre-built into the Docker image and would otherwise require complex compilation from source. Training still works fine without them, just without the speedups they provide.
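To see which of these optional accelerators a given environment actually has, here is a small check; the import names transformer_engine and apex are their usual module names, assumed here:

```python
import importlib.util

def check_optional_libs(names=("transformer_engine", "apex")):
    # Report which optional performance libraries are importable,
    # without actually importing them (find_spec only locates the module).
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, found in check_optional_libs().items():
        state = "installed" if found else "not installed (training still works)"
        print(f"{name}: {state}")
```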