Docker Compose Tip #50: GPU support with deploy.resources
Running ML models, video processing, or any GPU-accelerated workload? Compose lets you reserve GPU devices for specific services.

## Basic GPU access

Give a service access to all available GPUs:

```yaml
services:
  ml-training:
    image: pytorch/pytorch
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

## Limiting GPU count

Reserve a specific number of GPUs instead of all:

```yaml
services:
  inference:
    image: mymodel:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

## Selecting specific GPUs by ID

Target specific GPU devices when you have multiple:

...