GPU devices are not attached during a normal `docker build`; GPU queries in `RUN` instructions will fail or return misleading results.
| Property | Value |
|---|---|
| Severity | Error |
| Category | Correctness |
| Default | Enabled |
| Auto-fix | No |
## Description
Detects `RUN` instructions that query GPU hardware at build time. GPU devices are not attached during a normal
`docker build`, so commands like `nvidia-smi` and runtime framework checks like `torch.cuda.is_available()` will
either fail outright or return misleading results (e.g., zero devices, `False`).

This rule does not fire on `CMD` or `ENTRYPOINT`, where GPU queries are expected to run at container startup.
## Why this matters
- Build failures — `nvidia-smi` will exit non-zero when no GPU is present, breaking the build
- Silent wrong results — `torch.cuda.is_available()` returns `False` at build time, which can cause downstream logic to skip GPU code paths or produce incorrect configuration
- Misleading smoke tests — a passing build does not mean GPU support works; the check must happen at runtime
- Official guidance — Hugging Face explicitly warns that GPU hardware is not available during `docker build`
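Because these checks only give a meaningful answer once the container is running with a GPU attached, the fix is to defer them to startup rather than delete them. A minimal sketch of startup-time device selection (the helper name `pick_device` is ours, not part of any framework):

```python
def pick_device() -> str:
    """Choose a compute device at container startup, never at build time."""
    try:
        # Importing torch is cheap; the CUDA query below is only
        # meaningful once the container runs with `--gpus` attached.
        import torch
    except ImportError:
        return "cpu"
    # At runtime, False here genuinely means "no GPU attached".
    return "cuda" if torch.cuda.is_available() else "cpu"

if __name__ == "__main__":
    print(pick_device())
```

Calling this from an entrypoint script (rather than baking its answer into the image at build time) keeps the image portable across GPU and CPU hosts.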
## Detected patterns

### GPU query commands
| Command | Description |
|---|---|
| `nvidia-smi` | NVIDIA System Management Interface (queries GPU hardware) |
| `nvidia-debugdump` | NVIDIA debug information dump |
| `nvidia-persistenced` | NVIDIA persistence daemon (requires GPU) |
### Python/ML framework runtime checks
| Pattern | Description |
|---|---|
| `torch.cuda.is_available()` | PyTorch CUDA availability check |
| `torch.cuda.device_count()` | PyTorch CUDA device count |
| `torch.cuda.get_device_name()` | PyTorch CUDA device name query |
| `torch.cuda.current_device()` | PyTorch current CUDA device |
| `tf.test.is_gpu_available()` | TensorFlow GPU availability check |
| `tf.config.list_physical_devices()` | TensorFlow physical device listing |
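Taken together, the two tables amount to a small pattern list matched against `RUN` lines only. One way such a check could be sketched (illustrative; not this rule's actual implementation):

```python
import re

# Patterns from the tables above, as regexes over a Dockerfile line.
GPU_QUERY_PATTERNS = [
    r"\bnvidia-smi\b",
    r"\bnvidia-debugdump\b",
    r"\bnvidia-persistenced\b",
    r"torch\.cuda\.(is_available|device_count|get_device_name|current_device)\s*\(",
    r"tf\.test\.is_gpu_available\s*\(",
    r"tf\.config\.list_physical_devices\s*\(",
]

def flags_gpu_query(line: str) -> bool:
    """Return True if a Dockerfile line is a RUN instruction containing a GPU query."""
    instr = line.lstrip().split(maxsplit=1)
    # Only RUN is checked; CMD and ENTRYPOINT are expected to run at startup.
    if not instr or instr[0].upper() != "RUN":
        return False
    return any(re.search(p, line) for p in GPU_QUERY_PATTERNS)
```

A real linter would parse the Dockerfile AST and handle line continuations, but the per-instruction logic is the same shape.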
## Examples
### Violation
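A hypothetical Dockerfile showing the patterns this rule flags (base image tag chosen for illustration):

```dockerfile
FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04

# Both lines are flagged: no GPU is attached during `docker build`,
# so nvidia-smi exits non-zero and the Python check prints "False".
RUN nvidia-smi
RUN python3 -c "import torch; print(torch.cuda.is_available())"
```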
### No violation
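The same query is fine at container startup, where a GPU can actually be attached (e.g., `docker run --gpus all`):

```dockerfile
FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04

# Not flagged: ENTRYPOINT runs when the container starts,
# after --gpus has attached the device.
ENTRYPOINT ["nvidia-smi"]
```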
## Configuration
This rule has no rule-specific options.

## References
- Hugging Face Docker Spaces GPU docs — explicitly warns GPU is unavailable during build
- NVIDIA Container Toolkit