Using CUDA in Docker Images¶
Docker Usage
The information on this page assumes a working knowledge of using Docker to create images and push them to a repository. If you need to review that material, please see the link below.
Docker Basics: Building, Tagging, & Pushing A Custom Docker Image
Overview¶
This documentation will help you ensure that you are using the most appropriate CUDA version in your Docker image for the Scientific Compute Platform.
Using the Correct Version¶
Examples of appropriate base images:
- nvidia/cuda:12.4.1-base-ubuntu22.04
- nvidia/cuda:12.4.1-runtime-ubuntu22.04
- ghcr.io/washu-it-ris/novnc:ubuntu22.04_cuda12.4_runtime
- ghcr.io/washu-it-ris/novnc:ubuntu22.04_cuda12.4_devel
NVIDIA offers many base images to develop from; they can be found here: https://hub.docker.com/r/nvidia/cuda/tags
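As an illustration, a custom image for the platform might start from one of the bases above. This is a minimal sketch, not a platform requirement; the packages installed are placeholders for whatever your workload actually needs.

```dockerfile
# Start from a CUDA runtime base matching the platform's CUDA version
# (12.4, per the examples above).
FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04

# Illustrative only: install the tools your workload needs.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

CMD ["python3", "--version"]
```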
Testing Your Image¶
Shown below are the steps to run a test job.

- Start up an interactive job with your Docker image.
- Clone the repository containing the test script for the OSU GPU test.

```
git clone https://github.com/WashU-IT-RIS/docker-osu-micro-benchmarks.git
```

- Change directory to docker-osu-micro-benchmarks.

```
cd docker-osu-micro-benchmarks
```

- Run an OSU Benchmark GPU test. Replace <test> with the OSU test you want to run (for example, osu_bw for the OSU bandwidth test), and replace <compute-group> with the compute group you are a member of.

```
QUEUE=subscription bin/osu-test-gpu.sh <test> -G <compute-group>
```
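If the test fails to see a GPU, one common cause is an image whose CUDA version is newer than the version supported by the node's driver (nvidia-smi reports the driver's supported CUDA version). A minimal sketch of that version comparison, using placeholder version strings that are assumptions, not values from the platform:

```shell
# Placeholder values (assumptions): take driver_cuda from nvidia-smi on a
# GPU node, and image_cuda from your base image's tag.
driver_cuda="12.4"
image_cuda="12.4.1"

# The image's CUDA major.minor should not exceed the driver's.
if [ "$(printf '%s\n' "${image_cuda%.*}" "$driver_cuda" | sort -V | tail -n1)" = "$driver_cuda" ]; then
    echo "OK: CUDA ${image_cuda} image is compatible with driver CUDA ${driver_cuda}"
else
    echo "WARNING: CUDA ${image_cuda} image may be too new for driver CUDA ${driver_cuda}"
fi
```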