Version: 0.7.x [Latest Beta]

Docker Deployment

Running the DENKflow API in a Docker container requires careful handling of the data directory for persistence, especially when using OneTimeLicenseSource or TensorRT caching. The volumes that need to be mounted are listed in the Configuration section.

Sample Dockerfile (Jetson Orin)

FROM nvcr.io/nvidia/l4t-tensorrt:r10.3.0-runtime

RUN apt-get update && apt-get install -y wget

RUN wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb \
&& dpkg -i cuda-keyring_1.1-1_all.deb \
&& rm cuda-keyring_1.1-1_all.deb \
&& apt-get update \
&& apt-get install -y libcudnn9-cuda-12 \
&& rm -rf /var/lib/apt/lists/*

RUN pip install denkflow[gpu] --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple

WORKDIR /app
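
The Dockerfile above only installs denkflow and sets the working directory; it does not add any application code. A minimal sketch of how the image could be finished, assuming a hypothetical entry script main.py in your build context:

COPY main.py /app/main.py
CMD ["python3", "main.py"]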

Sample Dockerfile (Jetson Xavier with CUDA Compatibility Layer)

For Jetson Xavier devices, which require the CUDA compatibility layer due to driver incompatibilities:

FROM ubuntu:22.04

# python3-pip is needed for the pip install step below (not included in the base image)
RUN apt-get update \
&& apt-get install --no-install-recommends -y ca-certificates wget python3 python3-pip

RUN wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.1-1_all.deb \
&& dpkg -i cuda-keyring_1.1-1_all.deb \
&& rm cuda-keyring_1.1-1_all.deb

# Download cuda-compat and strip its pre-dependency on nvidia-l4t-core
# (the L4T driver libraries are mounted in by the container runtime)
RUN apt-get update \
&& apt-get download cuda-compat-12-2 \
&& dpkg-deb -R cuda-compat-12-2_*.deb unpacked \
&& sed -i '/^Pre-Depends: nvidia-l4t-core/d' unpacked/DEBIAN/control \
&& dpkg-deb -b unpacked cuda-compat-12-2-modified.deb \
&& apt-get install -y ./cuda-compat-12-2-modified.deb \
&& apt-get install -y cuda-toolkit-12-2 libcudnn9-cuda-12 \
&& rm -rf /var/lib/apt/lists/* \
&& rm cuda-compat* \
&& rm -rf unpacked

RUN pip install denkflow[gpu] --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple

ENV LD_LIBRARY_PATH=/usr/local/cuda-12/compat:$LD_LIBRARY_PATH

WORKDIR /app

Note: Jetson Xavier devices have limited execution provider support:

  • ✅ CPU execution provider
  • ✅ CUDA execution provider
  • ❌ TensorRT execution provider (compute capability sm_72 < required sm_75)

Build the Docker image

docker build -t denkflow:latest .
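
Note that the pip index URL in the Dockerfiles above bakes the GitLab deploy token into an image layer. One way to avoid this is a BuildKit build secret; this is a sketch, and the secret id and file name pip_index_url.txt are assumptions. In the Dockerfile, the pip install line would become:

RUN --mount=type=secret,id=pip_index_url \
    pip install denkflow[gpu] --index-url "$(cat /run/secrets/pip_index_url)"

and the image would be built with:

docker build --secret id=pip_index_url,src=pip_index_url.txt -t denkflow:latest .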

Run the Container

To properly identify the machine for licensing and to store persistent data, some volumes must be mounted when running the DENKflow API inside a Docker container. /etc/machine-id must be passed through so the machine can be identified for licensing. The placeholder DIRECTORY_ON_PARENT_SYSTEM stands for a path on your local system, outside of the Docker container, used to store the persistent data.

docker run --runtime nvidia -it \
    -v /etc/machine-id:/etc/machine-id \
    -v DIRECTORY_ON_PARENT_SYSTEM:/root/.config/denkflow \
    denkflow:latest

Note that the container-side mount target must be an absolute path (the container runs as root by default, so the config directory resolves to /root/.config/denkflow), and that shell comments cannot follow a line-continuation backslash.

This setup ensures that the license state (relevant when using the OneTimeLicenseSource) and TensorRT caches are saved in DIRECTORY_ON_PARENT_SYSTEM on the host machine and persist across container restarts.
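
The docker run invocation above can also be expressed as a Compose file for repeatable deployments. A minimal sketch (the service name api is an assumption; DIRECTORY_ON_PARENT_SYSTEM is the same host-path placeholder as above):

services:
  api:
    image: denkflow:latest
    runtime: nvidia
    stdin_open: true
    tty: true
    volumes:
      # machine-id pass-through for licensing
      - /etc/machine-id:/etc/machine-id
      # persistent storage for license state and TensorRT caches
      - DIRECTORY_ON_PARENT_SYSTEM:/root/.config/denkflow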