# Docker Deployment

Running Denkflow in Docker requires careful handling of the data directory for persistence, especially with `OneTimeLicenseSource` or TensorRT caching.
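The data directory can also be pinned from application code instead of the container environment; a minimal sketch (only the `DENKFLOW_DATA_DIRECTORY` variable name comes from this page — the path and the idea of setting it in code are illustrative):

```python
import os
from pathlib import Path

# Point Denkflow at a persistent location *before* the library is used,
# so license state and TensorRT caches survive container restarts.
data_dir = Path("persistent_data").resolve()
data_dir.mkdir(parents=True, exist_ok=True)

# Respect a value already set by the container environment (-e flag).
os.environ.setdefault("DENKFLOW_DATA_DIRECTORY", str(data_dir))
```

With `WORKDIR /app`, the relative path above resolves to `/app/persistent_data`, matching the volume mount shown further down.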
## Sample Dockerfile (Jetson Orin)
```dockerfile
FROM nvcr.io/nvidia/l4t-tensorrt:r10.3.0-runtime

RUN apt-get update && apt-get install -y wget

RUN wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb \
    && dpkg -i cuda-keyring_1.1-1_all.deb \
    && rm cuda-keyring_1.1-1_all.deb \
    && apt-get update \
    && apt-get install -y libcudnn9-cuda-12 \
    && rm -rf /var/lib/apt/lists/*

RUN pip install "denkflow-jetson[gpu]==0.6.4" --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple

WORKDIR /app
```
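A quick sanity check for the resulting image is to query the installed distribution from inside the container; a sketch (the helper name is illustrative — `denkflow-jetson` is the distribution name used in the pip line above):

```python
from importlib import metadata
from typing import Optional

def installed_version(dist: str = "denkflow-jetson") -> Optional[str]:
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None

# Inside the image built above, installed_version() should report "0.6.4".
```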
## Sample Dockerfile (Jetson Xavier with CUDA Compatibility Layer)

Jetson Xavier devices require the CUDA compatibility layer due to driver incompatibilities:
```dockerfile
FROM ubuntu:22.04

RUN apt-get update \
    && apt-get install --no-install-recommends -y ca-certificates wget python3-pip

RUN wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.1-1_all.deb \
    && dpkg -i cuda-keyring_1.1-1_all.deb \
    && rm cuda-keyring_1.1-1_all.deb

# Download cuda-compat and drop its pre-dependency on nvidia-l4t-core,
# which is provided by the NVIDIA container runtime at run time
RUN apt-get update \
    && apt-get download cuda-compat-12-2 \
    && dpkg-deb -R cuda-compat-12-2_*.deb unpacked \
    && sed -i '/^Pre-Depends: nvidia-l4t-core/d' unpacked/DEBIAN/control \
    && dpkg-deb -b unpacked cuda-compat-12-2-modified.deb \
    && apt-get install -y ./cuda-compat-12-2-modified.deb \
    && apt-get install -y cuda-toolkit-12-2 libcudnn9-cuda-12 \
    && rm -rf /var/lib/apt/lists/* \
    && rm cuda-compat* \
    && rm -rf unpacked

RUN pip install "denkflow-jetson[gpu]==0.6.4" --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple

ENV LD_LIBRARY_PATH=/usr/local/cuda-12/compat:$LD_LIBRARY_PATH

WORKDIR /app
```
Note: Jetson Xavier devices have limited execution provider support:
- ✅ CPU execution provider
- ✅ CUDA execution provider
- ❌ TensorRT execution provider (compute capability sm_72 < required sm_75)
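The provider gate in the note above can be sketched as a small helper (the `sm_75` threshold for TensorRT comes from the note; the function name and provider labels are illustrative, not the Denkflow API):

```python
def supported_execution_providers(compute_capability: tuple) -> list:
    """Return the execution providers usable for a given compute capability.

    TensorRT requires compute capability sm_75 or newer, so Xavier (sm_72)
    falls back to CUDA and CPU only, while Orin (sm_87) gets all three.
    """
    providers = ["CPU", "CUDA"]
    if compute_capability >= (7, 5):
        providers.insert(0, "TensorRT")
    return providers

print(supported_execution_providers((7, 2)))  # Xavier: ['CPU', 'CUDA']
print(supported_execution_providers((8, 7)))  # Orin: ['TensorRT', 'CPU', 'CUDA']
```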
## Build the Docker image

```bash
docker build -t denkflow:latest .
```
## Run the container

Mount a host directory into the container and set `DENKFLOW_DATA_DIRECTORY` to point to a path inside the mounted volume. Pass the PAT securely.

```bash
# Create a directory on the host for persistent data
mkdir -p ./denkflow_persistent_data

# Run the container, mounting the host directory and pointing
# DENKFLOW_DATA_DIRECTORY at it inside the container
docker run --runtime nvidia -it \
  -v ./denkflow_persistent_data:/app/persistent_data \
  -e DENKFLOW_DATA_DIRECTORY=/app/persistent_data \
  denkflow:latest
```
This setup ensures that license state (`OneTimeLicenseSource`) and caches (TensorRT) are saved in `./denkflow_persistent_data` on the host machine and persist across container restarts.
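Whether the mount is wired up correctly can be verified with a short script run inside the container; a sketch assuming `DENKFLOW_DATA_DIRECTORY` points at the mounted path (the marker-file name is illustrative):

```python
import os
from pathlib import Path
from typing import Mapping

def check_data_dir(env: Mapping = os.environ) -> Path:
    """Verify the configured data directory exists and is writable.

    Drops a marker file that, if the volume mount is correct, reappears
    under ./denkflow_persistent_data on the host after the container exits.
    """
    data_dir = Path(env["DENKFLOW_DATA_DIRECTORY"])
    data_dir.mkdir(parents=True, exist_ok=True)
    marker = data_dir / ".denkflow_mount_check"
    marker.write_text("ok")
    return marker
```

Run it in the container, then check that `.denkflow_mount_check` shows up in `./denkflow_persistent_data` on the host.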