Version: 0.6.x

Installation Guide

This guide will help you set up the DENKflow API on your system. The DENKflow API is optimized for AI inference on GPUs and other AI hardware acceleration modules, but inference can also run on a CPU.

System Requirements

Minimum Requirements

  • Operating System: Windows 10, Windows 11, Linux (with glibc ≥ 2.31 and libstdc++ ≥ 12)
  • Architecture: x86-64 or ARM64
  • Storage: At least 2 GB of free disk space for models and cache
  • CPU: At least 4 cores for optimal performance
  • RAM: 4 GB minimum; 16 GB or more recommended for larger models
  • Optional GPU: NVIDIA GPU or Jetson Device with at least 4 GB of VRAM
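A quick way to check the CPU and disk minimums above from Python, using only the standard library. The helper name is illustrative, not part of DENKflow; note that stdlib Python cannot easily report installed RAM, so that check is omitted here.

```python
import os
import shutil

# Illustrative helper (not part of DENKflow): check the documented minimums
# of at least 4 CPU cores and 2 GB of free disk space.
def meets_minimums(path=".", min_cores=4, min_free_gb=2):
    cores_ok = (os.cpu_count() or 0) >= min_cores
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    return cores_ok and free_gb >= min_free_gb

print(meets_minimums())
```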

Standard Installation

Prerequisites

  • Python 3.10, 3.11, 3.12, or 3.13
  • ONNX Runtime for GPU evaluation on Windows:
    • pip install onnxruntime-gpu==1.22.0 for CUDA
    • pip install onnxruntime-directml==1.22.0 for DirectML
  • On Linux, the required ONNX Runtime packages are installed automatically when the DENKflow package is installed
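The Python-version prerequisite can be checked programmatically. This is a small sketch encoding the supported range listed above (3.10 through 3.13); the function name is an assumption for illustration.

```python
import sys

# Supported interpreter versions per the prerequisites above.
SUPPORTED = {(3, 10), (3, 11), (3, 12), (3, 13)}

def python_supported(version_info=sys.version_info):
    # Compare only (major, minor); patch releases don't matter here.
    return (version_info[0], version_info[1]) in SUPPORTED
```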

Installation

For a basic installation that will run on a CPU:

pip install denkflow --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple
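After installing, you can verify the package is importable. A minimal sketch using `importlib` so the check degrades gracefully instead of raising `ImportError` when the package is missing; the helper name is illustrative.

```python
import importlib.util

# Returns True if the named module can be found on the current
# interpreter's import path (i.e. the install succeeded).
def is_installed(module_name):
    return importlib.util.find_spec(module_name) is not None

if is_installed("denkflow"):
    print("denkflow is available")
else:
    print("denkflow not found - check the pip install step above")
```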

Hardware Acceleration Options

The DENKflow API supports various hardware acceleration methods to significantly improve inference speeds. Choose the option that matches your hardware:

GPU Acceleration (NVIDIA GPUs)

For accelerated processing on NVIDIA hardware:

pip install denkflow[gpu] --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple

Requirements:

  • CUDA Toolkit version 12.x
  • Compatible NVIDIA GPU or NVIDIA Jetson device
  • Up-to-date NVIDIA drivers

Notes:

  • Includes both CUDA and TensorRT execution providers
  • First-time TensorRT initialization takes ~15 minutes to build the engine cache
  • Subsequent runs will be significantly faster
  • The cache is stored in the DENKFLOW_DATA_DIRECTORY (see Configuration section)
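To see where the TensorRT engine cache will live, you can inspect the `DENKFLOW_DATA_DIRECTORY` environment variable mentioned above. The fallback value in this sketch is a placeholder assumption, not the library's actual default; see the Configuration section for the real behavior.

```python
import os

# Resolve the cache location from the documented environment variable.
# The fallback string is a placeholder, NOT DENKflow's real default.
def data_directory(default="(unset - see Configuration section)"):
    return os.environ.get("DENKFLOW_DATA_DIRECTORY", default)

print(data_directory())
```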

Jetson Devices

For NVIDIA Jetson devices, use the specialized Jetson package:

pip install denkflow-jetson[gpu] --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple

Important Notes for Jetson Users:

  • Always use denkflow-jetson package (not denkflow) for Jetson devices
  • Jetson Xavier limitations:
    • TensorRT execution provider is NOT supported due to outdated compute capability (sm_72, minimum required: sm_75)
    • Only CPU and CUDA execution providers will work
  • Jetson Orin: Full TensorRT support available
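The compatibility rule above can be expressed as a simple comparison: TensorRT requires compute capability sm_75 or newer, so Xavier (sm_72) falls back to the CPU and CUDA providers while Orin (sm_87) qualifies. The function below is an illustrative sketch of that rule, not a DENKflow API.

```python
# Minimum compute capability for the TensorRT execution provider,
# per the Jetson notes above.
MIN_TENSORRT_CC = (7, 5)  # sm_75

def tensorrt_supported(compute_capability):
    # Tuple comparison: (7, 2) < (7, 5) < (8, 7)
    return tuple(compute_capability) >= MIN_TENSORRT_CC

print(tensorrt_supported((7, 2)))  # Xavier (sm_72) -> False
print(tensorrt_supported((8, 7)))  # Orin (sm_87)   -> True
```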

Jetson Xavier: Potential Import Crash Fix

If you encounter a crash when importing denkflow on Jetson Xavier with the error:

/opt/rh/gcc-toolset-14/root/usr/include/c++/14/bits/stl_vector.h:1130: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) [with _Tp = unsigned int; _Alloc = std::allocator<unsigned int>; reference = unsigned int&; size_type = long unsigned int]: Assertion '__n < this->size()' failed.
Aborted (core dumped)

Enable all CPU cores manually:

sudo su
echo 1 > /sys/devices/system/cpu/cpu4/online
echo 1 > /sys/devices/system/cpu/cpu5/online

Troubleshooting

If you encounter installation issues:

  1. Ensure your Python version meets the requirements: python --version
  2. Check your system libraries: ldd --version
  3. For GPU acceleration, verify CUDA installation: nvcc --version
  4. See our Troubleshooting Guide for common solutions

Next Steps

After a successful installation, check out the Core Concepts to begin using the DENKflow API.