Installation Guide
This guide will help you set up the DENKflow API on your system. The DENKflow API is optimized for AI inference on GPUs and other hardware acceleration modules, but inference can also run on a CPU.
System Requirements
Minimum Requirements
- Operating System: Windows 10, Windows 11, Linux (with glibc ≥ 2.31 and libstdc++ ≥ 12)
- Architecture: x86-64 or ARM64
- Storage: At least 2 GB of free disk space for models and cache
Recommended Hardware
- CPU: At least 4 cores for optimal performance
- RAM: 4 GB minimum; 16 GB or more recommended for larger models
- Optional GPU: NVIDIA GPU or Jetson Device with at least 4 GB of VRAM
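If you are unsure whether a machine meets these requirements, the following Python snippet (standard library only, no DENKflow code involved) prints the relevant figures:
# Quick sanity check against the requirements listed above (stdlib only).
import os
import platform
import shutil

print("Architecture:", platform.machine())      # expect x86_64/AMD64 or aarch64/ARM64
print("CPU cores:", os.cpu_count())              # at least 4 recommended
free_gb = shutil.disk_usage(".").free / 1024**3
print(f"Free disk space: {free_gb:.1f} GB")      # at least 2 GB required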
Standard Installation
Python
Prerequisites
- Python 3.10, 3.11, 3.12 or 3.13
- ONNX Runtime for GPU evaluation on Windows:
  - pip install onnxruntime-gpu==1.22.0 for CUDA
  - pip install onnxruntime-directml==1.22.0 for DirectML
- On Linux, the required ONNX Runtime packages are installed automatically together with the DENKflow package
Installation
For a basic installation that will run on a CPU:
pip install denkflow --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple
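To verify the installation, a minimal check such as the one below is sufficient. It only assumes that the installed distribution and the importable module are both named denkflow, matching the command above; no further DENKflow API is used.
# Minimal post-install check: the import fails if the installation is broken.
import importlib.metadata

import denkflow

print("DENKflow version:", importlib.metadata.version("denkflow"))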
Hardware Acceleration Options
The DENKflow API supports various hardware acceleration methods to significantly improve inference speeds. Choose the option that matches your hardware:
GPU Acceleration (NVIDIA GPUs)
For accelerated processing on NVIDIA hardware:
pip install denkflow[gpu] --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple
Requirements:
- CUDA Toolkit version 12.x
- Compatible NVIDIA GPU or NVIDIA Jetson device
- Up-to-date NVIDIA drivers
Notes:
- Includes both CUDA and TensorRT execution providers
- First-time TensorRT initialization takes ~15 minutes to build the engine cache
- Subsequent runs will be significantly faster
- The cache is stored in the DENKFLOW_DATA_DIRECTORY (see the Configuration section)
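Because DENKflow runs its models through ONNX Runtime, one way to confirm that GPU acceleration is actually available is to list the execution providers ONNX Runtime reports. The snippet below does not use any DENKflow-specific API:
# After installing denkflow[gpu], "CUDAExecutionProvider" (and, on supported
# hardware, "TensorrtExecutionProvider") should appear in this list.
import onnxruntime as ort

print(ort.get_available_providers())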
Jetson Devices
For NVIDIA Jetson devices, use the specialized Jetson package:
pip install denkflow-jetson[gpu] --index-url https://denkflow:gldt-ep8wbqptqrvjTYoTvxTA@gitlab.com/api/v4/projects/69262737/packages/pypi/simple
Important Notes for Jetson Users:
- Always use the denkflow-jetson package (not denkflow) for Jetson devices
- Jetson Xavier limitations:
  - The TensorRT execution provider is NOT supported due to outdated compute capability (sm_72, minimum required: sm_75)
  - Only the CPU and CUDA execution providers will work
- Jetson Orin: full TensorRT support available
Jetson Xavier: Potential Import Crash Fix
If you encounter a crash when importing denkflow on Jetson Xavier with the error:
/opt/rh/gcc-toolset-14/root/usr/include/c++/14/bits/stl_vector.h:1130: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) [with _Tp = unsigned int; _Alloc = std::allocator<unsigned int>; reference = unsigned int&; size_type = long unsigned int]: Assertion '__n < this->size()' failed.
Aborted (core dumped)
Enable all CPU cores manually:
sudo su
echo 1 > /sys/devices/system/cpu/cpu4/online
echo 1 > /sys/devices/system/cpu/cpu5/online
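You can verify that the cores are online afterwards by reading the kernel's CPU masks (standard Linux sysfs paths; no DENKflow code involved):
# Linux only: compare the CPUs the kernel reports as online with those present.
from pathlib import Path

online = Path("/sys/devices/system/cpu/online").read_text().strip()
present = Path("/sys/devices/system/cpu/present").read_text().strip()
print(f"Online CPUs: {online} (present: {present})")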
Troubleshooting
If you encounter installation issues:
- Ensure your Python version meets the requirements: python --version
- Check your system libraries: ldd --version
- For GPU acceleration, verify the CUDA installation: nvcc --version
- See our Troubleshooting Guide for common solutions
C
Prerequisites
- A working C compiler
Relevant Files
The compiled binaries and the header file can be downloaded from this repository. Every release contains:
- denkflow.h: The header file that contains the function definitions
- DENKflow API packages for different platforms:
  - Linux packages contain:
    - libdenkflow.so: The library file containing the binary code of the DENKflow functions
  - Windows packages contain:
    - libdenkflow.dll: The library file containing the binary code of the DENKflow functions
    - libdenkflow.dll.lib: The file to be linked against during compilation
- Additional ONNX Runtime dependencies that are needed for GPU evaluation on different platforms
Note for Linux on AArch64 (including Jetson devices): For this platform, the ONNX Runtime package is always required. The environment variable ORT_DYLIB_PATH must be set to the path of the libonnxruntime.so* file. The other files in the ONNX Runtime package are only required for running inferences on the GPU.
Troubleshooting
If you encounter installation issues, ensure that all required DLLs/shared objects can be found both at compile time and at run time.
Next Steps
After a successful installation, check out the Core Concepts to begin using the DENKflow API.