# Denkflow Integration Guide
This software is currently in alpha status. This means that:
- It is suitable for testing environments but caution is advised for production use
- Some bugs may still be present
- The API is stabilizing but minor changes may still occur
- Most core features are implemented and working correctly
The following features are planned for upcoming releases:
- Support for all model types (classification, segmentation, rotated object detection, etc.)
- USB Dongle Support
- C# API for .NET applications
- Public Rust API for high-performance integrations
- Output Modules
- etc.
## Overview
Denkflow is DENKweit's solution for a complete end-to-end integration, from training AI models on our Vision AI Hub to executing them in your production environment.
Currently, object detection and OCR are implemented for CPU, NVIDIA GPU, and Jetson Orin devices.
## Quick Start
### Python
```python
from denkflow import Pipeline, ImageTensor

# Fill these with custom values
model_file = "path/to/model/file.denkflow"
pat = "personal_access_token"
image_file = "path/to/an/image.jpg"
confidence_threshold = 0.9

# Default Values for Object Detection
input_topic = "camera/image"
output_topic = "bounding_box_filter_node/filtered_bounding_boxes"

# --- Read Model File ---
pipeline = Pipeline.from_denkflow(model_file, pat=pat)

# --- Initialization ---
pipeline.initialize()

# --- Subscribe to Outputs ---
receiver = pipeline.subscribe_bounding_box_tensor(output_topic)

# --- Send Image into Pipeline ---
pipeline.publish_image_tensor(input_topic, ImageTensor.from_file(image_file))

# --- Run Pipeline ---
pipeline.run()

# --- Receive and Process Results ---
objects = receiver.receive().to_objects(confidence_threshold)
print(f"\nDetected {len(objects)} objects:")
for obj in objects:
    print(f"- Class: {obj.class_label.name}, Confidence: {obj.confidence:.2f}")
    print(f"  BBox: ({obj.x1}, {obj.y1}), ({obj.x2}, {obj.y2})")
```
### C

When using the C API, objects are provided as opaque structs declared in the denkflow.h header. The implementer's program only ever operates on pointers to these structs. Every pointer must be initialized to nullptr, which tells the denkflow API that the object has not been created yet.

Once an object is no longer needed, it must be freed to prevent memory leaks. After freeing, the object's pointer is reset to nullptr. An individual object is freed with:

```cpp
free_object((void**)&object);
```
You can print a list of all currently allocated objects to the console with:

```cpp
list_objects();
```

To free all objects at once, call:

```cpp
free_all_objects();
```
Here is a minimal but complete example of an object detection workflow:
```cpp
#include <iostream>
#include <string>

#include "denkflow.h"

// --- Helper Function to Interpret Error Codes ---
void process_return(DenkflowResult return_value, std::string function_name) {
    std::string error_message = get_last_error();
    std::cout << function_name << " returned " << (int32_t)return_value;
    if (error_message.size() > 0) {
        std::cout << " [" << error_message << "]";
    }
    std::cout << std::endl;
}

const char NULL_BYTE[1] = {'\0'};

int main() {
    DenkflowResult r;

    // Every handle starts as nullptr so the API knows it is uninitialized
    HubLicenseSource* hub_license_source = nullptr;
    Pipeline* pipeline = nullptr;
    InitializedPipeline* initialized_pipeline = nullptr;
    ImageTensor* image_tensor = nullptr;
    Receiver<BoundingBoxTensor>* receiver = nullptr;
    BoundingBoxTensor* bounding_box_tensor = nullptr;
    BoundingBoxResults* bounding_box_results = nullptr;

    // Fill these with custom values
    std::string model_file = "path/to/model/file.denkflow";
    std::string pat = "personal_access_token";
    std::string image_path = "path/to/an/image.jpg";
    float confidence_threshold = 0.9f;

    // Default Values for Object Detection
    std::string input_topic = "camera/image";
    std::string output_topic = "bounding_box_filter_node/filtered_bounding_boxes";

    // --- Create License Source ---
    r = hub_license_source_from_pat(&hub_license_source, pat.c_str(), NULL_BYTE, NULL_BYTE);
    process_return(r, "hub_license_source_from_pat");

    // --- Read Model File ---
    r = pipeline_from_denkflow(&pipeline, model_file.c_str(), (void**)&hub_license_source);
    process_return(r, "pipeline_from_denkflow");

    // --- Initialization ---
    r = initialize_pipeline(&initialized_pipeline, &pipeline);
    process_return(r, "initialize_pipeline");

    // --- Subscribe to Outputs ---
    r = initialized_pipeline_subscribe_bounding_box_tensor(&receiver, initialized_pipeline, output_topic.c_str());
    process_return(r, "initialized_pipeline_subscribe_bounding_box_tensor");

    // --- Send Image into Pipeline ---
    r = image_tensor_from_file(&image_tensor, image_path.c_str());
    process_return(r, "image_tensor_from_file");

    r = initialized_pipeline_publish_image_tensor(initialized_pipeline, input_topic.c_str(), &image_tensor);
    process_return(r, "initialized_pipeline_publish_image_tensor");

    // --- Run Pipeline ---
    r = initialized_pipeline_run(initialized_pipeline, 3000);
    process_return(r, "initialized_pipeline_run");

    // --- Receive and Process Results ---
    r = receiver_receive_bounding_box_tensor(&bounding_box_tensor, receiver);
    process_return(r, "receiver_receive_bounding_box_tensor");

    r = bounding_box_tensor_to_objects(&bounding_box_results, bounding_box_tensor, confidence_threshold);
    process_return(r, "bounding_box_tensor_to_objects");

    if (r == DenkflowResult::Ok) {
        for (int i = 0; i < bounding_box_results->bounding_boxes_length; i++) {
            std::cout
                << "Box " << i
                << " [" << bounding_box_results->bounding_boxes[i].class_label.name << "]: "
                << bounding_box_results->bounding_boxes[i].confidence
                << std::endl;
        }
    }

    // --- Free Allocated Objects ---
    r = free_all_objects();
    process_return(r, "free_all_objects");

    return 0;
}
```