Version: 0.6.x

Core Concepts

  • Pipeline - The main object representing a data processing graph. You can load a pre-built graph from a .denkflow file or construct one manually. A pipeline must be initialized before it can be run, and it can only be modified before it has been initialized.
  • Node - A processing unit within the pipeline which executes a certain task, e.g., image resizing, object detection or optical character recognition (OCR).
  • Port - Nodes have input and output ports that they use to send and receive data.
  • Tensor - All data packets that are passed between nodes are called tensors. Tensors can have different types, e.g., ImageTensor, BoundingBoxTensor or OcrTensor. Many tensors have a to_objects function, which converts the tensor data into a readable data type. Note that an ImageTensor can contain multiple images in a so-called batch.
  • Topic - Topics establish the data routing relationships between output ports (producers) and input ports (consumers). A topic can be thought of as a name for the connection between an output port of one node and an input port of another node. Each port can only be associated with one topic at a time. Each topic is linked to exactly one output port, but it can be linked to multiple input ports. In practice, this means that each output port delivers its data to only one topic, but that data can be sent to multiple different input ports. Since there is a one-to-one correspondence between output ports and topics, topics are usually named after the output port that produces them, following the convention node_name/output_name. Topics can have an arbitrary name, but they must always include a slash.
  • Publishing - Putting data into a topic is called publishing. This is what output ports do, but you can also manually publish data to a topic, as long as this topic is not already linked to an existing output port. This is commonly used to manually provide images to a pipeline.
  • Subscribing / Receiving - If you want to receive the data that is published to a topic, you need to subscribe to it. Subscribing returns a receiver, which provides a blocking call that waits for data to appear on the topic. This is usually done to receive the results of a pipeline: after calling run on a pipeline, you can use a receiver to wait for the results.
  • Default Topic Names - Every pre-made pipeline has a topic called camera/input. Images are given to the pipeline by publishing an image tensor to this topic. To receive the results of the pipeline, subscribe to the output topic that you are interested in. An initialized pipeline provides the methods get_topics_for_pipeline_input and get_topics_for_pipeline_output, which return the names of the topics that can be used for data input and output. The sketch after this list shows these pieces working together.
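
The following sketch shows how these concepts fit together when running a pre-built pipeline. Pipeline.from_denkflow, run, to_objects, the camera/input topic, and the get_topics_for_pipeline_* methods are described above; the import path, the ImageTensor constructor, and the method names initialize, publish, subscribe, and receive are assumptions made for illustration and may differ in the actual API.

```python
from denkflow import Pipeline, ImageTensor  # assumed import path

# Load a complete, pre-configured pipeline (a .denkflow export from the Hub).
pipeline = Pipeline.from_denkflow("defect_detection.denkflow")

# The pipeline must be initialized before it can be run
# ("initialize" is an assumed method name).
pipeline.initialize()

# Inspect which topics the pipeline reads from and writes to.
print(pipeline.get_topics_for_pipeline_input())   # e.g. ["camera/input"]
print(pipeline.get_topics_for_pipeline_output())  # e.g. ["detector/detections"]

# Subscribe to an output topic; subscribing returns a receiver
# ("subscribe" is an assumed method name, the topic name is an example).
receiver = pipeline.subscribe("detector/detections")

# Manually publish an image tensor to the default input topic
# ("publish" and ImageTensor.from_file are assumed names).
image = ImageTensor.from_file("part_001.png")
pipeline.publish("camera/input", image)

# Run the pipeline and block on the receiver until a result arrives.
pipeline.run()
result = receiver.receive()  # assumed blocking call on the receiver

# Convert the result tensor into plain Python objects.
print(result.to_objects())
```

Because each output port maps to exactly one topic, the names returned by get_topics_for_pipeline_output follow the node_name/output_name convention described above.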

Model Formats: .denkflow vs .denkmodel

The DENKflow API uses two primary model file types obtained from the DENKweit Vision AI Hub:

  1. .denkflow Files (Complete Pipelines):

    • These contain entire, pre-configured processing graphs.
    • They are exported from the "Exports" tab of your model on the Vision AI Hub.
    • Loaded using Pipeline.from_denkflow(...).
    • This is currently the only method for using quantized models exported from the Hub.
  2. .denkmodel Files (Individual AI Models):

    • These contain single AI models (e.g., for object detection, classification or OCR) without the surrounding pipeline structure.
    • They are downloaded directly from the "Network Details" page of your model on the Vision AI Hub.
    • Used when building custom pipelines by adding specific AI nodes (e.g., pipeline.add_object_detection_node(...)).
    • Currently, .denkmodel files downloaded this way are not quantized. Support for downloading single quantized models will be added later.
    • For a full example of how to build your own pipeline using .denkmodel files, see here; a minimal sketch follows below.
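
As a complement to the full example linked above, here is a minimal sketch of the custom-pipeline route. Only add_object_detection_node is taken from this page; the bare Pipeline() constructor, its keyword argument, the node name, and the initialize call are assumptions for illustration.

```python
from denkflow import Pipeline  # assumed import path

# Build a pipeline manually instead of loading a .denkflow export.
pipeline = Pipeline()  # assumed constructor for an empty pipeline

# Add a single AI model (.denkmodel downloaded from "Network Details").
# The node name and keyword argument are hypothetical.
pipeline.add_object_detection_node(
    "detector",
    model_path="my_model.denkmodel",
)

# A pipeline can only be modified before initialization, so finish the
# graph first, then initialize ("initialize" is an assumed method name).
pipeline.initialize()

# From here on, the publish/subscribe/run flow is the same as in the
# sketch under "Core Concepts".
```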