Version: 0.7.x [Latest Alpha]

Configuration Options

The Pipeline.from_denkflow(file_name, **kwargs) method is the primary way to load pre-built .denkflow pipelines. Here are several configuration options for different scenarios:

from denkflow import Pipeline, HubLicenseSource, OneTimeLicenseSource

# Common parameters (replace with your actual values)
denkflow_file_path = "path/to/your_model.denkflow"
your_pat = "YOUR-PERSONAL-ACCESS-TOKEN"
your_license_id = "YOUR-LICENSE-ID" # Optional
custom_hub_endpoint = "https://your.custom.hub.endpoint" # Optional

# 1. Basic: Using PAT only (Recommended for simplicity)
# DENKflow uses the PAT to handle licensing, typically creating a HubLicenseSource internally.
pipeline_pat_only = Pipeline.from_denkflow(denkflow_file_path, pat=your_pat)
print("Pipeline loaded using PAT only.")

# 2. PAT with a specific License ID
# Useful if your PAT has access to multiple licenses and you need to select one.
pipeline_pat_license_id = Pipeline.from_denkflow(
    denkflow_file_path,
    pat=your_pat,
    license_id=your_license_id
)
print(f"Pipeline loaded using PAT and License ID: {your_license_id}")

# 3. PAT with a custom Hub endpoint
# For testing instances of the DENKweit Vision AI Hub.
pipeline_pat_custom_endpoint = Pipeline.from_denkflow(
    denkflow_file_path,
    pat=your_pat,
    endpoint=custom_hub_endpoint
)
print(f"Pipeline loaded using PAT and custom endpoint: {custom_hub_endpoint}")

# 4. Using a pre-configured HubLicenseSource
# Provides more control over license source creation.
hub_license_source = HubLicenseSource.from_pat(pat=your_pat, license_id=your_license_id)
pipeline_hub_ls = Pipeline.from_denkflow(
    denkflow_file_path,
    license_source=hub_license_source
)
print("Pipeline loaded using a pre-configured HubLicenseSource.")

# 5. Using a pre-configured OneTimeLicenseSource
initial_hub_src = HubLicenseSource.from_pat(pat=your_pat, license_id=your_license_id)
one_time_license_source = initial_hub_src.to_one_time_license_source()
pipeline_one_time_ls = Pipeline.from_denkflow(
    denkflow_file_path,
    license_source=one_time_license_source
)
print("Pipeline loaded using a pre-configured OneTimeLicenseSource.")

# 6. PAT with `one_time_registration=True` (Enables offline use after first run)
pipeline_one_time_reg = Pipeline.from_denkflow(
    denkflow_file_path,
    pat=your_pat,
    one_time_registration=True
)
print("Pipeline loaded using PAT with one_time_registration=True.")

Parameter Reference

The Pipeline.from_denkflow method accepts the following key parameters (a combined example is sketched after the list):

  • file_name: str (Required): The path to your .denkflow model file.
  • pat: Optional[str] = None: Your Personal Access Token from the Vision AI Hub.
  • license_source: Optional[HubLicenseSource | OneTimeLicenseSource] = None: A pre-configured license source object.
  • license_id: Optional[str] = None: Optionally selects which license to use when pat is used for licensing; useful if the PAT has access to multiple licenses.
  • endpoint: Optional[str] = None: Custom URL for the Vision AI Hub endpoint.
  • one_time_registration: bool = False: When True, enables offline use after the first successful run.
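
For reference, here is a sketch that combines several of these parameters in a single call. Passing them together like this is an assumption based on the individual examples above, and all values are placeholders:

from denkflow import Pipeline

# Hypothetical combined call -- every value below is a placeholder.
pipeline_combined = Pipeline.from_denkflow(
    "path/to/your_model.denkflow",
    pat="YOUR-PERSONAL-ACCESS-TOKEN",
    license_id="YOUR-LICENSE-ID",                 # select one of several licenses
    endpoint="https://your.custom.hub.endpoint",  # custom Vision AI Hub instance
    one_time_registration=True                    # allow offline use after first run
)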

Important Notes:

  • For any licensing method involving offline persistence, ensure the DENKFLOW_DATA_DIRECTORY environment variable is set to a persistent location (see the sketch after this list).
  • If both pat and license_source are provided, the license_source takes precedence.
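
To illustrate the first note, the environment variable can be set before the pipeline is loaded. This is a minimal sketch, assuming the variable is read from the process environment; the directory path is a placeholder:

import os

from denkflow import Pipeline

# Placeholder path -- pick a location that survives process restarts.
os.environ["DENKFLOW_DATA_DIRECTORY"] = "/var/lib/denkflow"

# Offline-capable loading, as in example 6 above.
pipeline_offline = Pipeline.from_denkflow(
    "path/to/your_model.denkflow",
    pat="YOUR-PERSONAL-ACCESS-TOKEN",
    one_time_registration=True
)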

Thread Configuration

You can configure the number of threads used by ONNX Runtime for model inference. This allows you to optimize performance based on your hardware and workload.

from denkflow import Pipeline

# Create a pipeline
pipeline = Pipeline.from_denkflow("path/to/model.denkflow", pat="your_pat")

# Configure thread counts (must be done before initialize())
pipeline.set_intra_threads(8) # Threads for parallelism within operators
pipeline.set_inter_threads(2) # Threads for parallelism between operators

# Initialize the pipeline
pipeline.initialize()

Thread Configuration Methods

  • set_intra_threads(intra_threads: int): Sets the number of intra-op threads. Controls parallelism within individual nodes/operators. Default is 4.
  • set_inter_threads(inter_threads: int): Sets the number of inter-op threads. Controls parallelism between independent nodes/operators. Default is 4.

Note: These methods must be called before pipeline.initialize(). Calling them after initialization will raise a RuntimeError, as illustrated below.
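
A short sketch of that ordering constraint (file path and PAT are placeholders):

from denkflow import Pipeline

pipeline = Pipeline.from_denkflow("path/to/model.denkflow", pat="your_pat")
pipeline.initialize()

try:
    pipeline.set_intra_threads(8)  # too late: the pipeline is already initialized
except RuntimeError as err:
    print(f"Configure threads before initialize(): {err}")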

Performance Tuning Tips

  • Intra-op threads: Higher values (8-16) can improve performance for compute-intensive operations on multi-core CPUs.
  • Inter-op threads: Lower values (2-4) are typically sufficient, as most pipelines have sequential dependencies.
  • Start with the defaults (4/4) and adjust based on profiling results; one possible starting point is sketched below.
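
As one possible starting point for a compute-heavy workload on a multi-core CPU, the sketch below derives the intra-op thread count from os.cpu_count(). The heuristic and the bounds are assumptions for illustration, not library defaults:

import os

from denkflow import Pipeline

pipeline = Pipeline.from_denkflow("path/to/model.denkflow", pat="your_pat")

# Assumption: give most cores to intra-op work, keep inter-op low because
# pipeline stages usually run sequentially.
cpu_count = os.cpu_count() or 4
pipeline.set_intra_threads(min(16, max(4, cpu_count)))
pipeline.set_inter_threads(2)

pipeline.initialize()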