eSDK Pro How-To Guides

This guide contains a curated set of short, focused examples designed to help you get productive quickly with eSDK Pro. Each example builds upon the previous one to demonstrate common eSDK Pro development workflows—from discovering and connecting cameras to building complete capture pipelines with custom plugins, NVENC compression, and error-handling logic.

In this section, you’ll learn how to:

  • Discover and connect cameras to your system
  • Control camera parameters and acquisition settings
  • Configure triggering and synchronization modes
  • Build and manage pipelines with plugins and data transfers
  • Save and compress captured output
  • Handle errors and adjust diagnostic logging

You can use these examples as practical templates for integrating eSDK Pro into your own software projects. The code shown in each section is ready to compile and run, illustrating the correct use of eSDK Pro objects, function calls, and data structures. Notes following each code block explain context, dependencies, and hardware requirements.

Discovering and Connecting Cameras

Before building pipelines, discover and connect the available cameras on the server. This is the starting point for every application that uses eSDK Pro.

// Discover available cameras on the server
auto cameras = server.DiscoverCameras();

// Add discovered cameras to the server
for (auto& camInfo : cameras) {
  Camera cam = server.AddCamera(camInfo);

  // Example: read the user-defined name
  auto username = cam.GetParameter<StringCameraParam>("DeviceUserName").GetValue();
  std::cout << "Camera: " << username << std::endl;
}

Expected output: Displays user-defined camera names in the console.

Notes:

  • DiscoverCameras() and AddCamera() are eSDK Pro API calls.
  • "DeviceUserName" is a camera parameter.
  • For remote servers, ensure eCaptureProServer is running and firewall rules allow the eSDK port.
  • Always remove cameras before destroying the server.

Controlling Camera Parameters

After connecting a camera, adjust parameters such as exposure time, gain, or pixel format directly through the eSDK Pro API. Parameter control is fundamental to optimizing image brightness, quality, and acquisition timing.

// Get and set the exposure parameter (in microseconds)
UInt32CameraParam exposure = cam.GetParameter<UInt32CameraParam>("Exposure");
uint32_t current = exposure.GetValue();
exposure.SetValue(current + 500);  // increase exposure by 500 µs

Expected output: Applies the new exposure time immediately.

Notes:

  • GetParameter() and SetValue() are eSDK Pro API calls; "Exposure" is a camera parameter.
  • Parameter names and values are model-specific; check Camera Features documentation.

Configuring Triggering and Synchronization

Cameras can be triggered by software, external signals, or synchronized clocks.
The examples below show how to configure each trigger method—from software-triggered single frames to full multi-camera synchronization using PTP.

Using a Software Trigger

Use a software trigger to capture frames on demand without external input—ideal for testing or manual capture workflows.

// Enable trigger mode
EnumCameraParam triggerMode = cam.GetParameter<EnumCameraParam>("TriggerMode");
triggerMode.SetValue("On");

// Use software trigger source
EnumCameraParam triggerSource = cam.GetParameter<EnumCameraParam>("TriggerSource");
triggerSource.SetValue("Software");

// Fire a software trigger via command parameter
CommandCameraParam swTrig = cam.GetParameter<CommandCameraParam>("TriggerSoftware");
swTrig.Execute();

Expected output: Captures a frame each time the software trigger executes.

Notes:

  • All parameter names (TriggerMode, TriggerSource, TriggerSoftware) are camera parameters.
  • Calls shown are eSDK Pro API calls.
  • Values supported depend on the camera model.

Configuring an External Hardware Trigger

Use an external hardware trigger when frame capture depends on a physical signal, such as a sensor, motion detector, or strobe pulse.

// Enable trigger mode
EnumCameraParam triggerMode = cam.GetParameter<EnumCameraParam>("TriggerMode");
triggerMode.SetValue("On");

// Use GPI pin as the trigger source
EnumCameraParam triggerSource = cam.GetParameter<EnumCameraParam>("TriggerSource");
triggerSource.SetValue("GPI_4");  // example pin

// Configure trigger activation (signal edge)
EnumCameraParam trigActivation = cam.GetParameter<EnumCameraParam>("TriggerActivation");
trigActivation.SetValue("Rising_Edge");

// Set acquisition mode for multiple frames if needed
EnumCameraParam acqMode = cam.GetParameter<EnumCameraParam>("AcquisitionMode");
acqMode.SetValue("MultiFrame");

// Set frame count
UInt32CameraParam frameCount = cam.GetParameter<UInt32CameraParam>("AcquisitionFrameCount");
frameCount.SetValue(1);

// Exposure is timed internally by the camera (not by the trigger pulse width)
UInt32CameraParam exposure = cam.GetParameter<UInt32CameraParam>("Exposure");
exposure.SetValue(5000);  // 5 ms

Expected output: Captures frames when a rising-edge trigger signal is received on the configured GPI pin.

Notes:

  • All quoted strings are camera parameters; method calls are eSDK Pro.
  • Parameter names and values (such as GPI_4, Rising_Edge) are model-specific; confirm in GPIO and Acquisition Control docs.
  • With AcquisitionFrameCount = 1, the FrameRate parameter is ignored.
  • To synchronize multiple cameras via hardware triggers, apply the same configuration to each camera.

Synchronizing Multiple Cameras (PTP)

Use Precision Time Protocol (PTP) to synchronize multiple cameras across one or more servers with sub-microsecond accuracy—ideal for 3D imaging, robotics, or multi-angle analysis setups.

// Enable IEEE 1588 PTP (Two-Step mode)
EnumCameraParam ptpMode = cam.GetParameter<EnumCameraParam>("PtpMode");
ptpMode.SetValue("TwoStep");

// Enable PTP synchronization on the pipeline
pipeline.SetPtpSyncMode(true);

// Wait briefly for clocks to converge, then start the pipeline
std::this_thread::sleep_for(std::chrono::seconds(2));  // settle time; tune for your network
pipeline.Start();

Expected output: Runs multiple cameras in precise synchronization using the PTP clock.

Notes:

  • "PtpMode" is a camera parameter; calls shown are eSDK Pro.
  • Keep all devices on the same PTP domain.
  • Allow a short settle time before starting capture.
  • PTP must be enabled for both the cameras and the pipeline.

Building and Managing Pipelines

Pipelines define the data flow between camera, processing, and output tasks. In eSDK Pro, each pipeline connects modular tasks that can run on CPUs, GPUs, or across networked servers. The examples below show how to insert plugins, transfer data between devices, and manage pipeline operation.

Inserting a Plugin Task

Insert a plugin task to add a custom image-processing stage—such as GPU filtering or analysis—into the pipeline.

// Create pipeline with camera → plugin → raw saving
CameraTask camTask = pipeline.CreateCameraTask(cam);

// Create plugin by registered name; deploy to plugin directory first
PluginTask pluginTask = pipeline.CreatePluginTask(server, "CustomTask");

// Save processed frames
RawSavingTask rawTask = pipeline.CreateRawSavingTask(server, "/data/output/processed");

pipeline.ConnectTasks(camTask.GetOutput(), pluginTask.GetInput());
pipeline.ConnectTasks(pluginTask.GetOutput(), rawTask.GetInput());
pipeline.Start();

Expected output: Saves processed frames under /data/output/processed.

Notes:

  • All shown calls are eSDK Pro; "CustomTask" is the plugin’s registered name.
  • The plugin binary must match the server OS, runtime, eSDK version, and CUDA version if applicable.
  • Deploy the plugin library into the server plugin directory before running.
  • If the plugin exposes parameters, set them before Start().

Transferring Frames with FlexTrans (GPU-to-GPU)

Use FlexTrans to transfer frames directly between GPUs on the same server without host memory copies—ideal for high-throughput GPU pipelines.

// Example: GPU-to-GPU FlexTrans (Linux)
// eSDK: tasks must be pinned to different GPUs on the same server
srcTask.SetGpuDeviceId(0);
dstTask.SetGpuDeviceId(1);

pipeline.ConnectTasksFlexTrans(srcTask.GetOutput(), dstTask.GetInput());

Expected output: Transfers frames directly between GPUs without host copies.

Notes:

  • Linux-only in current eSDK builds.
  • Requires NVIDIA driver with GPUDirect enabled.
  • Non-camera tasks must be explicitly pinned to different GPU IDs.

Transferring Frames with FlexTrans (NIC-to-NIC)

Use FlexTrans over Rivermax NICs to move frames between servers without CPU involvement—ideal for distributed capture or processing systems.

// Example: NIC-to-NIC FlexTrans (Linux)
// eSDK: tasks must be created on different servers with Rivermax NICs
PluginTask srcTask = pipeline.CreatePluginTask(server01, "PluginName01");
PluginTask dstTask = pipeline.CreatePluginTask(server02, "PluginName02");

pipeline.ConnectTasksFlexTrans(
  srcTask.GetOutput(), "192.168.1.10",
  dstTask.GetInput(),  "192.168.1.11"
);

Expected output: Transfers frames between servers over Rivermax NICs without host copies.

Notes:

  • Linux-only in current eSDK builds.
  • Requires Rivermax runtime installed on both servers.
  • Use NIC IPs bound to Rivermax; ensure the firewall allows traffic.

Starting, Stopping, and Resetting Pipelines

Start, stop, and reset pipelines safely to manage their full lifecycle. This example shows how to start a pipeline, let it run briefly, and then stop and reset it to prepare for rebuilding or restarting.

// Start pipeline
pipeline.Start();

// Run for 5 seconds
std::this_thread::sleep_for(std::chrono::seconds(5));

// Stop and reset
pipeline.Stop();
pipeline.Reset();

Expected output: Starts the pipeline, runs it for a short period, then stops and resets it cleanly.

Notes:

  • Start, Stop, and Reset are eSDK Pro calls.
  • Modify pipelines only when stopped.
  • Reset before rebuilding tasks.

Saving and Compressing Output

Once frames are captured, you can either save them directly to disk or compress them in real time using NVENC for storage efficiency.
These examples show how to record both raw and compressed output using eSDK Pro pipeline tasks.

Saving Raw Frames

Save uncompressed frame data directly to disk for offline analysis or archival.

// Connect camera output to raw saving task
CameraTask camTask = pipeline.CreateCameraTask(cam);
RawSavingTask rawTask = pipeline.CreateRawSavingTask(server, "/data/output");

// Ensure directory exists and is writable
pipeline.ConnectTasks(camTask.GetOutput(), rawTask.GetInput());
pipeline.Start();

Expected output: Writes raw frame files to /data/output.

Notes:

  • Pipeline, CameraTask, RawSavingTask, ConnectTasks, and Start/Stop are eSDK Pro elements.
  • The output directory must exist and be writable by the server.
  • For a full working app, see the record_raw example.

Compressing with NVENC

Use NVIDIA’s NVENC hardware encoder to compress video in real time during capture—ideal for reducing storage size while maintaining full frame rate.

// Retrieve camera parameters needed for encoder
UInt32CameraParam widthParam       = cam.GetParameter<UInt32CameraParam>("Width");
UInt32CameraParam heightParam      = cam.GetParameter<UInt32CameraParam>("Height");
UInt32CameraParam framerateParam   = cam.GetParameter<UInt32CameraParam>("FrameRate");
EnumCameraParam   pixelFormatParam = cam.GetParameter<EnumCameraParam>("PixelFormat");

PIXEL_FORMAT pixelFormat = StringToPixelFormat(pixelFormatParam.GetValue());

// Create camera and NVENC tasks
CameraTask camTask = pipeline.CreateCameraTask(cam);
NvencTask nvencTask = pipeline.CreateNvencTask(
  server,
  0,  // GPU ID
  "/data/output",
  widthParam.GetValue(),
  heightParam.GetValue(),
  pixelFormat,
  framerateParam.GetValue(),
  params.m_codec,
  params.m_bitrateKbps
);

pipeline.ConnectTasks(camTask.GetOutput(), nvencTask.GetInput());
pipeline.Start();

Expected output: Saves an MP4 file containing NVENC-compressed video under /data/output.

Notes:

  • Width, Height, FrameRate, and PixelFormat are camera parameters; NVENC task creation is eSDK Pro.
  • Requires NVIDIA driver with NVENC enabled.
  • The camera must stream to GPU memory.
  • For a working example, see record_nvenc.

Error Handling and Diagnostics

Robust error handling helps maintain system stability and provides useful feedback during development and testing.
These examples show how to catch SDK exceptions and adjust log verbosity during debugging.

Catching eSDK Exceptions

Use try/catch blocks to handle SDK exceptions gracefully and reset pipelines after runtime errors.

try {
  pipeline.Start();  // eSDK: start pipeline; may throw ESdkProException
  // ... do work ...
  pipeline.Stop();   // eSDK: stop pipeline
}
catch (const ESdkProException& ex) {  // eSDK: catch SDK exception
  std::cerr << "[eSDK] " << ex.what() << "\n";  // log the error message

  // Optional cleanup if partially started:
  pipeline.Stop();   // safe if already stopped
  pipeline.Reset();  // clear tasks/connections before rebuild
}

Expected output: Logs an eSDK error message, then stops and resets the pipeline for a clean rebuild.

Notes:

  • ESdkProException is the eSDK Pro exception type.
  • If Start() throws, use the Stop → Reset → rebuild pattern before retrying.
  • Log the eSDK Pro version at startup to aid support.

Adjusting Log Level

Set the global log level to control how much diagnostic information the SDK outputs.

// eSDK Pro: set global log verbosity before building/starting the pipeline
system.SetLogLevel(LogLevel::Debug);  // Debug during diagnosis; Info/Error in production

Expected output: Displays more detailed log messages in the console or configured sink.

Notes:

  • SetLogLevel() is an eSDK Pro call.
  • Use Debug for troubleshooting; use Info or Error in production.

Updated on
November 4, 2025