Capture synthetic data with virtual cameras

Synthetic data can be used for path planning, object tracking, and visualization in robotic applications.

The Wandelbots NOVA extension API enables you to capture and extract such object data, e.g. the localization and geometric information of workpieces, from NVIDIA Isaac Sim.

Supported data types

  • Color image
  • Image normals
  • Depth
  • Point cloud
  • Bounding boxes
  • Segmentation data

Prerequisites

  1. Make sure that the NVIDIA Isaac Sim extension is installed and connected to the RobotPad.
    If the scene is not automatically displayed, click on the 3D view icon and select the NVIDIA Omniverse icon.
  2. Open the Wandelbots NOVA extension API (Omniservice API) by clicking on the Wandelbots NOVA tab in the NVIDIA Omniverse menu bar.
You'll need to provide object prim paths for almost every API call. You'll find a prim's path in the NVIDIA Isaac Sim scene properties.

Configure camera

  1. Make sure that the scene you're using contains a camera object:
    Use Periphery(Camera) > GET /omniservice/api/v2/periphery/cameras to get an overview of all configured cameras in the scene (a minimal sketch of this call follows below).
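
A quick way to verify this outside of the generated client is to call the endpoint directly. The following is a minimal sketch, assuming the requests package and the same host and port used in the example further down this page; the response shape (a list of prim paths) is inferred from the example output below:

import requests

# List all camera prims configured in the current scene.
# Host and port match the example configuration below; adjust as needed.
response = requests.get("http://localhost:8011/omniservice/api/v2/periphery/cameras")
response.raise_for_status()
print(response.json())  # e.g. ['/World/Camera', '/OmniverseKit_Persp', ...]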

Some camera manufacturers, e.g. Zivid, provide an SDK that can be used to retrieve the required camera matrix. Alternatively, use the camera's specification sheet or calibration methods to adjust the camera parameters to the desired values.
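
If you only have a specification sheet, the pinhole camera matrix can be derived from the focal length, sensor size, and image resolution. A minimal sketch with placeholder values (not taken from any particular camera):

import numpy as np

# Placeholder spec-sheet values; replace with your camera's data.
focal_length_mm = 8.0              # lens focal length
sensor_width_mm, sensor_height_mm = 7.1, 5.3
width_px, height_px = 1920, 1200   # image resolution

# Focal length expressed in pixels along each image axis.
fx = focal_length_mm * width_px / sensor_width_mm
fy = focal_length_mm * height_px / sensor_height_mm

# Principal point, here assumed to lie at the image center.
cx, cy = width_px / 2, height_px / 2

camera_matrix = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])
print(camera_matrix)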

Define semantic labels

Once the camera is configured, you have to tell it which objects in the scene are of interest by assigning semantic labels. When capturing point clouds, bounding boxes, and segmentation data, only objects that have been assigned a label will appear in the captured data.

  1. Use Prims > GET /omniservice/api/v2/prims/labels to get an overview of all existing labels in the scene.
  2. To add a new label, use Prims > PUT /omniservice/api/v2/prims/labels. Both calls are sketched below.
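
Both labels calls can also be made over plain HTTP. A minimal sketch, assuming the requests package; note that the PUT body below (prim path plus label) is an assumed shape for illustration, so check the Omniservice API schema for the exact payload:

import requests

BASE = "http://localhost:8011/omniservice/api/v2"

# List all semantic labels currently defined in the scene.
labels = requests.get(f"{BASE}/prims/labels")
labels.raise_for_status()
print(labels.json())

# Assign a label to a prim. NOTE: this body shape is an assumption
# for illustration; consult the API schema for the real one.
response = requests.put(
    f"{BASE}/prims/labels",
    json={"prim_path": "/World/Sphere", "label": "sphere"},
)
response.raise_for_status()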

Make sure that the labels are correctly assigned to the objects in the scene; otherwise, the captured data might come back empty. Read more on semantic labelling.

Example usage

The exposed capture endpoints make it possible to fetch data directly from your robot code.

import asyncio

from wandelbots_isaacsim_api import (
    ApiClient,
    Configuration,
    PeripheryCameraApi,
)


async def main():
    # Connect to the Omniservice API exposed by the Wandelbots NOVA extension.
    async with ApiClient(
        Configuration(host="http://localhost:8011/omniservice/api/v2")
    ) as api_client:
        camera_api = PeripheryCameraApi(api_client)

        # List all camera prims available in the scene.
        print(await camera_api.list_camera_prims())

        # Capture 2D bounding boxes of the labeled objects as seen
        # from the given camera, rendered at 400x400 pixels.
        response = await camera_api.capture_boundingbox2d(
            result_type="json", camera_prim_path="/World/Camera", width=400, height=400
        )

        # Each entry corresponds to one labeled prim in the camera's view.
        print([bounding_box.prim_path for bounding_box in response])


if __name__ == "__main__":
    asyncio.run(main())

Example output

['/World/Camera', '/OmniverseKit_Persp', '/OmniverseKit_Front', '/OmniverseKit_Top', '/OmniverseKit_Right']
['/World/Sphere', '/World/Cube']
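
The first list is the output of list_camera_prims(); the second contains the prim paths of the labeled objects for which bounding boxes were captured.

The other supported data types are fetched the same way, each through its own capture endpoint. The sketch below is an assumption rather than confirmed API: it presumes the generated client exposes a point cloud capture method analogous to capture_boundingbox2d, and the method name capture_pointcloud is hypothetical. Check the generated client or the API reference for the actual names.

import asyncio

from wandelbots_isaacsim_api import ApiClient, Configuration, PeripheryCameraApi


async def main():
    async with ApiClient(
        Configuration(host="http://localhost:8011/omniservice/api/v2")
    ) as api_client:
        camera_api = PeripheryCameraApi(api_client)

        # HYPOTHETICAL method name, assumed analogous to
        # capture_boundingbox2d; verify against the generated client.
        point_cloud = await camera_api.capture_pointcloud(
            result_type="json", camera_prim_path="/World/Camera", width=400, height=400
        )
        print(point_cloud)


if __name__ == "__main__":
    asyncio.run(main())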