NVIDIA-Omniverse/configurator-samples

Configurator Samples

This repo houses sample scripts and snippets that show developers how they can solve specific problems when creating configurators.
They are not meant as complete solutions (if one works for you out of the box, great), but rather as a source of inspiration for your own solutions.

Refer to the End-to-End Configurator Example Guide for instructions on creating a configurator from start to finish.

Available scripts and snippets:

  • Run Variants (cache/run_variants.py) - Finds all variants and runs through them, waiting for the stage to be fully ready before each new variant is set. If a caching graph is found (an OmniGraph prim named CacheGeneration), a signal (GenerateCache) is emitted for the graph to receive and run through the authored options.
  • Create Variant Json Data (cache/create_variant_json_data.py) - Creates a json file containing all the variant sets and variants in the configurator stage. This file can then be edited to remove variants and used as input to the generate and validate cache functions, which minimizes the cache size and speeds up iteration when your assets contain more variants than your configurator project uses.
  • Copy Configurator (cache/copy_configurator.py) - Copies the configurator to another folder on the local disk. This is done to test the UJITSO cache from a different location than where it was authored.
  • Validate Log (cache/validate_log.py) - Finds UJITSO errors in a log file.
  • Generate & Validate Cache (cache/generate_validate_cache.bat) - This batch script uses the scripts above to fully automate cache generation and cache validation.
  • Optimize File (optimize_file.py) - Conforms a DELTAGEN export to Omniverse best practices
  • Visibility Switches (visibility_switches.py) - Modifies the switch variant functionality from DELTAGEN exports to visibility toggles
  • CSV Material Replacements (csv_material_replacements.py) - A data-driven material replacement workflow
  • CSV Material Variants (csv_material_variants.py) - A data-driven material variant creation workflow
  • CSV to Json (csv_to_json.py) - Option packages data generation
  • Reference Variants to Visibility (reference_variants_to_visibility.py) - Changes variants that swap reference paths into visibility switches
  • Switch Variant (switch_variant.py) - Creates visibility variants for each "switch variant"
  • Resize Textures (resize_textures.py) - Resizes images on the hard drive
  • Enable Streaming Extensions (enable_streaming_extensions.py) - Snippet showing how to load extensions via Python
  • Picking Mode (picking_mode.py) - Snippet that sets what is selectable in the viewport

Scripts

Scripts with more involved suggestions on how to solve a particular workflow problem, or how to make something more efficient or better performing.

Generate & Validate Cache

Generate and validate the UJITSO cache for your asset by running a single batch script.
(scripts/cache/generate_validate_cache.bat)
It uses the scripts below to (1) generate the cache, (2) copy the configurator to a different location, (3) run all the options with flags that log any UJITSO cache issues, and (4) parse the log and print out any issues.

To generate the cache, the Kit app is run with the following UJITSO flags, and all the variants in the file are run after it has been loaded:

  • --/UJITSO/datastore/localCachePath="{path_to_cache_directory}"
  • --/UJITSO/writeCacheWithAssetRoot="{path_to_source_directory}"

After the configurator directory has been copied elsewhere, the log gets generated by running the Kit app with the following UJITSO flags (these can be found in the generate_validate_cache.bat file) and running all the variants in the file after it has been loaded:

  • --/UJITSO/failedDepLoadingLogging=true
  • --/UJITSO/readCacheWithAssetRoot={path_to_the_copy_root_dir}
  • --/UJITSO/datastore/localCachePath={path_to_the_copy_cache_dir}

Details can be seen by editing the scripts/cache/generate_validate_cache.bat file.
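The two Kit invocations described above can be sketched as argument lists. This is a minimal sketch: the executable name, file paths, and function names are placeholders, not values taken from generate_validate_cache.bat.

```python
# Sketch only: kit_exe, usd_file, and the directory arguments are
# hypothetical placeholders for the values the batch script supplies.
def cache_generation_args(kit_exe, usd_file, cache_dir, source_dir):
    """Flags for the cache-generation pass."""
    return [
        kit_exe, usd_file,
        f'--/UJITSO/datastore/localCachePath="{cache_dir}"',
        f'--/UJITSO/writeCacheWithAssetRoot="{source_dir}"',
    ]

def cache_validation_args(kit_exe, usd_file, copy_root, copy_cache_dir):
    """Flags for the validation pass against the copied configurator."""
    return [
        kit_exe, usd_file,
        "--/UJITSO/failedDepLoadingLogging=true",
        f"--/UJITSO/readCacheWithAssetRoot={copy_root}",
        f"--/UJITSO/datastore/localCachePath={copy_cache_dir}",
    ]
```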

Generate Variant Data

Write all variants in the stage to a json file.
(scripts/cache/create_variant_json_data.py)
Finds all variants and writes them to a json file. You can then edit this file and remove variant sets, variants, and/or entire prim entries. This lets you author which options get set and subsequently cached out, which helps when you have a lot of variants that are not in use. Just make sure to use the variant data json file as input for both caching and validation, which is the default in the batch file (if the variant data file does not exist, the run variants script will skip the variant data input).
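Assuming, purely for illustration, that the json maps prim paths to variant sets and their variants (the real layout written by create_variant_json_data.py may differ), trimming the file down could look like this:

```python
import json

# Hypothetical variant data shape: prim path -> variant set -> variants.
variant_data = {
    "/World/Car": {
        "Color": ["Red", "Green", "Blue"],
        "Wheels": ["Standard", "Sport"],
    }
}

# Remove options the configurator project does not use before caching.
variant_data["/World/Car"]["Color"].remove("Green")

# Write the trimmed file to feed to the generate and validate cache steps.
with open("variants.json", "w") as f:
    json.dump(variant_data, f, indent=2)
```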

Run Variants

Run all variants in a stage, waiting for the stage to be ready between each variant being set.
(scripts/cache/run_variants.py)
Finds all variants and runs through them, waiting for the stage to be ready before each new variant is set. If a caching graph is found (an OmniGraph prim named CacheGeneration), a signal (GenerateCache) is emitted for the graph to receive and run through the authored options. This lets you author which options get run and cached out. If you have a lot of unused variants, this is another mechanism to minimize the cache size: the variant data input file can be generated with the script above, edited, and used as an input here.
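The iteration pattern can be sketched in plain Python (USD calls and the stage-readiness wait omitted); the variant_data shape here is a hypothetical prim path -> variant set -> variants mapping, not the script's actual data structure:

```python
# Visit every variant of every variant set once; in the real script each
# yielded variant would be set on the stage, followed by a wait for the
# stage to be fully ready before moving on.
def iter_variants(variant_data):
    for prim_path, variant_sets in variant_data.items():
        for set_name, variants in variant_sets.items():
            for variant in variants:
                yield prim_path, set_name, variant

combos = list(iter_variants({"/World/Car": {"Color": ["Red", "Blue"]}}))
# combos holds one (prim path, variant set, variant) tuple per option
```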

Copy Configurator

Copy the configurator to another folder on the local disk.
(scripts/cache/copy_configurator.py)
Copies the configurator from one local folder to another. Provide the collected top-level file and the root directory that you want to copy the files to. Optionally, you can provide an --overwrite flag to overwrite the target root directory if it exists.
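A stdlib sketch of the same idea (the actual script's argument handling may differ, and this function name is made up for illustration):

```python
import shutil
from pathlib import Path

def copy_configurator(top_level_file: str, target_root: str, overwrite: bool = False) -> None:
    """Copy the directory containing the collected top-level file to target_root."""
    source_root = Path(top_level_file).parent
    target = Path(target_root)
    if target.exists():
        if not overwrite:
            raise FileExistsError(f"{target} exists; pass overwrite=True to replace it")
        shutil.rmtree(target)  # mirrors the --overwrite flag
    shutil.copytree(source_root, target)
```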

Validate Log

Find UJITSO errors in log file.
(scripts/cache/validate_log.py)
Provide the log path and the script will extract any UJITSO errors from the log file.
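The filtering could look something like this; the exact message format the real script matches is an assumption here, as is the function name:

```python
import re

# Assumed error shape: a log line that mentions UJITSO and also contains
# "error" in any casing. validate_log.py may match lines differently.
def find_ujitso_errors(log_text: str) -> list:
    error = re.compile(r"error", re.IGNORECASE)
    return [
        line for line in log_text.splitlines()
        if "UJITSO" in line and error.search(line)
    ]
```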

Optimize File

Conform a DELTAGEN export to Omniverse best practices
(scripts/deltagen/optimize_file.py)
Execute this script on the top-level USD (the script will modify all dependencies in place).
The USD files exported from DELTAGEN will be conformed to OV standards by reparenting all layers under a "/World" root primitive, updating all asset paths to UNIX format, and setting the root primitive as the default prim.

Visibility Switches

Modify the switch variant functionality from DELTAGEN exports to visibility toggles
(scripts/deltagen/visibility_switches.py)
This script will modify the behavior of switch variants from DELTAGEN USD exports so that they become visibility toggles. Execute this script directly on an opened stage that contains switch variants. For DELTAGEN exports, this is the 'model' export that contains geometry.

CSV Material Replacements

Data driven material replacement workflow
(scripts/csv_material_replacements.py)
Replace materials with other materials in a repeatable way. The use case this script was created for was replacing USD Preview Surface materials coming out of DELTAGEN with MDL materials from the Omniverse Automotive library.

NOTE: You could export mdl materials that are custom built for your specific setup and replace with that library.

Example Table (columns: source, target, new_instance, modifications, material_name):

  • source: /World/Mustang_Stellar_materials/rubber_black_semigloss
    target: https://omniverse-content-production.s3.us-west-2.amazonaws.com/Materials/2023_2_1/Automotive/Pristine/Tires/Pristine_Tire_Rubber_Clean.mdl
    modifications: {"inputs:diffuse_reflection_color":(0.2, 0.2, 0.2)}
  • source: /World/Mustang_Stellar_materials/V_carpaint
    target: https://omniverse-content-production.s3.us-west-2.amazonaws.com/Materials/2023_2_1/Automotive/Pristine/Carpaint/Carpaint_05.mdl
    modifications: {"inputs:enable_flakes":0}
    material_name: Carpaint_Body
  • source: /World/Mustang_Stellar_materials/white_rubber
    target: https://omniverse-content-production.s3.us-west-2.amazonaws.com/Materials/2023_2_1/Automotive/Pristine/Tires/Pristine_Tire_Rubber_Clean.mdl
    new_instance: TRUE
    modifications: {"inputs:enable_layer_1":1}

Code Reference

Write - def write(csv_file_path: str) -> None:

Writes a csv file with all the material prim paths in the open stage. If the file already exists, only materials that do not already exist in the file will be added, and any replacements and modifications are retained.

TIP: Because of the function's additive nature, you can build a project wide material replacement csv file that can be applied to many stages.

Create Material Library - def create_material_library(csv_file_path: str) -> None:

Parses the csv file and creates the materials that will be used to replace the original materials written out in the first step.
This is useful because it gives us a clean material USD file that we can also run the variant creation on. It can be sub-layered into a file, and the material replacement function will then use the existing materials instead of also creating them, which is another option.

Reads the csv file and creates materials under the MATERIAL_ROOT_PATH defined at the top of the script file.
Defaults to MATERIAL_ROOT_PATH = '/World/Looks'
Optionally, it will create a new shader even if the shader already exists (otherwise it will automatically re-use it).
Example from the csv "new_instance" column - TRUE

Optionally, it will also apply modifications to the new shaders if a dict with property names and values is encoded.
Example from the csv "modifications" column - {"inputs:coat_color":(0.0, 0.0, 0.0), "inputs:enable_flakes":0}

NOTE: make sure that the dictionary is encoded correctly - the string is evaluated into a dictionary and an error will be reported if the syntax is incorrect.
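Since the string is evaluated into a dictionary, a safe way to perform that evaluation is ast.literal_eval, which only accepts Python literals. Whether the actual script uses it is an assumption, and parse_modifications is a made-up name:

```python
import ast

def parse_modifications(cell: str) -> dict:
    """Parse a 'modifications' csv cell such as
    '{"inputs:coat_color":(0.0, 0.0, 0.0), "inputs:enable_flakes":0}'."""
    if not cell.strip():
        return {}  # empty cell: no modifications
    try:
        # literal_eval rejects anything that is not a plain Python literal,
        # so malformed syntax raises instead of executing arbitrary code.
        value = ast.literal_eval(cell)
    except (ValueError, SyntaxError) as exc:
        raise ValueError(f"Bad modifications cell: {cell!r}") from exc
    if not isinstance(value, dict):
        raise ValueError(f"Expected a dictionary, got: {cell!r}")
    return value
```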
TIP: If you are going to create variants for certain properties on the shader, avoid creating a local opinion on that property at this step.

Optionally, you can also provide a new name for the shader. There is no need to flag "new_instance" when you do this the first time, only if you want to keep the same provided base name and create a new instance for modification reasons.
Example from the csv "shader_name" column - Glass_Reflectors_Base_Dark_Yellow

TIP: Make a new sub-layer and set it to "current authoring layer" before running this script. This will put the material replacements and the materials in a new usd file that you can add in at any point in your project setup.

Read Replace - def read_replace(csv_file_path: str) -> None:

Reads the csv file and replaces all materials in the current stage that have a replacement encoded.
If the replacement material does not exist in the current stage, it will be created.
Optionally, it will create a new shader even if the shader already exists (otherwise it will automatically re-use it).
Example from the csv "new_instance" column - TRUE

Optionally, it will also apply modifications to the new shaders if a dict with property names and values is encoded.
Example from the csv "modifications" column - {"inputs:coat_color":(0.0, 0.0, 0.0), "inputs:enable_flakes":0}

NOTE: make sure that the dictionary is encoded correctly - the string is evaluated into a dictionary and an error will be reported if the syntax is incorrect.

Optionally, you can also provide a new name for the shader. There is no need to flag "new_instance" when you do this the first time, only if you want to keep the same provided base name and create a new instance for modification reasons. Example from the csv "shader_name" column - Glass_Reflectors_Base_Dark_Yellow

TIP: Make a new sub-layer and set it to "current authoring layer" before running this script. This will put the material replacements in a new usd file that you can add in at any point in your project setup. If you created a material library, you can add that in as a sub layer before running this and the materials from that sub layer will be used.

CSV Material Variants

Data driven material variant creation
(scripts/csv_material_variants.py)
A script that can create material variants from csv data.

Code Reference

Create Variants - def create_variants(csv_file_path: str) -> None:

Reads the csv file and creates variants.

The csv file is encoded like this (with example data):

  • column: material_prim_path - value: /World/Looks/Carpaint_05 - The path to the material to add the variant to.
  • column: variant_name - value: black - The variant set name to add.
  • column: variant_values - value: {"inputs:diffuse_reflection_color":(0.023102053, 0.023102075, 0.023102283), "inputs:coat_color":(0, 0, 0)} - The variant data to encode in the form of a dictionary.

NOTE: make sure that the dictionary is encoded correctly - the string is evaluated into a dictionary and an error will be reported if the syntax is incorrect.

CSV to Json

Package info generation
(scripts/csv_to_json.py)
A script that will convert csv files containing configurator package information to json for use by a React application. The csv files define what data is needed and how it needs to be structured in order to be converted to a usable json format. This script requires two csv files - an "Options" csv and a "Packages" csv.

Options CSV

Contains every possible option from the stage. These options need to be referenced by the Packages csv.
Here are the rules for how this file is structured:

  1. The first row is a header, and therefore shouldn’t contain any options. Although the columns have distinct meanings, what’s entered into this row won't impact the resulting json output.
  2. Column 1 should contain a unique ID for the option (NOTE: each row represents an option).
  3. Column 2 should contain a prim path.
  4. Column 3 should contain a variant set. This variant set should exist on the prim from column 2.
  5. Column 4 should optionally contain a label for the variant set. If a label isn’t needed, this column still needs to exist, but the values can be empty.
  6. Column 5 should contain a variant. This variant should be an available option on the variant set from column 3.
  7. Column 6 should optionally contain a label for the variant. If a label isn’t needed, this column still needs to exist, but the values can be empty.
  8. Column 7 should contain an optional graphic.
  9. Column 8 should contain an optional event.

Example Table:

id Prim Path Variant Set Option / Category (Optional) Variant Value Display Name (Optional) Graphic (Optional) Event (Optional)
0 /World/Prim Color Color Red Red
1 /World/Prim Color Color Green Green
2 /World/Prim Color Color Blue Blue
3 /World/Prim Mesh Mesh Cube Cube
4 /World/Prim Mesh Mesh Sphere Sphere
5 /World/Prim Mesh Mesh Cylinder Cylinder

Packages CSV

Contains groupings of one or more option references (from the Options csv file), which together form a 'package'. A name for the package is required, followed by an arbitrary number of IDs from the Options csv to associate with that package. Here are the rules for how this file is structured:

  1. The first row is a header and shouldn’t contain any options. Although the columns have distinct meanings, what’s entered into this row won't impact the resulting json output.
  2. Column 1 should contain a unique ID for the Package
  3. Column 2 should contain a unique name for the Package.
  4. Columns 3 and onward should contain an Id from the Options csv. Any number of columns may be added.

Example Table:

Id Name Color Mesh
0 Red Cube 0 3
1 Green Sphere 1 4
2 Blue Cylinder 2 5
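How the two files could combine into json-ready data can be sketched with the stdlib csv module. The column names and output layout below are illustrative, not necessarily what csv_to_json.py emits:

```python
import csv
import io

# Hypothetical miniature versions of the two csv files.
options_csv = (
    "id,prim_path,variant_set,set_label,variant,variant_label,graphic,event\n"
    "0,/World/Prim,Color,Color,Red,Red,,\n"
    "3,/World/Prim,Mesh,Mesh,Cube,Cube,,\n"
)
packages_csv = (
    "Id,Name,Color,Mesh\n"
    "0,Red Cube,0,3\n"
)

# Index options by their unique id (column 1 of the Options csv).
options = {row["id"]: row for row in csv.DictReader(io.StringIO(options_csv))}

# Each package row: id, name, then any number of option ids.
packages = []
reader = csv.reader(io.StringIO(packages_csv))
next(reader)  # skip the header row; its contents are ignored (rule 1)
for pkg_id, name, *option_ids in reader:
    packages.append({
        "name": name,
        "options": [options[i] for i in option_ids if i],
    })
```

Each package ends up carrying its name plus the full option records it references, which is the kind of shape a React front end can consume directly.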

Code Reference

Create Json - def create_json(options_csv: str, packages_csv: str, output_path: str) -> None:

Creates and writes to disk the json containing package information for use by a React application.

Get Packages Json - def get_packages_json(package_info: dict, include_id: bool = False) -> dict:

Creates the data that's used for the packages JSON file

Get Packages with Options - def get_packages_with_options(options: dict, packages: dict) -> dict:

Returns a dictionary that combines Options csv data with Packages csv data

Get Raw Packages - def get_raw_packages(packages_csv_path: str) -> dict:

Returns a dictionary containing the values from a Packages csv

Get Raw Options - def get_raw_options(options_csv_path: str) -> dict:

Returns a dictionary containing the values from an Options csv

Reference Variants to Visibility

Change variants that swap reference path to visibility switch
(scripts/reference_variants_to_visibility.py)
Finds all variants that add a payload or reference to a prim and moves them directly onto a new child prim.
It then modifies the variant to toggle visibility on for the prim that holds the reference, and visibility off for the prims that hold the remaining references.
Optionally converts payloads to references, or references to payloads.

Code Reference

Convert - def convert(convert_payloads_to_refs: bool=False, convert_references_to_payloads: bool=False) -> None:

Script's entry point. Accepts two Booleans for converting payloads to references and references to payloads.

Move References - def _move_references(layer: Sdf.Layer, prim: Sdf.PrimSpec) -> None:

Creates child prims for each variant and moves references & payloads to them.

Move References Iter - def _move_references_iter(layer: Sdf.Layer, vSpec: Sdf.PrimSpec) -> None:

Contains the main logic for moving references and payloads to a new prim.

Set Visibility - def _set_visibility() -> None:

Applies the visibility toggling behavior for variants.

Switch Variant

Create visibility variants for each switch variant
(scripts/switch_variant.py)
This module will build switch variants using existing prims in a stage. A "switch variant" is a term used in certain apps such as DELTAGEN for a variant that toggles on the visibility of one prim within a set and toggles off the visibility of the other prims in the set.
When executed, a new variant set will be added to all prims that have matching names with the provided switch_prims list.
The variant set will contain one visibility toggle variant for each child prim.

Code Reference

Create - def create(switch_prims: List[str] = ["Switch"], new_variant_set: str = "switchVariant") -> None:

Script's entry point. Receives a list of prim names to perform this operation against and a variant set name to be created on those prims.

Resize Textures

Resize images on hard drive
(scripts/resize_textures.py)
Resizes texture files on the hard drive. Made to act on a published configurator folder on your local hard drive to optimize configurator performance before packaging it for GDN.

Code Reference

Get Files Of Type - def get_files_of_type(root_folder: str, file_extension: str) -> List[str]:

At the bottom of the script, modify the target_root directory to the directory you want the script to act on. The files list uses a function that finds all files of a given type - the default is png files.

Print Info - def print_info(image_files: List[str], max_size: int = 2048, include_square: bool = True, include_non_square: bool = True, inform_single_color_images: bool = False, single_color_image_max_size: int = 128):

Scouting function that prints to the console all files larger than the passed-in max size argument, as well as files with a bit depth higher than 8.
(Optional) Filter out square or non square images.
(Optional) Detect and report single color images over the passed in size (defaults to 128).

Down Res - def down_res(image_files: List[str], max_size: int = 1024, enforce_8_bit_depth: bool = False, include_square: bool = True, include_non_square: bool = True, enforce_single_color_image_size: bool = False, single_color_image_max_size: int = 128):

Down res files to the passed in max size integer.
(Optional) Enforce 8 bit images.
(Optional) Filter out square or non square images.
(Optional) Detect and down res single color images.
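For the size scouting, image dimensions can even be read without an imaging library. The sketch below handles only PNG (the default file type); the actual script most likely uses a full imaging library instead, and these function names are made up:

```python
import struct
from pathlib import Path

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(path: str):
    """Read width/height from a PNG's IHDR chunk without decoding pixels."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != PNG_SIGNATURE or header[12:16] != b"IHDR":
        raise ValueError(f"{path} is not a PNG")
    return struct.unpack(">II", header[16:24])  # big-endian width, height

def find_oversized(root_folder: str, max_size: int = 2048):
    """Report png files whose longest edge exceeds max_size."""
    return [
        str(p) for p in Path(root_folder).rglob("*.png")
        if max(png_dimensions(str(p))) > max_size
    ]
```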

Snippets

Bite sized code snippets to show you how to achieve a specific task.

Enable Streaming Extensions

(snippets/enable_streaming_extensions.py)
Demonstrates how to load the streaming extensions via Python. Just change the extension list to modify this to load any extensions you like via Python.

import omni.kit.app

manager = omni.kit.app.get_app().get_extension_manager()
extensions = ['omni.kit.livestream.messaging', 'omni.kit.livestream.webrtc.setup', 'omni.services.streamclient.webrtc']
for extension in extensions:
    manager.set_extension_enabled_immediate(extension, True)

Picking Mode

(snippets/picking_mode.py)
Demonstrates how to set what is selectable via mouse click within a kit viewport. When executed, only the prims associated with the supplied prim type(s) will be selectable via mouse click in the viewport.

import carb

ALL = "type:ALL"
MESH = "type:Mesh"
CAMERA = "type:Camera" 
LIGHTS = "type:CylinderLight;type:DiskLight;type:DistantLight;type:DomeLight;type:GeometryLight;type:Light;type:RectLight;type:SphereLight"

if __name__ == "__main__":
    # change MESH to ALL, CAMERA or LIGHTS for those prim types to be selectable
    carb.settings.get_settings().set("/persistent/app/viewport/pickingMode", MESH)
